Transactions on Intelligent Welding Manufacturing: Volume III No. 4 2019
ISBN 9813365013, 9789813365018

The primary aim of this volume is to provide researchers and engineers from both academia and industry with up-to-date c


English | 213 pages [203] | 2021



Table of contents:
Editorials
Contents
Feature Articles
Multi-layer Multi-pass Welding of Medium Thickness Plate: Technologies, Advances and Future Prospects
1 Introduction
2 MLMP Welding Techniques
3 MLMP Welding Simulation Analysis
4 MLMP Path Planning
4.1 Weld Seam Feature Extraction
4.2 Normal Weld Path Planning
4.3 Special Weld Path Planning
4.4 Database-Based Planning
5 Weld Seam Tracking
5.1 Vision Sensing Technique
5.2 Arc Sensing Technique
6 Industrial Application
7 Conclusions and Future Prospect
7.1 Sensing Technique Application
7.2 Emerging Techniques
References
A Review: Application Research of Intelligent 3D Detection Technology Based on Linear-Structured Light
1 Introduction
2 Line-Structured Light Vision Sensing Technology
3 Applied Research of Industrial Applications
3.1 Industrial Field Calibration
3.2 Object Detection and Location
3.3 Profile Measurement
3.4 Three-Dimensional Reconstruction
4 Conclusion
References
Research Papers
Acoustic Emission-Based Weld Crack In-situ Detection and Location Using WT-TDOA
1 Introduction
2 Basic Methods
2.1 Wavelet Transform (WT)
2.2 TDOA
3 Proposed Method
3.1 Experimental Platform for Data Acquisition
3.2 Attenuation Characteristic Analyzing of AE Signals
3.3 Noise Reduction of AE Signals
3.4 Location of AE Signal Source
4 Experiments and Results
4.1 Attenuation Characteristic Analysis
4.2 Denoising of AE Signals
4.3 AE Signal Source Location
5 Conclusion
References
The Research of Real-Time Welding Quality Detection via Visual Sensor for MIG Welding Process
1 Introduction
2 Experimental Establishment
2.1 Weld Locating and Tracking Sensor
2.2 Tracking Process
2.3 Robot Communication
3 Image Processing
3.1 Welding Seam Feature Extraction
3.2 Welding Pool Feature Extraction
4 The Establishment of Predicted Model
5 Conclusion
References
A Weld Bead Profile Extraction Method Based on Scanning Monocular Stereo Vision for Multi-layer Multi-pass Welding on Mid-thick Plate
1 Introduction
2 System Configuration
3 Point Cloud Reconstruction and Weld Bead Profile Extraction
3.1 Scanning and Image Acquisition
3.2 Image Processing and Point Could Reconstruction
3.3 Point Cloud Data Processing
4 Experiment and Result
5 Conclusion
References
The Intelligent Methodology for Monitoring the Dynamic Welding Quality Using Visual and Audio Sensor
1 Introduction
2 Experiment Design
3 Arc Sound Processing
3.1 Preprocessing
3.2 Welding Sound Feature Extraction
4 Visual Image Processing
4.1 Groove Detection
4.2 Welding Pool Detection
5 Conclusion
References
Convolutional Neural Network Prediction of Aluminum Alloy GTAW Penetration Process Based on Arc Sound Sensing
1 Introduction
2 Architecture of Welding Online Monitoring Platform
2.1 Formatting of Welding Monitoring Information
2.2 Web-based Industrial Cloud Platform
3 Sound Feature Extraction
3.1 Sound Signal Preprocessing
3.2 Sound Signal Denoising Processing
3.3 Arc Sound Signal Feature Extraction
3.4 Penetration State Modeling
4 Analysis of Experimental Results and Recognition of Penetration Status
5 Conclusion
References
Identification and Penetration Prediction of Aluminum Alloy GTAW Pool Based on Network Vision Monitoring
1 Introduction
2 Experiment and Setup
2.1 Experiments Design
3 Extract the Contours of the Front Molten Pool and the Characteristic Parameters of the Front Molten Pool Based on the Advanced Two-tier Cascade Regression Algorithm
3.1 Extraction of Feature Points Around Molten Pool Based on Advanced Two-Tier Cascade Regression Algorithm
3.2 Extraction of Geometric Parameters of the Front Molten Pool
4 Models to Predict the Backside Width of Weld Pool
5 Welding Penetration Identification Model
6 Conclusion
References
Research on Welding Transient Deformation Monitoring Technology Based on Non-contact Sensor Technology
1 Introduction
2 Welding Deformation Detection Technology
2.1 Non-contact Transient Deformation Detection
2.2 Contact Transient Deformation Detection Method
3 Welding Deformation Control Method
4 Conclusion
References
Binocular Stereo Vision and Modified DBSCAN on Point Clouds for Single Leaf Segmentation
1 Introduction
2 Sparse Plant Point Clouds Acquisition
2.1 Overview of the Whole Procedure to Obtain Plant Point Clouds
2.2 Quasi-Euclidean Stereo Rectification
2.3 Binocular Stereo Matching
3 Point Clouds Segmentation
3.1 Tensor Voting
3.2 Denoising Process
3.3 Affinity Matrix
3.4 Modified DBSCAN
3.5 2D Image Segmentation
4 Experiment and Discussion
4.1 Simulation of Manifolds Segmentation
4.2 Experiments on Real Sceneries
5 Conclusion
References
Short Papers and Technical Notes
Teaching-Free Intelligent Robotic Welding of Heterocyclic Medium and Thick Plates Based on Vision
1 Introduction
2 Characteristics of Lifting Lug
3 Composition of the Intelligent Robotic Welding System of the Lifting Lug
4 Intelligent Robotic Welding of Lifting Lug Based on Compound Visual Sensing
5 Conclusion
References
In-Process Visual Monitoring of Penetration State in Nuclear Steel Pipe Welding
1 Introduction
2 Orbital Pipe Welding Penetration Monitoring System
3 Welding Penetration Monitoring Software
3.1 Functions of Welding Monitoring System
4 Result and Discussion
5 Conclusions
References
Information for Authors
Aims and Scopes
Submission
Style of Manuscripts
Format of Manuscripts
Originality and Copyright
Author Index


Transactions on Intelligent Welding Manufacturing Volume III No. 4 2019

Transactions on Intelligent Welding Manufacturing Editors-in-Chief Shanben Chen Shanghai Jiao Tong University Shanghai, China

Yuming Zhang Department of Electrical and Computer Engineering University of Kentucky Lexington, KY, USA

Zhili Feng Oak Ridge National Laboratory Oak Ridge, TN, USA

Honorary Editors G. Cook, USA K. L. Moore, USA Ji-Luan Pan, PRC S. A. David, USA

S. J. Na, KOR Lin Wu, PRC Y. Hirata, JAP J. Norrish, AUS

T. Lienert, USA T. J. Tarn, USA

X. Q. Chen, NZL

D. Du, PRC X. D. Jiao, PRC

Guest Editors H. P. Chen, USA J. C. Feng, PRC H. J. Li, AUS

D. Hong, USA W. Zhou, SGP

D. Fan, PRC I. Lopez-Juarez, MEX

Regional Editors Australia: Z. X. Pan, AUS Europe: S. Konovalov, RUS

Asia: L. X. Zhang, PRC America: Y. K. Liu, USA

Associate Editors Q. X. Cao, PRC B. H. Chang, PRC J. Chen, USA H. B. Chen, PRC S. J. Chen, PRC X. Z. Chen, PRC A.-K. Christiansson, SWE Z. G. Li, PRC X. M. Hua, PRC

Y. Huang, USA S. Konovalov, RUS W. H. Li, PRC X. R. Li, USA Y. K. Liu, USA L. M. Liu, PRC H. Lu, PRC Z. Luo, PRC G. H. Ma, PRC

Pedro Neto, PRT G. Panoutsos, UK Z. X. Pan, AUS X. D. Peng, NL Y. Shi, PRC J. Wu, USA J. X. Xue, PRC L. J. Yang, PRC M. Wang, PRC

S. Wang, PRC X. W. Wang, PRC Z. Z. Wang, PRC G. J. Zhang, PRC H. Zhang, B, PRC H. Zhang, N, PRC L. X. Zhang, PRC W. J. Zhang, USA

S. L. Wang, PRC J. Xiao, PRC J. J. Xu, PRC Y. L. Xu, PRC C. Yu, PRC

H. W. Yu, PRC K. Zhang, PRC W. Z. Zhang, PRC Z. F. Zhang, PRC

Academic Assistant Editors J. Cao, PRC B. Chen, PRC Y. Luo, PRC N. Lv, PRC F. Li, PRC

S. B. Lin, PRC Y. Shao, USA Y. Tao, PRC J. J. Wang, PRC H. Y. Wang, PRC

Editorial Staff
Executive Editor (Manuscript and Publication): Dr. Yan Zhang, PRC
Responsible Editors (Academic and Technical): Dr. Na Lv, PRC; Dr. Jing Wu, USA

More information about this series at http://www.springer.com/series/15698

Shanben Chen · Yuming Zhang · Zhili Feng

Editors

Transactions on Intelligent Welding Manufacturing Volume III No. 4 2019

Editors Shanben Chen Shanghai Jiao Tong University Shanghai, China Zhili Feng Oak Ridge National Laboratory Oak Ridge, TN, USA

Yuming Zhang Department of Electrical and Computer Engineering University of Kentucky Lexington, KY, USA

ISSN 2520-8519 ISSN 2520-8527 (electronic) Transactions on Intelligent Welding Manufacturing ISBN 978-981-33-6501-8 ISBN 978-981-33-6502-5 (eBook) https://doi.org/10.1007/978-981-33-6502-5 © Springer Nature Singapore Pte Ltd. 2021 This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd. The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore

Editorials

This issue of the Transactions on Intelligent Welding Manufacturing (TIWM) is a collection selected in part from the high-quality contributions recommended by “The 2019 International Workshop on Intelligentized Welding Manufacturing (IWIWM’2019).” It includes two feature articles, eight research papers and two short papers. The first feature article in this issue, “Multi-layer Multi-pass Welding of Medium Thickness Plate: Technologies, Advances and Future Prospects”, is contributed by Fengjing Xu, Shanben Chen and Yanling Xu from Shanghai Jiao Tong University. This paper discusses in detail the research status of multi-layer multi-pass (MLMP) welding of medium thickness plates and the welding simulation process. Novel feature extraction methods based on different sensing techniques are also summarized. The second feature article, “A Review: Application Research of Intelligent 3D Detection Technology Based on Linear-Structured Light”, is contributed by Shaojie Chen, Wei Tao, Hui Zhao and Na Lv, from Shanghai Jiao Tong University. This paper analyzes the latest applications and research of line-structured light sensing technology in the industrial contexts of workpiece detection and positioning, geometric profile measurement, and three-dimensional reconstruction. The first research paper, “Acoustic Emission-Based Weld Crack In-situ Detection and Location Using WT-TDOA”, is contributed by Zhifen Zhang, Rui Qin and Guangrui Wen, all from Xi’an Jiao Tong University. This paper proposes a time difference of arrival (TDOA) method based on the wavelet transform (WT) to enable the accurate location of weld cracks at the beginning of their appearance. The results of the study show that TDOA based on the wavelet transform (WT-TDOA) significantly outperformed conventional TDOA in terms of location accuracy. The second research paper is entitled “The Research of Real-Time Welding Quality Detection via Visual Sensor for MIG Welding Process”.
It is a contribution from a research team at Beibu Gulf University. A real-time monitoring system based on visual sensing technology is proposed, aimed at online monitoring of weld quality during the welding process. Based on the ROI visual attention mechanism, images of the front and back of the weld pool are extracted. A random forest fusion model based on weld parameters and image features is constructed, which realizes the classification of the weld penetration status and the regression prediction of the weld backside penetration width. The third research paper, titled “A Weld Bead Profile Extraction Method Based on Scanning Monocular Stereo Vision for Multi-layer Multi-pass Welding on Mid-thick Plate”, is contributed by a research team from Shanghai Jiao Tong University. This paper studies multi-layer multi-pass welding (MLMPW); scanning monocular stereo vision is used to reconstruct the weld bead. Through slicing and filtering of the point cloud data, the profile of the weld bead surface is obtained, which provides a solid foundation for MLMP path planning and its online correction. The fourth research paper, “The Intelligent Methodology for Monitoring the Dynamic Welding Quality Using Visual and Audio Sensor”, is co-authored by Zhiqiang Feng, Ziquan Jiao and Junfeng Han, affiliated with Beibu Gulf University, China. In this paper, a real-time welding quality prediction scheme based on multi-information fusion is proposed. The results show that arc sound and visual information complement each other and can be effectively combined to achieve adequate online welding quality monitoring. The fifth research paper, “Convolutional Neural Network Prediction of Aluminum Alloy GTAW Penetration Process Based on Arc Sound Sensing”, is contributed by a research team from Shanghai Jiao Tong University. This paper uses the industrial Internet of Things (IoT) to design a technical solution that uploads the data collected during the welding process to the cloud for storage and allows the welding process to be monitored remotely in real time through a browser. A weld penetration state classification model based on convolutional neural networks is also established.
The sixth research paper, “Identification and Penetration Prediction of Aluminum Alloy GTAW Pool Based on Network Vision Monitoring”, is a contribution from Shanghai Jiao Tong University. Weld penetration detection based on weld pool images in tungsten inert gas (TIG) welding has been a hot research topic in industry and academia. In this paper, a prediction model of the melt width on the back of the aluminum alloy TIG weld pool and a classification model of the aluminum alloy TIG welding state are constructed, and XGBoost-based models are used for real-time prediction of the backside bead width. The seventh research paper in the collection, “Research on Welding Transient Deformation Monitoring Technology Based on Non-contact Sensor Technology”, is contributed by researchers at Beibu Gulf University. This paper analyzes research on the detection of welding deformation using different sensing technologies. The results show that each sensing technology performs well in its own application environment: non-contact detection is becoming more intelligent and more accurate, while contact detection is more reliable and more widely used in industry. The last research paper, “Binocular Stereo Vision and Modified DBSCAN on Point Clouds for Single Leaf Segmentation”, is a contribution from Shanghai Jiao Tong University. In this paper, a modified density-based spatial clustering of applications with noise (DBSCAN) algorithm, based on a newly defined distance metric, is used to cluster the refined point clouds. In the category of short papers, “Teaching-Free Intelligent Robotic Welding of Heterocyclic Medium and Thick Plates Based on Vision” is from Zhejiang Normal University. It provides technical support for the “intelligent manufacturing” upgrade of China’s high-end marine engineering equipment and has important value in engineering promotion. “In-Process Visual Monitoring of Penetration State in Nuclear Steel Pipe Welding”, from Shanghai Jiao Tong University, develops a pipe inner-inspection robot equipped with a CMOS sensor and a laser scanner. The results show that this monitoring system has an important impact on the quality control of the all-position pipe welding process. This issue of TIWM presents new perspectives and developments in the field of intelligent welding research, as well as topics related to the IWIWM’2019 Conference. We hope that the publication of this issue will give readers new inspiration.

Prof. Yuming Zhang
TIWM Editor-in-Chief
University of Kentucky
Lexington, KY, USA
[email protected]

Contents

Feature Articles

Multi-layer Multi-pass Welding of Medium Thickness Plate: Technologies, Advances and Future Prospects . . . . . 3
Fengjing Xu, Runquan Xiao, Zhen Hou, Yanling Xu, Huajun Zhang, and Shanben Chen

A Review: Application Research of Intelligent 3D Detection Technology Based on Linear-Structured Light . . . . . 35
Shaojie Chen, Wei Tao, Hui Zhao, and Na Lv

Research Papers

Acoustic Emission-Based Weld Crack In-situ Detection and Location Using WT-TDOA . . . . . 49
Zhifen Zhang, Rui Qin, Yujiao Yuan, Wenjing Ren, Zhe Yang, and Guangrui Wen

The Research of Real-Time Welding Quality Detection via Visual Sensor for MIG Welding Process . . . . . 75
Junfeng Han, Zhiqiang Feng, Ziquan Jiao, and Xiangxi Han

A Weld Bead Profile Extraction Method Based on Scanning Monocular Stereo Vision for Multi-layer Multi-pass Welding on Mid-thick Plate . . . . . 87
Zhen Hou, Yanling Xu, Runquan Xiao, and Shanben Chen

The Intelligent Methodology for Monitoring the Dynamic Welding Quality Using Visual and Audio Sensor . . . . . 99
Zhiqiang Feng, Ziquan Jiao, Junfeng Han, and Weiming Huang

Convolutional Neural Network Prediction of Aluminum Alloy GTAW Penetration Process Based on Arc Sound Sensing . . . . . 115
Zisheng Jiang, Chao Chen, Shanben Chen, and Na Lv

Identification and Penetration Prediction of Aluminum Alloy GTAW Pool Based on Network Vision Monitoring . . . . . 131
YiLei Luo, Chao Chen, ZiSheng Jiang, and Shanben Chen

Research on Welding Transient Deformation Monitoring Technology Based on Non-contact Sensor Technology . . . . . 149
Ziquan Jiao, Zhiqiang Feng, Junfeng Han, and Weiming Huang

Binocular Stereo Vision and Modified DBSCAN on Point Clouds for Single Leaf Segmentation . . . . . 163
Chengyu Tao, Na Lv, and Shanben Chen

Short Papers and Technical Notes

Teaching-Free Intelligent Robotic Welding of Heterocyclic Medium and Thick Plates Based on Vision . . . . . 183
Hu Lan, Huajun Zhang, Jun Fu, Libin Gao, and Liang Wei

In-Process Visual Monitoring of Penetration State in Nuclear Steel Pipe Welding . . . . . 193
Liangrui Wang, Shu’ang Wang, Weihua Liu, Yuefeng Chen, and Huabin Chen

Information for Authors . . . . . 201

Author Index . . . . . 203

Feature Articles

Multi-layer Multi-pass Welding of Medium Thickness Plate: Technologies, Advances and Future Prospects

Fengjing Xu, Runquan Xiao, Zhen Hou, Yanling Xu, Huajun Zhang, and Shanben Chen

Abstract Multi-layer multi-pass (MLMP) welding of medium thickness plate has long been both a focus and a difficulty of research and application. It is widely used in shipbuilding and pressure vessel manufacturing. This paper summarizes the research developments of MLMP welding in recent years, covering welding technologies, advances and future prospects. On the welding technology side, modified welding techniques such as ASDAW are included, and the welding simulation process is discussed in detail. On the automation side, novel feature extraction methods based on different sensing techniques are summarized, and emphasis is laid on recent research on path planning models and seam tracking techniques. At the end of this paper, a summary is made, future prospects for the application of advanced technology in intelligent robotic MLMP welding are given, and promising development directions of MLMP in intelligent robotic welding of medium thickness plate are discussed.

Keywords Multi-layer multi-pass · Medium thickness plate · Intelligent robotic welding · Research developments · Welding techniques · Path planning · Seam tracking · Sensing technology

F. Xu · R. Xiao · Z. Hou · Y. Xu (B) · H. Zhang · S. Chen (B)
Intelligentized Robotic Welding Technology Laboratory, School of Materials Science and Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
e-mail: [email protected]

S. Chen
e-mail: [email protected]

Y. Xu · H. Zhang · S. Chen
Shanghai Key Laboratory of Materials Laser Processing and Modification, School of Materials Science and Engineering, Shanghai Jiao Tong University, Shanghai 200240, China

© Springer Nature Singapore Pte Ltd. 2021
S. Chen et al. (eds.), Transactions on Intelligent Welding Manufacturing, Transactions on Intelligent Welding Manufacturing, https://doi.org/10.1007/978-981-33-6502-5_1

1 Introduction

Nowadays, in order to realize the reliable connection of large components with large size and high thickness, two main welding approaches are implemented. The first is swing-arc filling welding, which is used in the medium and thick plate welding process. The second, and more commonly used, approach is multi-layer multi-pass welding; a typical MLMP weld is shown in Fig. 1. MLMP welding stands out for its low heat input, high efficiency and stable welding quality. In terms of metallurgy, it has advantages in thermal cycling and macroscopic segregation. On the one hand, the repeated long-range and short-range thermal cycles of multi-layer welding help refine the grains, prevent quenching and improve performance. On the other hand, compared with single-pass welding, MLMP involves multiple welding passes, which distributes phase segregation more evenly. To some degree, this has the potential to restrain thermal cracks and inclusions. However, some challenges remain to be solved in the MLMP welding process. In terms of welding techniques, the problems of low efficiency and deformation accumulation are prominent. Multiple welding passes and layers lead to low welding efficiency. To realize a high-efficiency welding process, two remedies have been proposed: reducing the amount of filler metal and increasing the metal deposition rate. To this end, a novel automatic DSAW method with two different welding robots was proposed to improve welding efficiency [1]. To further reduce the complexity of changing power sources and torches, Zhang [2] introduced the ASDAW method. Miao [3] proposed a BC-DSAW method to solve the problem of collapse of the molten weld pool; its diagram is shown in Fig. 2. Another major problem is weld deformation caused by the accumulation of welding heat input, the main type being angular distortion. In order to reduce welding deformation, the heat input of the welding process should be reduced and the welding fixture setting needs to be optimized.
Many studies have analysed the temperature field and residual stress of the weld seam to achieve less angular distortion [4, 5]. Another challenge is the application of multi-layer multi-pass welding in industrial automation. For thin and medium plates, the online teaching strategy is adopted with good performance. Currently, in the field of multi-layer multi-pass welding, teaching-and-playback robots are usually used. Once minor changes occur during the welding process, re-teaching is required. This method is therefore not only time-consuming and inefficient, but also prone to error accumulation. Thus, the introduction of an automatic multi-pass welding system is a good approach for promoting welding efficiency and quality.

Fig. 1 Multi-layer multi-pass weld profile

To realize reliable MLMP welding, the application of sensors is indispensable. Sensors observe the welding environment by capturing multiple types of information during the welding process. Based on this, the actual welding condition can be reconstructed as a basis for realizing robotic intelligent welding, as shown in Fig. 3.

Fig. 2 Schematic diagram of BC-DSAW system [3]

Fig. 3 Schematic diagram of BC-DSAW system

So far, much research has been done on the automation of the multi-layer multi-pass welding process. Various types of sensors have been implemented, such as vision sensors, arc sensors, acoustic sensors, ultrasonic sensors and thermal sensors. These sensing techniques have different preferred application areas. Vision sensors are mainly used in path planning, weld seam tracking, weld pool monitoring and so on; information from vision sensors accounts for more than 80% of the total feedback information [6], while thermal sensors and arc spectrum sensors are more frequently used in post-weld defect inspection. For the traditional single-pass welding process, much research has been done and many novel methods have been proposed for weld seam tracking, weld seam recognition and so on, and many scholars have recently been working in this field. In vision sensing, Ding [7] proposed a tracking algorithm based on template matching: the next welding position is determined by comparing adjacent groove-shape images during the welding process. Xu [8] proposed a passive vision-based real-time weld seam tracking system for GTAW, in which a novel image processing algorithm and a self-adaptive controller are used with good performance. Zou [9] introduced a laser vision-based seam tracking system with an innovative image processing method and a CCOT algorithm, which reduces the influence of noise and achieves high tracking accuracy. In terms of arc sensing, Jeong [10] proposed a tracking system for welding robots based on a high-speed rotating arc sensor, which monitors the weld through the welding current values in different regions during real-time welding. However, as shown in Fig. 4, due to the complexity of MLMP, the methods mentioned above may be incompatible with the multi-layer multi-pass welding process and are likely to fail in weld profile extraction. This paper aims to review the technologies and advances in multi-layer multi-pass welding in recent years.
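To make the template-matching idea behind trackers such as [7] concrete, the sketch below is a minimal, hedged illustration (not the authors' implementation): it slides a patch taken around the groove in the previous frame across the current frame and picks the position with the highest zero-mean normalized cross-correlation. The frame sizes, noise level and groove position are synthetic stand-ins invented for this example.

```python
import numpy as np

def match_template(frame, template):
    """Return (row, col) of the best zero-mean normalized cross-correlation."""
    th, tw = template.shape
    t = template - template.mean()
    best, best_pos = -np.inf, (0, 0)
    for r in range(frame.shape[0] - th + 1):
        for c in range(frame.shape[1] - tw + 1):
            w = frame[r:r + th, c:c + tw]
            w = w - w.mean()
            denom = np.linalg.norm(w) * np.linalg.norm(t)
            score = (w * t).sum() / denom if denom > 0 else -1.0
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos

# Synthetic stand-in frames: a bright groove stripe that shifts 3 px to the
# right between frame k-1 and frame k (values are invented for illustration).
rng = np.random.default_rng(0)
prev = rng.normal(0.0, 0.1, (40, 60)); prev[:, 20:23] += 1.0  # groove at col 20
curr = rng.normal(0.0, 0.1, (40, 60)); curr[:, 23:26] += 1.0  # groove at col 23
template = prev[15:25, 15:28]          # patch around the groove in frame k-1
row, col = match_template(curr, template)
print(col - 15)                        # recovered horizontal seam offset
```

In a real tracker, the search would be restricted to a small window around the previous seam position, both for speed and to reject spurious matches caused by arc light and spatter, which is one reason such methods struggle in the cluttered MLMP scene of Fig. 4.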
This paper is arranged following the sequence of the welding process. First, novel welding technologies are discussed in Sect. 2, and welding simulation is covered in Sect. 3. Novel methods for automatically extracting features and parameters of the weld profile are discussed in Sect. 4.2, and work on MLMP welding path planning is presented in Sect. 4. Weld seam tracking during the MLMP welding process based on different sensing techniques is discussed in Sect. 5, and the industrial application of MLMP is shown in Sect. 6. Finally, conclusions and future prospects are proposed in Sect. 7. The block diagram of this paper is shown in Fig. 5.

Fig. 4 Difficulty of laser stripe extraction in MLMP [11]

Fig. 5 Block diagram of the sequence of the literature review

2 MLMP Welding Techniques

The traditional welding method for thick plate with a double V-shaped groove requires back chipping during the welding process. This conventional method is time-consuming and inefficient, with a long welding procedure. It also leads to large welding deformation and severe defects in the weld, so it is not suitable for realistic industrial welding production. To overcome the problems of welding very thick plates, a double-sided double-arc welding (DSAW) technology was proposed by Zhang [1], in which two welding robots communicate with each other. It requires one or two separate power sources, and the motions of the two torches can be identical or different according to conditions. A demonstration of this method is shown in Fig. 6. Compared with conventional welding technology, experiments suggest that the advantages of DSAW are evident in multiple aspects. First, in terms of feasibility, the double arcs do not interrupt each other and achieve good weld root fusion. In addition, the DSAW method is characterized by a reduced cooling rate, which lowers the tendency toward hardening, avoids severe deformation caused by residual stress, and improves the microstructure and mechanical performance [1].

Fig. 6 Demonstration of ASDAW method [1]

To minimize the angular distortion caused by non-uniform transverse shrinkage through the thickness, Zhang [12] proposed a new method for controlling angular distortion: asymmetrical double-sided arc welding (ADSAW) technology. A definite relation between the double-arc distance and the angular distortion was also established through a finite element model. Experiments show that the residual angular distortion is correlated with the double-arc distance, as shown in Fig. 7: at a small arc distance, the angular distortion increases to a peak and then falls; at a larger arc distance, the distortion rises again before gradually decreasing. An equation for the ADSAW angular deformation was also derived, and the paper proposed that angular distortion can be predicted from the transverse plastic strains on the top and bottom surfaces. To realize a low-stress, non-angular-distortion weld, further experiments have been carried out [2]. Non-angular distortion appears when the arc distance is between 0 and 50 mm, so selecting a proper arc distance leads to less angular distortion. In terms of stress, the longitudinal stress is the most severe; it mainly comes from preheating, post-heating treatment and the restraint force left by the former pass. Similar correlations are established between the longitudinal stress, the angular distortion and the arc distance, as shown in Fig. 8. Thus, good pre-planning of the arc distance leads to a low-stress and non-angular-distortion weld.
Based on this ADSAW method, Chen [13] proposed a novel welding technique to acquire a defect-free weld for 50-mm-thick plate. It realized the thick low alloy high tensile strength steel DSAW welding without back chipping to promote the welding

Fig. 7 Relation between arc distance and angular distortion [12]

Multi-layer Multi-pass Welding of Medium Thickness Plate …

9

Fig. 8 Longitudinal stress and angular distortion at different arc distances [2]

efficiency with good welding qualities, as shown in Fig. 9. The acicular ferrite and a small amount of granular bainite make up for the weld seam microstructure. The average hardness can reach 384.9 HV. It is higher than that of the base metal but lower than the HAZ zone. Experiments show that no obvious macro-defects are found. The average tensile strength of the weld joint reaches 862.73 MPa, which is 98.4% of the base metal. Also, the plasticity and toughness are satisfying. In addition, Yang [14] discussed the effect of DSAW on preheating treatment temperature based on low alloy high tensile steel. The mechanical performance and numerical simulation are conducted during experiments. The results of transient temperature simulation are mostly consistent with the real temperature. It is found

Fig. 9 Macro-appearance and weld profile of welded joint [13]


F. Xu et al.

that the critical stress of DSAW without preheating is higher than that with preheating at 100 °C; the stress value is almost doubled. This means the method achieves better cold-crack resistance without preheat treatment. As shown in Fig. 10, the appearance of acicular ferrite explains the performance improvement. Thus, the preheat temperature can be lowered, or preheating removed entirely, in DSAW of LAHS steel. Zhao [15] proposed a high-efficiency, high-quality multi-pass tandem gas metal arc welding (TGMAW) method for Al alloy. Analysis of the deposition rate and welding time shows that the welding efficiency is greatly improved while the heat input is reduced. The mechanical properties of the joint are improved owing to less porosity and smaller grains. This method has promising applications in industrial thick plate welding (Fig. 11).

Fig. 10 Microstructure of the weld zone [14]

Fig. 11 TGMAW equipment [15]


3 MLMP Welding Simulation Analysis

Welding simulation has become irreplaceable in weld studies. Finite element analysis is widely used to simulate residual stress and deformation with high precision, instead of performing actual experiments. In thick plate multi-layer multi-pass welding, obtaining the temperature field and the evolution and distribution of stress and deformation is important for achieving a better welded joint. Zhang [4] proposed a three-dimensional simulation to unveil the actual relations between inter-pass stress and deformation during the MLMP welding process. The pass-by-pass filling process is considered in the simulation, and the numerical model agrees with the theoretical results. The analysis suggests that high stress appears in the backing run and cover pass. Lower and upper passes influence the stress of the upper surface differently, and the latter pass is verified to reduce the tensile stress at the root. Also, the angular distortion varies nonlinearly with the number of welding passes: as the pass number grows, the fluctuation of the angular distortion decreases (Fig. 12). Further research found that the longitudinal stress of DSAW is similar to that of single arc welding (SAW); however, the transverse welding stress is obviously lower than in SAW because of the high thermal imbalance and temperature gradient [16]. Welding with weaving is used in thick plate welding to deal with incomplete sidewall fusion of wide weld beads. To find the exact influence of arc weaving on the temperature and stress fields, Chen [17] carried out experiments with a three-dimensional model in Abaqus, comparing the temperature fields under different weaving amplitudes and frequencies. Both factors are found to affect the temperature distribution; the curves are shown in Fig. 13.
With the increase of weaving amplitude and frequency, the peak and average temperatures of the weld seam decrease; however, the effect of frequency is minor.
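The qualitative effect of the weaving parameters can be illustrated with a minimal torch-trajectory sketch. This is our own illustration, not the thermal model of [17]; the function name and all numerical values are assumptions. A larger amplitude or frequency lengthens the arc path per unit of travel, spreading the heat input and lowering the peak temperature at the seam centre.

```python
import numpy as np

def weaving_path(t, travel_speed, amplitude, frequency):
    """Torch centre-line position for a sinusoidal weaving pattern.
    x: travel direction (mm), y: transverse oscillation (mm)."""
    x = travel_speed * t
    y = amplitude * np.sin(2 * np.pi * frequency * t)
    return x, y

# Over 2 s at 5 mm/s the torch travels 10 mm; with weaving the actual
# arc path is longer, so the energy per unit seam length is diluted.
t = np.linspace(0.0, 2.0, 201)
x, y = weaving_path(t, travel_speed=5.0, amplitude=2.0, frequency=1.5)
path_len = np.sum(np.hypot(np.diff(x), np.diff(y)))
print(round(path_len, 1))  # arc path length (mm), > 10 mm of travel
```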

Fig. 12 Relation between angular deformation and pass number [4]


Fig. 13 Centre temperature with a different weaving amplitude and frequency [17]

Fig. 14 Basic principle of ADSAW process and welding system [5]

Chen [5] carried out a temperature field analysis of ADSAW (shown in Fig. 14) to explore the exact influence of the arc distance on temperature and weld shape, based on a finite element model in Abaqus. The experimental results agree with the calculation. When the arc distance is within 20 mm, the interaction between the two arcs is strong. When the distance exceeds 80 mm, the approximately single-peak thermal cycle is replaced by a double-peak curve, and the preheating and post-heating effects diminish as the distance grows. The arc distance is also shown to govern the shapes of the fore and rear molten pools in different ranges. These results guide the choice of a proper arc distance in the welding process.

4 MLMP Path Planning

Automatic path planning is the key technology of MLMP intelligent robotic welding because it controls the geometry and properties of the current and following weld beads, which ultimately influence the performance of the whole weld joint. The path planning


is the process of planning the spatial trajectory of the welding torch mounted on the welding robot under the given welding conditions, according to certain rules. In general, path planning consists of three parts: weld bead layout determination, welding sequence planning and robot route planning. The weld bead layout is determined from the CAD model of the given welding condition. Welding sequence planning determines the exact order in which to fill the whole groove. Three types of welding pass are included: root weld, filler weld and cap weld. Normally, the first one or two layers consist of only one pass and are approximated as trapezoids; the following passes are simplified as parallelograms and trapezoids, respectively. Welding sequence planning can be roughly divided into two modes: side-to-side and side-to-centre. Robot route planning, in turn, determines the number of welding layers and beads, the welding route for each bead and any extra requirements on the welding torch.
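The layer decomposition described above can be sketched as follows. This is our own simplified illustration of a single-V groove sliced into trapezoidal layers; the function name, nominal layer height and single-bead area are assumptions, not values from any cited work.

```python
import math

def plan_bead_layout(thickness, groove_angle_deg, root_gap,
                     layer_height=3.0, bead_area=28.0):
    """Hypothetical bead-layout sketch for a single-V groove.

    Each layer is a horizontal slice of the groove cross-section; the
    number of passes per layer is the trapezoidal slice area divided
    by a nominal single-bead area (mm^2)."""
    half_tan = math.tan(math.radians(groove_angle_deg / 2.0))
    layout = []
    z = 0.0
    while z < thickness:
        h = min(layer_height, thickness - z)
        w_bot = root_gap + 2.0 * z * half_tan        # slice bottom width
        w_top = root_gap + 2.0 * (z + h) * half_tan  # slice top width
        area = 0.5 * (w_bot + w_top) * h             # trapezoid area
        passes = max(1, round(area / bead_area))
        layout.append((round(area, 1), passes))
        z += h
    return layout

layout = plan_bead_layout(thickness=30.0, groove_angle_deg=60.0, root_gap=4.0)
for i, (area, n) in enumerate(layout, 1):
    print(f"layer {i}: area {area} mm^2, {n} pass(es)")
```

The root layer comes out as a single pass and the pass count grows toward the cap, matching the trapezoid/parallelogram simplification in the text.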

4.1 Weld Seam Feature Extraction

Realizing intelligent robotic welding is a promising development trend. The complication and difficulty of MLMP lie in the path planning needed to achieve high efficiency and good welding quality, and the prerequisite of this planning is weld seam profile feature extraction, which lays the foundation for further planning and tracking. Research in recent years has covered multiple aspects, with the main focus on vision sensing.

4.1.1 Vision Sensing Technique

To obtain better feature-point extraction accuracy than a single-line laser vision sensor, some scholars have explored new laser vision sensor designs. Zhang [18] proposed a grid-structured laser vision sensor for acquiring 3D information from the weld seam, as shown in Fig. 15. The grid-structured laser light is projected onto the weld surface to detect the current weld seam morphology and locate the path position for the next pass. A series of image processing algorithms is used, including average smoothing, thresholding and thinning. Based on the processed images, an algorithm locates the convex point on each horizontal line as the target point, so the current weld bead can be detected, laying the foundation for planning the next layer. To avoid the camera-overlap and collision problems of two cameras in spherical tank multi-pass welding, Wang [6] proposed a "quasi-double cameras" stereovision (QDCS) sensor based on one moving camera. The camera is fixed to the torch and captures left and right images as it swings with the torch during welding; the welding groove profile characteristics can then be approximately calculated with a dedicated algorithm. The calculation principle and detection results


Fig. 15 Diagram of the vision sensing system [18]

Fig. 16 Diagram of calculation principle of QDCS sensor [6]

are shown in Fig. 16. The calculated weld profile features can be used for multi-pass path planning. Experimental results show that the error is within 0.5 mm, so this method has potential for multi-pass welding control. Zeng [11] introduced a weld seam position recognition method for multi-layer multi-pass welding based on information fusion. The method can be used not only for path planning before welding but also for controlling the deviation of the welding torch during real-time welding. To recognize the micro-differences between weld beads, two additional directional light sources along the weld seam are used together with a single structured light source. As demonstrated in Fig. 17, adjacent images from the directional sources are fused to eliminate noise and other interference; finally, the images from the directional and structured sources are fused to obtain the accurate position of the weld seam. To extract the weld profile of thick plate T-joints, He [19] proposed a vision sensor-based system that simultaneously captures the weld pool image and the laser stripes in the same frame, together with a novel seven-step algorithm for extracting the laser stripes against the weld pool background. The whole processing


Fig. 17 Diagram of fusing algorithm [11]

is shown in Fig. 18. A binary image superposition method is created to resolve laser stripe loss during extraction caused by complicated laser conditions. Experiments show that the method performs well in feature extraction of T-joints.

Fig. 18 Illustration of image processing steps [19]

4.1.2 Other Sensing Technique

Kim [20] developed a touch sensor-based automatic multi-layer multi-pass welding system to overcome the low efficiency and inconvenience of manual and semi-automatic welding in containership manufacturing. The system consists of a carriage and a controller; the touch sensing algorithm is shown in Fig. 19. Through touch sensing, the geometric parameters of the joint (groove angle, thickness and assembly gap) are detected. The welding conditions are then established from a welding database, the path-and-layer information is retrieved, and a DSP board guides the carriage's movement according to the welding sequence. Infrared sensing also has applications in weld seam tracking. Given the limitations imposed by high temperature and bright arc light, an infrared sensor offers a way of detecting interior information during welding without contact; it is typically applied in narrow groove multi-pass welding or swing arc welding. Zhu [21] proposed an infrared sensing-based swing arc detection method for narrow groove thick plate welding, with a novel algorithm to calculate the deviation of the welding torch from the centre. To determine the centre, a local pattern recognition (LPR) method detects the exact locations of the sidewall and welding torch in the infrared images. The algorithm reduces the influence of spatter and other noise, improving detection accuracy: it uses a sliding window with small sliding steps to locate the sidewall accurately, and the torch location is processed in the same way. Experiments show good performance, with a detection error within ±0.086 mm, better than the traditional mean value method. The welding result is shown in Fig. 20.

Fig. 19 Touch sensing algorithm [20]
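The touch-sensing geometry detection can be sketched as a line fit over probed points. This is a hypothetical illustration, not Kim's algorithm; the function names, probing coordinates and groove dimensions are all assumptions.

```python
import math

def groove_geometry(left_pts, right_pts):
    """Fit each groove face from probed (y, z) touch points and derive
    the included groove angle (deg) and root gap (face separation at z = 0)."""
    def fit_line(pts):  # least-squares fit y = a*z + b
        n = len(pts)
        sz = sum(z for _, z in pts); sy = sum(y for y, _ in pts)
        szz = sum(z * z for _, z in pts)
        szy = sum(z * y for y, z in pts)
        a = (n * szy - sz * sy) / (n * szz - sz * sz)
        b = (sy - a * sz) / n
        return a, b
    aL, bL = fit_line(left_pts)
    aR, bR = fit_line(right_pts)
    angle = math.degrees(math.atan(abs(aL)) + math.atan(abs(aR)))
    gap = bR - bL  # face separation extrapolated to the root
    return angle, gap

# Synthetic touch points on a 60-degree V-groove with a 4 mm root gap
left = [(-2.0 - z * math.tan(math.radians(30)), z) for z in (5, 10, 15)]
right = [(2.0 + z * math.tan(math.radians(30)), z) for z in (5, 10, 15)]
angle, gap = groove_geometry(left, right)
print(round(angle, 1), round(gap, 1))  # 60.0 4.0
```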


Fig. 20 Diagram of weld bead formation [21]

4.2 Normal Weld Path Planning

Yang [22] proposed a vision sensor-based multi-pass path planning system for thick plate, as in Fig. 21. It addresses the fact that minor errors in welding assembly can significantly affect the welding quality. The welding module consists of the DSAW system with a self-designed vision sensor, for which an image processing algorithm is proposed. The geometric characteristics can then be extracted, including the assembly gap, angle, thickness and width, and path planning can be carried out from these together with the welding parameters. In welding sequence planning, the side-to-centre strategy is chosen for high welding efficiency. According to the robot route strategy for filler passes, a 0.5 s dwell is applied when moving the torch to the sidewall, and a torch angle adjustment of 10–15° is required when the torch approaches the left or right sidewall. Based on this method, Yang [23] conducted experiments with DSAW on low alloy high tensile steel thick plate to verify the planning results. The number of layers,

Fig. 21 Planning of welding layer and sequence [22]


passes and weld torch positions are calculated from the welding parameters and weld groove features. The experimental results show that the planning is effective and the weld quality is satisfactory: complete back fusion and a good front appearance are achieved, as shown in Fig. 22. Chang [24] proposed a novel welding path planning algorithm based on feature point detection. Driven by the needs of double-hulled ship wall welding and its hostile welding conditions, automatic robotic multi-pass welding requires a better algorithm for planning the welding path according to the actual welding environment. As shown in Fig. 23, the algorithm consists of four steps: scanning the depth data of a cross-section, de-noising with a Gaussian filter, detecting characteristic points with a differential characteristic-point detection algorithm and planning the welding path from the reference points. The effectiveness of this algorithm has been validated with welding experiments. Zhang [25] introduced a self-defined robot route planning model for thick plate multi-pass welding, which realizes offline path layout programming for

Fig. 22 Appearance of weld bead and welded joint [23]

Fig. 23 Flow diagram of the path planning algorithm [24]


Fig. 24 Diagram of path planning model procedure [25]

MLMP. To achieve automatic path planning, a mathematical model is established. Users choose appropriate welding parameters for the specific welding condition, such as the layer number, pass number and other electrical parameters, as inputs to the automatic path planning model; the torch position, torch attitude and oscillation displacement are then determined automatically as outputs. According to the experimental results, the sidewall fusion is satisfactory and the performance fully meets the requirements of the practical welding process (Fig. 24).
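Chang's four-step idea above (scan, Gaussian de-noising, differential characteristic-point detection, planning) can be sketched roughly as follows. This is our own simplified illustration with synthetic data, not the algorithm of [24]; the function name, filter width and threshold are assumptions.

```python
import numpy as np

def detect_feature_points(depth, sigma=2.0, thresh=0.05):
    """Smooth a scanned cross-section with a Gaussian filter, then mark
    characteristic points where the slope changes abruptly (local peaks
    of the absolute second difference)."""
    r = int(3 * sigma)
    k = np.exp(-np.arange(-r, r + 1) ** 2 / (2 * sigma ** 2))
    k /= k.sum()                      # discrete Gaussian kernel
    smooth = np.convolve(depth, k, mode="same")
    d2 = np.abs(np.diff(smooth, 2))   # value at i is centred at i + 1
    return [i + 1 for i in range(len(d2))
            if d2[i] > thresh and d2[i] == max(d2[max(0, i - r):i + r + 1])]

# Synthetic cross-section: flat plate with a V-groove between 80 and 120
y = np.arange(200, dtype=float)
depth = np.zeros_like(y)
depth[80:100] = -(y[80:100] - 80) * 0.5            # left groove face
depth[100:120] = -10.0 + (y[100:120] - 100) * 0.5  # right groove face
corners = detect_feature_points(depth)
print(corners)  # indices of the groove edges and root
```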

4.3 Special Weld Path Planning

During the MLMP process on thick plate, the weld width changes because of limited machining accuracy and heat accumulation. To overcome this problem, Zhang [26] proposed multi-layer multi-pass welding path planning for changing weld seam width. A laser stripe recognition algorithm with a rectangular target template extracts the weld seam edge within an accuracy of 0.8 mm. In planning, the welding speed, amplitude and pose are adjusted for the changing gaps, achieving constant weld bead height and width. Experiments verified its effectiveness, with the fluctuation of bead height limited to 1 mm; the welded bead is shown in Fig. 25. For fillet joint multi-pass welding, Wu [27] proposed an automatic bead layout method to replace the teach-and-playback mode in welding of


Fig. 25 Welded seam with changing gaps [26]

thick materials. Manual welding experiments verified that a single bead size between 25 and 30 mm² is an appropriate range. Based on the automated welding path planning model, the bead shape and welding parameters are then selected automatically in three steps. The first step calculates the weld bead number and height of each layer; the flow chart is shown in Fig. 26. The second step locates the weld toe and guides the welding torch. The third step derives the travel speed for the given welding equipment. For multi-pass welding of brace-to-chord joints of offshore oil rigs, normal methods fail because of obstacles from welding fixtures and the complexity of the weldment. Ahmed [28] proposed a novel collision-free path planning method for multi-pass welding, which can be divided into two steps. The first step is to

Fig. 26 Flow chart of the process of bead number determination [27]
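The third step above, deriving the travel speed for a target bead area, can be sketched with a simple mass balance. This is our own illustration under stated assumptions (spatter neglected), not Wu's procedure; the wire parameters are made up.

```python
import math

def travel_speed(wire_feed_m_min, wire_diameter_mm, bead_area_mm2):
    """Travel speed (mm/s) such that the deposited metal per unit seam
    length equals the target bead cross-section area."""
    wire_area = math.pi * (wire_diameter_mm / 2.0) ** 2  # mm^2
    feed_mm_s = wire_feed_m_min * 1000.0 / 60.0          # mm/s
    return feed_mm_s * wire_area / bead_area_mm2

# 1.2 mm wire fed at 8 m/min, targeting a 28 mm^2 bead
v = travel_speed(8.0, 1.2, 28.0)
print(round(v, 2))  # 5.39 mm/s
```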


Fig. 27 Flow path of the path planning process [28]

create the weld bead path layout from the groove geometry in the CAD model. The second step uses an A* search to generate collision-free intermediate trajectories from a mathematical model of each joint; collision-free trajectories are exported to the robot master for further simulation evaluation. The flow path is shown in Fig. 27. Fang [29] proposed a robot path planning method for complex joint welding, overcoming the difficulty caused by the non-uniform and irregular geometry of weld grooves. First, based on a welding groove segmentation scheme, several robot welding paths are generated with good reachability and consistent welding routes. Second, an optimization based on minimum joint movement finds the optimal welding path for each pass with a novel path searching algorithm; the optimization process is shown in Fig. 28. Simulation experiments demonstrate the path generation and optimization process.
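The A*-based trajectory search can be illustrated with a minimal grid example. This is a generic A* sketch under our own assumptions (4-connected grid, Manhattan heuristic), not the implementation of [28].

```python
import heapq

def a_star(grid, start, goal):
    """Minimal A* on a 4-connected grid; 1 = obstacle (e.g. a fixture)."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start, [start])]
    seen = set()
    while open_set:
        _, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 \
                    and (nr, nc) not in seen:
                heapq.heappush(open_set,
                               (g + 1 + h((nr, nc)), g + 1, (nr, nc),
                                path + [(nr, nc)]))
    return None  # no collision-free trajectory exists

# A fixture blocks the straight line between the two torch waypoints
grid = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
path = a_star(grid, (0, 0), (3, 3))
print(path)  # a shortest detour around the obstacle
```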

4.4 Database-Based Planning

Yan [30] introduced a knowledge database-based pass planning method for welding large, thick joints with irregular groove surfaces. Bead geometry control is important in multi-pass welding and depends strongly on the welding parameters. A welding knowledge database is therefore used to establish the relations between the welding parameters and the weld bead geometrical features, with better compatibility than ANN and regression methods alone and better accuracy than equation fitting. A path planning algorithm then finds


Fig. 28 Flow path of the robot path optimization algorithm [29]

Fig. 29 Inputs and outputs of the model of welding process [29]

the optimal plan by maximizing the bead pass area to minimize the number of passes. Experiments prove the method effective in use (Fig. 29).

5 Weld Seam Tracking

Weld seam tracking is an important part of the automatic MLMP welding process. Based on the sensing results, the controller adjusts the welding parameters and welding


torch position to eliminate deviation from the weld seam. An error in a single pass may influence the weld bead shape, and error accumulation then causes serious problems in the following passes and layers. Weld seam tracking is therefore an irreplaceable part of the MLMP welding process. Multiple sensing techniques are used in this field, such as vision sensing, arc sensing and thermal sensing, each with its preferred application area.

5.1 Vision Sensing Technique

Gu [20] proposed an automatic weld seam tracking system for an arc welding robot. It solves the lack of accurate weld seam recognition in teach-and-playback welding robots. The seam tracking system mainly consists of a vision sensor, a control cabinet and the welding robot. As shown in Fig. 30, an advanced algorithm reduces the tracking error through filtering, thresholding, de-noising, thinning and fitting. A fuzzy-P controller corrects the torch position, with the proportional controller and the fuzzy controller used in different situations. It proves effective in all three welding situations, root pass, filler pass and cap pass, with the tracking error limited to within 0.3 mm. Recently, the idea of human in the loop has received emphasis: combining advanced sensing and control with human adjustment of minor errors gives the system the potential for better performance. Experiments show that such a system can fully satisfy industrial requirements for real-time response and accuracy.
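A fuzzy-P style correction can be sketched as follows. The thresholds, gain and rule bands are our own illustrative assumptions, not the controller of the cited work: proportional action handles small deviations smoothly, while coarse fuzzy bands cap the correction for large ones.

```python
def fuzzy_p_correction(error_mm, kp=0.6):
    """Return a lateral torch correction (mm) for a seam deviation."""
    if abs(error_mm) <= 0.5:         # small error: plain P control
        return kp * error_mm
    # large error: coarse fuzzy rules (membership simplified to bands)
    if abs(error_mm) <= 1.5:
        step = 0.5                   # "medium" correction
    else:
        step = 1.0                   # "large" correction
    return step if error_mm > 0 else -step

for e in (0.2, -0.4, 1.0, -2.3):
    print(e, "->", fuzzy_p_correction(e))
```

Bounding the step for large errors avoids overshoot when the measured deviation is corrupted by spatter or arc light.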

Fig. 30 Image processing procedure [20]


Xue [31] introduced a multi-pass robotic weld seam tracking system based on passive vision sensing and human interaction. It addresses problems caused by spatter and arc light, such as loss of the centreline and misalignment. Although active vision sensing performs well for weld seam recognition, human interaction is combined with passive vision sensing because structural deformation introduces errors in locating the welding torch (Fig. 31). In pre-experiments, relations between the deviation of the centreline and groove width and the horizontal and vertical deviations were established. During welding, the operator adjusts the torch location following certain rules, using the positioning ruler on the monitoring interface, as shown in Fig. 32.

Fig. 31 Experimental set-up of the robotic welding tracking system [30]

Fig. 32 Monitoring surface of the system [31]


Fig. 33 Diagram of the robotic welding system [31]

Experiments verify its effectiveness and accuracy: the error is within 0.5 mm and the total response lag is no more than 1.5 s, which satisfies real-time monitoring requirements. Csongor [32] proposed a multi-pass TIG welding system for Francis hydropower turbines to overcome the poor working conditions and low efficiency of manual welding. A human-in-the-loop model helps optimize the welding robot's movement and welding parameters based on the operator's prior knowledge, leading to a more stable process and better welding quality. Three phases are included: CAD-based offline path planning; preparation of an updated plan based on previous operations and the current state; and operator amendments to the welding torch trajectory and sometimes to welding parameters such as the travel speed, wire feed rate and other electrical parameters. According to the experimental results, the performance of the system meets the requirements (Fig. 33). He [33] proposed a WSPE method to achieve better weld seam tracking under strong arc light and spatter, based on the visual attention principle. In the pre-processing step, binary images are acquired; a nearest-neighbour clustering algorithm then clusters the points with the highest grey values. A strategy is proposed to distinguish the weld seam profile points from others, and through this strategy and the visual continuity principle, the cluster of points belonging to the WSP is selected. Experiments verified its robustness in the dynamic welding process of thick plates. He [34] further proposed a top-down visual attention model to recognize the exact cluster of weld seam profile points step by step; an EWMA control chart is used for profile fault detection, and a strategy is proposed to reduce extraction loss.
The effectiveness of this method has been proven by experiments (Fig. 34).
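The EWMA control chart used for fault detection can be sketched generically (standard time-varying EWMA limits). The tracked statistic, the data and all parameter values are our own assumptions, not those of [34]: a frame is flagged once the smoothed statistic drifts outside the control band.

```python
def ewma_chart(samples, target, sigma, lam=0.2, L=3.0):
    """Yield (ewma, in_control) for each sample of a monitored statistic."""
    z = target
    for i, x in enumerate(samples, 1):
        z = lam * x + (1 - lam) * z
        # time-varying control limit half-width of the EWMA statistic
        width = L * sigma * (lam / (2 - lam) * (1 - (1 - lam) ** (2 * i))) ** 0.5
        yield z, abs(z - target) <= width

# Hypothetical seam-profile y-position per frame: stable, then a sudden
# drift that would indicate a faulty profile extraction
stream = [10.1, 9.9, 10.0, 10.2, 9.8, 12.5, 12.8, 13.0]
flags = [ok for _, ok in ewma_chart(stream, target=10.0, sigma=0.2)]
print(flags)  # first five frames in control, last three flagged
```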

5.2 Arc Sensing Technique

Arc sensing is a significant method in weld seam tracking. Compared with vision sensing, arc sensing has lower sensitivity and accuracy, but it is free from optical noise and economical. The arc sensor directly reflects information from the


Fig. 34 Result of image processing of reference method and WSPE method [32]

weld arc, mainly through the welding arc voltage and current. Arc sensing is usually used for error correction in thick plate weaving welding. Daehyun [35] proposed an arc sensor-based automatic welding system with weaving width control and a seam tracking algorithm for narrow groove thick plate welding. The idea of an EMZ is introduced to prevent sidewall fusion problems. The seam tracking algorithm determines the corresponding corrective movements by calculating the difference between the mean arc voltages on the two sides of the weave. The weaving width control algorithm alters the weaving width according to changes in the groove width, reducing groove fusion defects, as the comparison in Fig. 35 shows.
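The mean-voltage comparison can be sketched as follows. This is our own simplification, not the algorithm of [35]; the sign convention, gain and voltage samples are illustrative assumptions.

```python
def seam_correction(voltages_left, voltages_right, gain=0.5):
    """Lateral torch correction (mm) from the mean arc voltages sampled
    at the two weave extremes; a torch offset makes the two means differ."""
    mean_l = sum(voltages_left) / len(voltages_left)
    mean_r = sum(voltages_right) / len(voltages_right)
    # sign convention here is an assumption: a higher right-side mean is
    # taken to mean the torch should shift toward the left sidewall
    return -gain * (mean_r - mean_l)

# Voltages sampled near each sidewall over a few weave cycles
left = [24.1, 24.0, 24.2]
right = [25.0, 25.1, 24.9]
print(round(seam_correction(left, right), 2))  # -0.45 mm with these samples
```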

Fig. 35 Comparison between the automatic welding and manual welding [35]


Fig. 36 Pile leg of the offshore drilling platform

6 Industrial Application

The welding of medium thickness plates covers many areas of the economy, such as bridge construction, petrochemicals, ships, electric power and pressure vessels. With the depletion of terrestrial energy resources, the development of the marine economy has been emphasized, including rational development and utilization of marine resources, active development of marine oil and gas industries, and the manufacture of large marine engineering equipment. The welding techniques, simulation, path planning and seam tracking methods discussed above have been verified as effective in plate welding for offshore drilling platforms, such as the pile leg shown in Fig. 36. The ADSAW method was proposed to deal with thick plate multi-layer multi-pass welding: it realizes double-sided welding without back gouging, reduces the demand for human labour, and improves the efficiency while guaranteeing the weld quality of thick plate welding (Fig. 37).

7 Conclusions and Future Prospect

This paper reviews the main techniques in the thick plate multi-pass welding process, including welding techniques, weld simulation analysis, path planning and weld seam tracking. Path planning determines the practical welding scheme before welding; weld seam tracking adjusts the weld torch position during real-time welding.

1. In terms of welding techniques, to solve the low efficiency of MLMP, robotic DSAW is introduced and studied, with two welding torches placed on the front and back of the plate at a controlled distance


Fig. 37 Demonstration of ADSAW method in industrial use

between the two arcs; good weld root fusion can be realized. To achieve better performance, ADSAW was proposed. Process tests verify that angular distortion and stress are reduced, so deformation is minimized; the method also achieves good plasticity and toughness and better cold-crack resistance without preheat treatment.

2. MLMP suffers from serious deformation accumulation. Finite element analysis is therefore widely used to simulate the stress field, temperature field and evolution of deformation. Stress field simulation reveals the relation between inter-pass stress and deformation, while temperature field analysis addresses the influence of MLMP weaving and arc distance on the actual thermal cycle.

3. The path planning process consists of three parts: weld bead layout determination, welding sequence planning and robot route planning. With the geometrical characteristics of the weld groove obtained from sensors, the weld bead layout can be determined for welding sequence planning; the robot route is then planned with certain rules in the path planning model. The main sensing techniques are vision, arc and touch sensing. For the planning algorithms, different strategies and models are proposed for normal and special welds respectively, aimed at specific application conditions, and databases are used to improve planning performance.

4. Weld seam tracking is the real-time control of the welding torch position and error correction through the controller during multi-pass welding. The sensing techniques used mainly comprise vision, arc and infrared sensing.
Vision sensing remains the most significant, owing to the demand for richer information in seam detection and recognition. The tracking process emphasizes real-time response and accuracy. The tracking error can be kept less


than 1 mm, but there is still room for improvement. Recently, the human-in-the-loop concept has helped achieve better tracking accuracy.

7.1 Sensing Technique Application

The former sections review the applied sensing techniques. Some other sensing technologies are also of great use and may be future development directions: ultrasonic, acoustic and thermal sensing are suitable for seam tracking, while arc spectrum sensors serve detection and post-welding inspection. Ultrasonic sensing is usually used to detect the weld seam geometry and realize weld seam location. As a non-contact method, it offers low cost and good sensitivity. Mahajan [36] introduced an ultrasonic sensing-based weld seam tracking method to detect curved weld seams; as a non-contact sensor it has good robustness, free from the noise and other influences of arc welding. Welding is a thermal process with changing, complex temperatures in different regions, so thermal information is an important signal for real-time control. In MLMP, thermal information has the potential to improve weld seam tracking. Yu [37] introduced an infrared point sensing device to acquire infrared information from the weld seam; in multi-pass welding, different temperature distributions arise in different passes and layers with different defects. By searching for approximate relations between the arc spectrum bands and the welding parameters, online real-time detection can be realized with high accuracy and convenience. Mirapeix [38] proposed a real-time welding detection technique based on plasma spectrum optical analysis. In MLMP welding, the main defects are incomplete sidewall fusion and lack of penetration; by observing the spectrum sensing results, seam tracking and defect inspection can be realized in the MLMP welding process.

7.2 Emerging Techniques

Recently, much research has been done on emerging technologies, and their application in intelligent robotic welding has drawn much attention. With the development of artificial intelligence and sensing technology, they have potential uses in many aspects of robotic welding.

7.2.1 Deep Learning

Recently, with the development of artificial intelligence, deep learning has found industrial applications. Automatic prediction and monitoring have been established based on SVM and BP networks; popular deep learning methods now include convolutional neural networks (CNN), deep belief networks (DBN) and so on [39]. Ma [40] proposed a deep learning-based welding system to detect and identify the weld seam position, where traditional methods may fail in the multi-pass multi-layer welding process because of the complicated weld conditions.

7.2.2 Information Fusion

Multi-sensor information fusion and integration combines and arranges the signals from different sensing techniques under certain protocols. It offers redundancy, real-time capability, complementarity and low cost. With the rapid growth of sensing techniques, multi-sensor information fusion has become a research hotspot in intelligent robotic welding [41]. Chen proposed a multi-sensor fusion and integration system for predicting penetration status in pulsed GTAW. Compared with traditional single-pass welding, MLMP is complex in many ways, so multi-sensor fusion and integration is expected to improve the traditional sensing and control process.

7.2.3 Human in the Loop

Recently, some research has focused on systems with a human in the loop, such as human–computer interaction (HCI), rather than on total automation with no human labour. On the one hand, introducing a human into the system for monitoring can remarkably improve accuracy and performance; on the other hand, it reduces the technical difficulty of the intelligent robotic MLMP welding process. With the human-in-the-loop approach, relatively simple human tasks can yield better overall MLMP welding performance.

Acknowledgements This work is partly supported by the National Natural Science Foundation of China (No. 61873164, 61973213) and the Shanghai Natural Science Foundation (No. 18ZR1421500).


References

1. Zhang HJ (2009) New technology of double-side double arc welding and robot automatic welding for large thick plate of high strength steel. Harbin Institute of Technology
2. Zhang H, Zhang G, Cai C et al (2008) Fundamental studies on in-process controlling angular distortion in asymmetrical double-sided double arc welding. J Mater Process Technol 205(1–3):214–223
3. Miao Y, Xu X, Wu B et al (2014) Effects of bypass current on the stability of weld pool during double sided arc welding. J Mater Process Technol 214(8):1590–1596
4. Zhang HJ, Zhang GJ, Zhang XL et al. Three dimension simulation analysis of the interpass stress and deformation during multipass welding. China Weld 72–78
5. Chen YX, Xu YL, Chen HB, Zhang HJ, Chen SB, Han Y (2015) Temperature field of double-sided asymmetrical mag backing welding for thick plates. In: Tarn TJ, Chen SB, Chen XQ (eds) Robotic welding, intelligence and automation. RWIA 2014. Advances in Intelligent Systems and Computing, vol 363. Springer, Cham. https://doi.org/10.1007/978-3-319-18997-0_19
6. Wang J, Chen Q, Sun Z (2004) Multi-pass weld profile detection for spherical tank through "quasi double cameras" stereovision sensor. In: International conference on information acquisition, 2004 Proceedings. Hefei, pp 376–379. https://doi.org/10.1109/ICIA.2004.1373393
7. Ding Y, Huang W, Kovacevic R (2016) An online shape-matching weld seam tracking system. Robot Comput-Integr Manuf 42:103–112
8. Xu Y, Yu H, Zhong J et al (2012) Real-time seam tracking control technology during welding robot GTAW process based on passive vision sensor. J Mater Process Technol 212(8):1654–1662
9. Zou Y, Chen T (2018) Laser vision seam tracking system based on image processing and continuous convolution operator tracker. Opt Lasers Eng 105:141–149
10. Jeong S-W, Lee G-Y, Lee W-K, Kim S-B (2001) Development of high speed rotating arc sensor and seam tracking controller for welding robots. In: ISIE 2001. 2001 IEEE international symposium on industrial electronics proceedings (Cat. No. 01TH8570), vol 2. Pusan, South Korea, pp 845–850. https://doi.org/10.1109/ISIE.2001.931578
11. Jinle Z, Baohua C, Dong D et al (2018) A weld position recognition method based on directional and structured light information fusion in multi-layer/multi-pass welding. Sensors 18(2):129
12. Zhang HJ, Zhang GJ, Wu L (2007) Effects of arc distance on angular distortion by asymmetrical double sided arc welding. Sci Technol Weld Joining 12(6):564–571
13. Chen Y, Yang C, Chen H et al (2015) Microstructure and mechanical properties of HSLA thick plates welded by novel double-sided gas metal arc welding. Int J Adv Manuf Technol 78(1–4):457–464
14. Yang C, Zhang H, Zhong J et al (2014) The effect of DSAW on preheating temperature in welding thick plate of high-strength low-alloy steel. Int J Adv Manuf Technol 71(1–4):421–428
15. Jiang Z, Hua X, Huang L et al (2019) High efficiency and quality of multi-pass tandem gas metal arc welding for thick Al 5083 alloy plates. J Shanghai Jiaotong Univ (Sci) 24:148–157. https://doi.org/10.1007/s12204-018-1977-y
16. Zhang HJ, Zhang GJ, Cai CB et al (2009) Numerical simulation of three-dimension stress field in double-sided double arc multipass welding process. Mater Sci Eng A 499(1–2):309–314
17. Chen Y, He Y, Chen H et al (2014) Effect of weave frequency and amplitude on temperature field in weaving welding process. Int J Adv Manuf Technol 75(5–8):803–813
18. Zhang C, Li H, Jin Z et al (2016) Seam sensing of multi-layer and multi-pass welding based on grid structured laser. Int J Adv Manuf Technol
19. He Y, Zhou H, Wang J et al (2016) Weld seam profile extraction of T-joints based on orientation saliency for path planning and seam tracking. In: Advanced robotics and its social impacts. IEEE
20. Zhu J, Wang J, Su N et al (2017) An infrared visual sensing detection approach for swing arc narrow gap weld deviation. J Mater Process Technol 243:258–268
21. Yang C, Ye Z, Chen Y et al (2014) Multi-pass path planning for thick plate by DSAW based on vision sensor. Sensor Rev 34(4):416–423
22. Yang CD, Huang HY, Zhang HJ et al (2012) Multi-pass route planning for thick plate of low alloy high strength steel by double-sided double arc welding. Adv Mater Res 590:28–34
23. Chang D, Son D, Lee J et al (2012) A new seam-tracking algorithm through characteristic-point detection for a portable welding robot. Robot Comput Integr Manuf 28(1):1–13
24. Zhang H, Lu H, Cai C et al (2011) Robot path planning in multi-pass weaving welding for thick plates
25. Zhang X (2015) Research on robotic welding system and multipass planning based on laser vision sensor. Shanghai Jiao Tong University
26. Wu Y, Go JZM, Ahmed SM, Lu W, Chew C, Pang CK (2015) Automated bead layout methodology for robotic multi-pass welding. In: 2015 IEEE 20th conference on emerging technologies and factory automation (ETFA). Luxembourg, pp 1–4. https://doi.org/10.1109/ETFA.2015.7301590
27. Ahmed SM, Yuan J, Wu Y, Chew CM, Pang CK (2015) Collision-free path planning for multi-pass robotic welding. In: 2015 IEEE 20th conference on emerging technologies and factory automation (ETFA). https://doi.org/10.1109/etfa.2015.7301594
28. Fang H, Ong S, Nee A (2016) Robot path planning optimization for welding complex joints. Int J Adv Manuf Technol
29. Yan S, Fang H, Ong S et al (2017) Optimal pass planning for robotic welding of large-dimension joints with nonuniform grooves. Proc Inst Mech Eng Part B J Eng Manuf 2017:095440541771887
30. Xue K, Wang Z, Shen J, Hu S, Zhen Y, Liu J, Yang H (2020) Robotic seam tracking system based on vision sensing and human-machine interaction for multi-pass MAG welding. J Manuf Process
31. Horváth CM, Korondi P, Thomessen T (2017) Robotized multi-pass tungsten inner gas welding of Francis hydro power turbines. In: IEEE international symposium on industrial electronics. IEEE
32. He Y, Yu Z, Li J et al (2020) Discerning weld seam profiles from strong arc background for the robotic automated welding process via visual attention features. Chin J Mech Eng 33:21. https://doi.org/10.1186/s10033-020-00438-2
33. He Y, Yu Z, Li J et al (2019) Weld seam profile extraction using top-down visual attention and fault detection and diagnosis via EWMA for the stable robotic welding process. Int J Adv Manuf Technol 104:3883–3897. https://doi.org/10.1007/s00170-019-04119-w
34. Baek D, Moon HS, Park S (2017) Development of an automatic orbital welding system with robust weaving width control and a seam-tracking function for narrow grooves. Int J Adv Manuf Technol 93:767–777. https://doi.org/10.1007/s00170-017-0562-0
35. Gunnarsson KT, Prinz FB (1984) Ultrasonic sensors in robotic seam tracking. In: American control conference. IEEE
36. Yu P, Xu G, Gu X et al (2017) A low-cost infrared sensing system for monitoring the MIG welding process. Int J Adv Manuf Technol 92:4031–4038. https://doi.org/10.1007/s00170-017-0515-7
37. Mirapeix J, Cobo A, Conde OM (2006) Real-time arc welding defect detection technique by means of plasma spectrum optical analysis. NDT & E Int 39(5):356–360
38. Li PJ, Zhang YM (2001) Precision sensing of arc length in GTAW based on arc light spectrum. J Manuf Sci Eng 123(1):62
39. Ma X, Pan S, Li Y, Feng C, Wang A (2019) Intelligent welding robot system based on deep learning. In: 2019 Chinese automation congress (CAC). Hangzhou, China, pp 2944–2949. https://doi.org/10.1109/CAC48633.2019.8997310
40. Luo RC, Chou YC, Chen O (2007) Multisensor fusion and integration: algorithms, applications, and future research directions. In: International conference on mechatronics and automation. IEEE
41. Chen B, Wang J, Chen S (2010) Prediction of pulsed GTAW penetration status based on BP neural network and D-S evidence theory information fusion. Int J Adv Manuf Technol 48(1–4):83–94

A Review: Application Research of Intelligent 3D Detection Technology Based on Linear-Structured Light Shaojie Chen, Wei Tao, Hui Zhao, and Na Lv

Abstract With the development of sensor technology and the increasing demands of industrial upgrading, highly intelligent, high-precision processing and manufacturing is the development trend of the future automation industry. As the eye of industry, the sensor is the core part of an intelligent industrial system. The line-structured light sensor is widely used in automated manufacturing because of its non-contact, high-robustness, and high-precision characteristics. This paper analyzes the latest application and research of line-structured light sensing technology in the industrial fields of workpiece detection and positioning, geometric profile measurement, and three-dimensional reconstruction. It is found that line-structured light, by virtue of being non-contact, is particularly convenient for high-precision measurement on automated production lines. Through feature extraction and reconstruction of the surface profile, the three-dimensional shape and coordinates of a workpiece can be estimated accurately, laying a theoretical and practical foundation for further automated manufacturing.

Keywords Line-structured light · 3D reconstruction · Target location · Profile measurement · Visual sensing · Industrial application

1 Introduction

Industrial production has gone through many stages, including manual production, mechanized production, and automated production. Modern industry is developing toward intelligence and high precision, that is, toward adaptivity to the production environment. The sensor system is the core part of an intelligent production system: it perceives external environmental information and provides it to the main control system, and it is the main basis on which the master control system makes production decisions. As the front end of the intelligent production line, the sensor system has become the most important and fundamental part of China's industrial upgrading, and also the part of greatest concern to researchers.

Sensors can be divided into ultrasonic sensors, magnetic sensors, visual sensors, and so on. The application scenarios of non-visual sensors are often limited because they perceive only a single kind of information. A vision sensor can obtain rich environmental information and can therefore be used in a wider range of scenarios. Visual sensing is also an effective non-contact measurement method, which provides a feasible approach to high-precision detection under the special working conditions of industrial environments. According to the type of light source, visual sensors can be divided into active and passive. A passive vision sensor uses natural light as its light source and is therefore sensitive to changes in ambient light; the binocular vision sensor is a typical example. An active vision sensor has its own auxiliary light source, which gives it high robustness. Structured light is a common light source in active vision, with variants including point-structured light, line-structured light, and grid-structured light. The line-structured light sensor is a typical active visual sensor: it has a simple structure and a convenient calibration process, and it is competent for target recognition and positioning, profile measurement, 3D reconstruction, and other work.

This paper analyzes the latest progress of research on line-structured light sensing technology in different industrial application scenarios. It is found that in target positioning, contour measurement, and 3D reconstruction, line-structured light is an effective method for obtaining the spatial coordinates and geometric dimensions of objects. The analysis therefore shows that line-structured light technology can reliably provide, in real time, sensing data on object spatial position and 3D profile for intelligent, automated industrial detection and manufacturing.

S. Chen · W. Tao · H. Zhao · N. Lv (B) Department of Instrument Science and Engineering, Shanghai Jiao Tong University, Shanghai 200240, China, e-mail: [email protected]

© Springer Nature Singapore Pte Ltd. 2021. S. Chen et al. (eds.), Transactions on Intelligent Welding Manufacturing, https://doi.org/10.1007/978-981-33-6502-5_2

2 Line-Structured Light Vision Sensing Technology

Line-structured light sensing technology is based on the principle of triangulation and comprises camera calibration, light plane calibration, light stripe center extraction, and three-dimensional coordinate calculation. Camera calibration obtains the internal and external parameters of the camera; light plane calibration obtains the positional relationship between the laser plane and the camera. After these two steps, the correspondence between each pixel on the camera sensor and the spatial point on the light plane can be determined. Since no line-structured light is perfect, the stripe projected on the object surface always has a certain width, so the center of the light stripe must be extracted. Many researchers have worked on optimizing calibration and stripe extraction [1, 2]. This paper focuses on the different application scenarios of line-structured light sensing technology.


Fig. 1 Line-structured light sensing system [3]

Fig. 2 3D reconstruction of measured object [4]

A typical line-structured light vision sensing system is shown in Fig. 1. Its working process is as follows: (1) the laser projects the line-structured light stripe, (2) the light stripe is modulated and deformed by the object surface profile, (3) the deformed 2D light stripe is captured by the camera, (4) the three-dimensional coordinates of the light stripe are calculated by the computer after certain processing. On this basis, combined with the corresponding mobile platform, the line-structured light vision sensor can obtain the complete profile information of the measured object, as shown in Fig. 2.
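Step (4) above amounts to intersecting each pixel's back-projected ray with the calibrated light plane. A minimal numpy sketch under an assumed pinhole camera model (the intrinsics and plane below are illustrative values, not from any cited sensor):

```python
import numpy as np

def pixel_to_point(u, v, K, plane_n, plane_d):
    """Intersect the back-projected ray of pixel (u, v) with the light plane
    n . X = d (both expressed in camera coordinates); return the 3D point."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # ray direction in camera frame
    t = plane_d / (plane_n @ ray)                   # scale putting the point on the plane
    return t * ray

# Assumed intrinsics and a light plane 0.5 m in front of the camera (z = 0.5).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
n, d = np.array([0.0, 0.0, 1.0]), 0.5
print(pixel_to_point(320.0, 240.0, K, n, d))  # principal point maps to [0, 0, 0.5]
```

Applying this to every extracted stripe-center pixel yields the 3D profile of one scan line; sweeping the sensor then builds the full surface, as in Fig. 2.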

3 Applied Research of Industrial Applications

The line-structured light vision sensor can obtain the surface profile and spatial position of a measured object efficiently and accurately, and on different platforms it can complete a variety of tasks. The hand-eye system of an industrial robot combined with a line-structured light vision sensor is a common combination; Fig. 3 shows a welding robot that realizes adaptive tracking of the welding seam [5]. Unmanned vehicles, unmanned submersibles, and so on are also common carrying platforms. Depending on the carrying platform and application scene, the line-structured light vision sensor can be used for object detection and positioning, geometric dimension measurement, and 3D reconstruction. Around these application directions and some specific application scenarios, researchers have done a great deal of exploration.

Fig. 3 A hand-eye system [5]

3.1 Industrial Field Calibration

The calibration of a line-structured light vision sensor is divided into camera calibration and light plane calibration. Camera calibration developed earlier and is therefore more mature; the Zhang Zhengyou method [6] is widely used for the cameras of line-structured light vision sensors. Light plane calibration adds additional constraints and provides the depth information for a monocular vision system, so it directly affects the measurement results. In industrial applications, the calibration process is required to be fast and convenient. Reference [7] proposed a calibration method for the light plane parameters based on line features: using the constrained motion of a robot hand-eye system, two intersecting lines on the light plane are obtained to complete the calibration, as shown in Fig. 4. The method is simple with few steps and is suitable for industrial field calibration, but it still requires a two-dimensional calibration target. In Ref. [8], a cross-point calibration method is proposed that transfers the real coordinate system from a physical calibrator to a displacement calibrator, simplifying the calibration process. In Ref. [9], the light plane parameters are calibrated in two steps by controlling the sensor to perform certain translational motions. This is a self-calibration method that needs no target, but because it requires the sensor to complete several groups of translations in different directions, its scope of application is limited. References [10, 11] used a binocular camera to calibrate the light plane; the additional camera is used only for light plane calibration and does not participate in the 3D measurement. The calibration process is simple and suitable for industrial field calibration.

Fig. 4 A hand-eye system [7]
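Whichever procedure generates the calibration points, the final step of light plane calibration is typically a least-squares plane fit to 3D stripe points. A generic numpy sketch (the synthetic points below stand in for measured calibration data and are not taken from any cited method):

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through an Nx3 point cloud: returns a unit normal n
    and offset d with n . X = d, using the SVD of the centred points."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    n = vt[-1]                       # direction of least variance = plane normal
    return n, float(n @ centroid)

# Hypothetical stripe points lying (noisily) on the plane z = 0.2*x + 0.1.
rng = np.random.default_rng(0)
xy = rng.uniform(-1.0, 1.0, size=(100, 2))
z = 0.2 * xy[:, 0] + 0.1 + rng.normal(0.0, 1e-4, 100)
n, d = fit_plane(np.column_stack([xy, z]))
print(n / n[2], d / n[2])  # normalised form: approx [-0.2, 0, 1] and 0.1
```

The SVD formulation minimizes orthogonal distances rather than vertical residuals, which is the appropriate error model for points measured in 3D.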

3.2 Object Detection and Location

The detection and location of spatial targets is a hot issue in intelligent industrial production and is the prerequisite for functions such as target grasping and obstacle avoidance. Spatial object detection and location based on the line-structured light vision sensor is one research direction in this field. In Ref. [12], based on the line-structured light vision sensor, an automatic dynamic-programming inflection-point extraction algorithm was used to locate the initial welding position of a fillet weld. In Ref. [13], to address the small geometric differences between adjacent welds in multi-layer weld beads, directional light source images and line-structured light images were fused to realize fast and accurate weld positioning. Based on the geometric information obtained by the line-structured light vision sensor, Ref. [14] realized adaptive tracking of the welding torch along complex box-beam component welds, as shown in Fig. 5, effectively improving welding efficiency and quality.


Fig. 5 Results of automated skip welding [14]

In Refs. [15–17], the line-structured light vision sensor is mounted on unmanned traffic equipment to detect obstacles on the path and realize obstacle avoidance. A set of automatic picking systems for mature tomatoes was designed in Ref. [18]. To solve the recognition and location of mature tomatoes, the maximum between-class variance of the H and S gray values is used as the threshold for color image segmentation; the mature fruits are distinguished from the background and then positioned using the line-structured light vision sensor. The results of recognition and location are shown in Fig. 6. Combined with the color image, the system can effectively identify and locate mature tomatoes.
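The "maximum between-class variance" criterion is Otsu's method. A compact numpy version for a single 8-bit channel (the toy bimodal image is invented for illustration, not data from Ref. [18]):

```python
import numpy as np

def otsu_threshold(gray):
    """Return the threshold maximising the between-class variance of an
    8-bit channel, splitting foreground (e.g. ripe fruit) from background."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                   # class-0 probability per threshold
    mu = np.cumsum(p * np.arange(256))     # cumulative mean
    mu_t = mu[-1]                          # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b = np.nan_to_num(sigma_b)
    return int(np.argmax(sigma_b))

# Toy bimodal channel: dark background at 50, bright foreground at 200.
img = np.concatenate([np.full(500, 50), np.full(500, 200)]).astype(np.uint8)
t = otsu_threshold(img)
print(t)  # threshold separating the two classes
```

Applying the same rule independently to the H and S channels and intersecting the masks is one plausible way to reproduce the color segmentation step described above.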

Fig. 6 Identification and location of ripe tomato [18]: (a) original image, (b) identification result, (c) location result


The spatial position of objects in the field of view can be accurately obtained using line-structured light vision sensing. Combining the line-structured light stripe information with the image information makes it possible to locate a specific object more quickly and accurately.

3.3 Profile Measurement

Line-structured light vision sensing plays an important role in non-contact profile measurement. Its non-contact nature makes online, real-time dynamic profile measurement possible, and depending on the choice of line laser and camera, the measurement range spans from several meters down to several millimeters. Reference [19] proposed a dynamic method for measuring train wheel diameters based on line-structured light and a speed measurement unit, which can measure the wheel diameter of any train passing the measurement point. In Ref. [20], the line-structured light vision sensor is mounted on a train to monitor wear on the inner side of the track in real time; it achieves a measurement interval of 190 mm at a speed of 350 km/h, meeting the detection requirements at current train running speeds. Reference [21] used a line-structured light vision sensor to inspect the quality of rail components, including geometric dimensions, straightness, torsion, and other information. As shown in Fig. 7, Ref. [22] uses multiple sensors to build a vehicle-mounted, dynamic, high-precision railway tunnel measurement system with a vibration compensation device, which can detect slight deformations of the tunnel and safeguard tunnel safety. The system can scan the tunnel at a speed of 60 km/h at intervals of 250 mm. In Ref. [23], line-structured light vision sensing was applied to the geometric inspection of complex parts, with a maximum measurement error of only 0.1 mm on a 50 mm workpiece. For the small-size weld shown in Fig. 8, a low-power laser, a GigE high-speed camera, and a focusing lens were selected in Ref. [24] to realize high-speed, accurate measurement of weld height and width.

Fig. 7 Results of automated skip welding [22]

Fig. 8 A small welding bead [24]

With different specifications of lasers, cameras, and lenses, line-structured light vision sensing can be applied to measurement tasks ranging from meters to millimeters while meeting real-time and accuracy requirements. It is an ideal non-contact measurement method in industry.
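As an illustration of how bead height and width might be read off a single scanned profile (the base-level rule, tolerance, and synthetic profile below are assumptions for the sketch, not the algorithm of Ref. [24]):

```python
import numpy as np

def bead_height_width(y, z, base_tol=0.02):
    """Estimate weld-bead height and width from one scanned profile.
    y, z: lateral position and height arrays in the same units (e.g. mm).
    The base-plate level is taken as the median height; points more than
    base_tol above it are counted as bead."""
    base = np.median(z)
    height = z.max() - base
    on_bead = z > base + base_tol
    width = y[on_bead].max() - y[on_bead].min() if on_bead.any() else 0.0
    return height, width

# Synthetic profile: flat plate with a 1 mm high Gaussian bead at the centre.
y = np.linspace(-10.0, 10.0, 401)
z = 1.0 * np.exp(-(y / 1.5) ** 2)
h, w = bead_height_width(y, z)
print(round(h, 2), round(w, 1))
```

Taking the median as the base level assumes the plate occupies most of the scan width; a tilted plate would first need a line fit subtracted from the profile.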

3.4 Three-Dimensional Reconstruction

Combined with a mobile platform, the line-structured light vision sensor can fully reconstruct the contour of a measured object and can be used for 3D map construction, damage visualization, and so on. In Refs. [10, 25, 26], a line-structured light vision sensor was mounted on an unmanned underwater vehicle (UUV) to scan the seabed topography; together with the navigation data, a high-resolution 3D map of the seabed was constructed. Reference [26] also equipped the mapping system with an LED lighting module, as shown in Fig. 9, to obtain a color 3D map. Such equipment can play a role in mineral exploration and subsea engineering construction. Reference [27] built a pavement texture reconstruction system based on line-structured light sensing, which ensures the detection accuracy for asphalt pavement while improving detection efficiency and reducing cost. In Refs. [3, 24, 28], 3D reconstruction based on line-structured light was applied to the welding field to construct 3D models of welds; Refs. [24, 28] visualized welding defects and indicated their specific positions on the reconstruction model by color coding, as shown in Fig. 10. 3D reconstruction based on line-structured light can generate high-resolution 3D models and has broad application prospects in 3D map construction, nondestructive testing, and damage visualization.
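The profile-stacking step common to these systems can be sketched as follows, assuming a platform moving at constant speed between frames (the frame data, speed, and frame rate below are invented for illustration):

```python
import numpy as np

def profiles_to_cloud(profiles, speed_mm_s, fps):
    """Stack per-frame (y, z) stripe profiles into an Nx3 point cloud,
    assuming the sensor advances along x at constant speed between frames."""
    dx = speed_mm_s / fps                 # platform advance per frame, in mm
    cloud = []
    for i, prof in enumerate(profiles):   # prof: Mx2 array of (y, z) points
        x = np.full((len(prof), 1), i * dx)
        cloud.append(np.hstack([x, prof]))
    return np.vstack(cloud)

# Two hypothetical frames of a 3-point stripe; scanner at 10 mm/s, 5 fps.
frame = np.array([[-1.0, 0.0], [0.0, 0.5], [1.0, 0.0]])
cloud = profiles_to_cloud([frame, frame], speed_mm_s=10.0, fps=5.0)
print(cloud.shape)  # (6, 3): second frame offset by 2 mm along x
```

Real systems such as the UUV mappers above replace the constant-speed assumption with navigation data, substituting each frame's measured pose for `i * dx`.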


Fig. 9 3D color mapping technique [26]

Fig. 10 Weld defects [24]

4 Conclusion

Based on the technical characteristics of line-structured light vision sensing, this paper has analyzed the latest research on this non-contact profile detection method in intelligent manufacturing and high-precision detection scenarios, and discussed the potential industrial application prospects of 3D reconstruction technology. The research status of line-structured light in object detection and positioning, geometric profile measurement, and three-dimensional reconstruction has been surveyed, along with convenient on-site calibration methods. The results show that line-structured light sensing is an effective detection method that can provide reliable, high-precision sensing data for real-time online detection and high-precision contour measurement, and they provide a theoretical basis and a demonstration of feasibility for automated, intelligent industrial manufacturing and high-precision online detection.

Acknowledgements This work is supported by the National Natural Science Foundation of China (No. 51975367, 61401275), the Shanghai Jiao Tong University New Teacher Support Program (No. 18X100040049), the Guangxi Natural Science Foundation of China (No. GKAD18281007), and the Research Fund of Yantai Information Technology Research Institute of Shanghai Jiao Tong University (No. G19YTJC004).

References

1. Liy Y, Yuan L, Zhang Z (2013) Survey on linear structured light stripe center extraction. Laser Optoelectron Prog 50(10):100002
2. Zhang X, Zhang J (2017) Summary on calibration method of line-structured light sensor. In: 2017 IEEE international conference on robotics and biomimetics (ROBIO). IEEE
3. Huang W, Kovacevic R (2011) A laser-based vision system for weld quality inspection. Sensors 11(1):506–521
4. Qingmeng L et al (2012) 3D scanning system for digital dental based on line structured light sensor. In: 2012 international conference on system science and engineering (ICSSE). IEEE
5. Zou Y, Li J, Chen X (2017) Seam tracking investigation via striped line laser sensor. Ind Robot Int J
6. Zhang Z (2000) A flexible new technique for camera calibration. IEEE Trans Pattern Anal Mach Intell 22(11):1330–1334
7. Qi Y, Jing F, Tan M (2013) Line-feature-based calibration method of structured light plane parameters for robot hand-eye system. Opt Eng 52(3):037202
8. Cui B et al (2020) Cross-point calibration method for the Scheimpflug measurement system. Appl Opt 59(28):8618–8627
9. Chen T-F, Ma Z, Wu X (2012) Calibration of light plane in line structured light sensor based on active vision. Guangxue Jingmi Gongcheng (Opt Precision Eng) 20(2):256–263
10. Inglis G et al (2012) A pipeline for structured light bathymetric mapping. In: 2012 IEEE/RSJ international conference on intelligent robots and systems. IEEE
11. Yu H et al (2020) Three-dimensional shape measurement technique for large-scale objects based on line structured light combined with industrial robot. Optik 202:163656
12. Liu F, Wang Z, Ji Y (2018) Precise initial weld position identification of a fillet weld seam using laser vision technology. Int J Adv Manuf Technol 99(5–8):2059–2068
13. Zeng J et al (2018) A weld position recognition method based on directional and structured light information fusion in multi-layer/multi-pass welding. Sensors 18(1):129
14. Li G et al (2020) Welding seam trajectory recognition for automated skip welding guidance of a spatially intermittent welding seam based on laser vision sensor. Sensors 20(13):3657
15. Wu K, Wang W (2018) Detection method of obstacle for plant protection UAV based on structured light vision. Opto-Electron Eng 45(04):170613
16. Mao XR, Yuan X, Zhao CX (2013) Detection of indoor robot's passable areas based on a single-line structured light. Appl Mech Mater 433
17. Shao H, Zhang Z, Li K (2015) Research on water hazard detection based on line structured light sensor for long-distance all day. In: 2015 IEEE international conference on mechatronics and automation (ICMA). IEEE
18. Qingchun F et al (2014) Design of structured-light vision system for tomato harvesting robot. Int J Agric Biol Eng 7(2):19–26
19. Gao Y, Shao S, Feng Q (2012) A new method for dynamically measuring diameters of train wheels using line structured light visual sensor. In: 2012 Symposium on photonics and optoelectronics. IEEE
20. Liu Z et al (2011) Simple and fast rail wear measurement method based on structured light. Opt Lasers Eng 49(11):1343–1351
21. Zhou P, Xu K, Wang D (2018) Rail profile measurement based on line-structured light vision. IEEE Access 6:16423–16431
22. Zhan D et al (2015) Multi-camera and structured-light vision system (MSVS) for dynamic high-accuracy 3D measurements of railway tunnels. Sensors 15(4):8664–8684
23. Mei J, Lai L-J (2019) Development of a novel line structured light measurement instrument for complex manufactured parts. Rev Sci Instrum 90(11):115106
24. Nguyen H-C, Lee B-R (2014) Laser-vision-based quality inspection system for small-bead laser welding. Int J Precision Eng Manuf 15(3):415–423
25. Roman C, Inglis G, Rutter J (2010) Application of structured light imaging for high resolution mapping of underwater archaeological sites. In: OCEANS'10 IEEE SYDNEY. IEEE
26. Bodenmann A, Thornton B, Ura T (2017) Generation of high-resolution three-dimensional reconstructions of the seafloor in color using a single camera and structured light. J Field Robot 34(5):833–851
27. Liang J, Gu X (2020) Development and application of a non-destructive pavement testing system based on linear structured light three-dimensional measurement. Constr Build Mater 260:119919
28. Rodriguez-Martin M et al (2017) Feasibility study of a structured light system applied to welding inspection based on articulated coordinate measure machine data. IEEE Sens J 17(13):4217–4224

Research Papers

Acoustic Emission-Based Weld Crack In-situ Detection and Location Using WT-TDOA Zhifen Zhang, Rui Qin, Yujiao Yuan, Wenjing Ren, Zhe Yang, and Guangrui Wen

Abstract As essential infrastructure in rockets, aluminum alloy tanks are mostly produced by tungsten argon arc welding. Considering the alternating loads and concentrated stress during tank operation, weld cracks have become one of the most severe hazards that can compromise normal production, so the in-situ detection and accurate location of weld cracks are of great importance. With its high sensitivity and nondestructive character, acoustic emission (AE) testing is well suited to detecting and locating weld cracks in aluminum alloy tanks. A time difference of arrival (TDOA) method based on the wavelet transform (WT) is proposed in this paper to enable the accurate location of weld cracks as soon as they appear. The research was carried out on an AE experimental platform based on a scaled-down aluminum alloy rocket tank. The propagation speed of AE signals on the sidewall of the tank was measured after their attenuation characteristics were analyzed, and the noise was suppressed by wavelet threshold denoising. Exploiting the time-domain invariance of the WT, the component carrying the largest share of energy in the wavelet decomposition was extracted to represent the raw AE signal for source location. Location errors in height and angle were calculated over three repeated experiments. The results show that TDOA based on the wavelet transform (WT-TDOA) significantly outperformed conventional TDOA in location accuracy.

Keywords Aluminum alloy tank · Acoustic emission · Location · Wavelet decomposition · Crack · Denoising

Z. Zhang · R. Qin · Y. Yuan · W. Ren · Z. Yang · G. Wen (B)
Institute of Aero-Engine, School of Mechanical Engineering, Xi'an Jiaotong University, No. 28 Xianning West Road, Xi'an 710049, China
e-mail: [email protected]
© Springer Nature Singapore Pte Ltd. 2021
S. Chen et al. (eds.), Transactions on Intelligent Welding Manufacturing, https://doi.org/10.1007/978-981-33-6502-5_3


1 Introduction

Aluminum alloy tanks are widely used in aircraft, ships, and vehicles due to their superb performance. However, they are prone to a variety of problems, such as metal fatigue and cracking, owing to alternating loads and stress concentration during service [1, 2]. The emergence and extension of cracks in aluminum alloy structures often cause sudden structural damage and can even lead to a major accident that destroys the machine and seriously injures the operator. Therefore, detecting cracks as early as possible and locating them as accurately as possible are of great importance for the maintenance of aluminum alloy structures. AE detection is a highly sensitive nondestructive testing method that can realize online monitoring of various machines, and it is very effective in detecting and locating damage at an early stage [3]. Numerous studies on crack diagnosis based on AE detection have made it the main technology for early detection and location of cracks in aluminum alloy structures. Determining the location of the AE signal source is one of the main purposes of AE detection. However, the AE signals contain many components by the time they reach the sensors [4], and waves of different frequencies have unequal propagation speeds and initial phases, which considerably complicates the location of the AE signal source [5]. Thus, attenuation characteristic analysis and AE signal denoising are both essential for signal source location [6]. Currently, there are two main kinds of AE source location methods: waveform-based and parameter-based.
In the waveform-based location method, the AE signal source is located by calculating the time differences with which AE signals of different modes reach the sensors, since the load excites rich frequencies and modes in the experimental structure [7, 8]. In the parameter-based location method, model or experimental parameters that are already set and known are used to characterize the AE signal, and the location of the AE source is obtained from these characteristic parameters. The time difference of arrival (TDOA) method, regional location, and intelligent location are common types of parameter-based location methods. The TDOA method determines the location of the AE signal source through a geometric calculation based on the propagation speed of the wave and the time differences with which the signal reaches the sensors, and it is widely adopted in practice. The regional location method determines the small region in which the AE signal source lies according to the number of sensors and the relative time differences. It is not prone to mislocation, but only a rough region can be determined. Thus, regional location is often applied to AE signals in small areas or in composite materials, where signals have high frequencies, fade quickly in propagation, and the detection channels are limited [9]. The intelligent location method applies neural networks for accurate location of the AE signal source. With time-domain and frequency-domain parameters as inputs, the network can achieve complex AE signal source recognition through a nonlinear mapping from the characteristic space to the output space [10–12].


However, it takes much time to establish the network model; thus, the intelligent location method is more suitable for AE signal source location in complicated integrated equipment. Among these methods, the TDOA method is easy to implement while still guaranteeing location accuracy. Traditionally, it is necessary to set threshold values for the AE signals to obtain accurate time differences, meaning that the location of the AE signal source is determined from the time differences at which the signals exceed the preset thresholds. Although the threshold-based method is widely used in various AE detection instruments, its location accuracy is relatively unstable. With continuous advances in signal processing, WT, the S transform, and other methods have been employed in AE signal source location with better performance. Jiao [13] applied the Gabor WT to AE signal source location and achieved high accuracy. Quan [14] extracted the time corresponding to the maximum of the main-band energy curve of AE signals and combined the time differences with a three-dimensional spatial location method, thereby realizing accurate location of the AE signal source. Fan [15] used time-frequency analysis to identify the Lamb wave and accomplished AE signal source location on an aluminum sheet based on TDOA. Sai [16] obtained the location of the AE signal source through a WT-based spatial spectrum, achieving considerable performance in both real-time operation and location accuracy. By setting two time windows for automatic detection of the signal's arrival time, Sedlak [17] managed to locate AE signals in tensile and compression tests of metal sheets while maintaining high accuracy under strong noise. To sum up, the TDOA method is the most classical and effective location algorithm, but it requires parameters that better characterize the signal source to further improve location accuracy.
WT is effective in extracting characteristic parameters and mitigating the influence of interference [18]. Besides, the AE signal is highly sensitive to various impacts and damages; various AE sources are mixed in the data acquired from pressure vessels, which calls for denoising in AE signal processing [19]. Given this situation, we developed a WT-based method for locating the AE signal source on an aluminum alloy tank. An AE experimental platform based on a scaled-down aluminum alloy tank was built, and the AE signal generated during crack extension of the tank was simulated by lead breaking. Wavelet entropy was used as the criterion for selecting the wavelet basis function, and kurtosis was used as the criterion for selecting the number of decomposition layers. The acquired AE signal was denoised using the WT. Then, the propagation velocity of AE signals on the sidewall of the tank was measured. Location was performed on 13 AE signal source spots on the sidewall of the tank using WT-TDOA, taking advantage of the time-domain invariance of the wavelet decomposition. The experimental results were compared with those obtained by conventional TDOA, and the comparison revealed that WT-TDOA significantly outperformed conventional TDOA in terms of location accuracy.


The novelty of this research work can be summarized in the following aspects:

(1) The problem of locating cracks in an aluminum alloy tank is explored based on the AE detection technique, effective ways of denoising the AE signal are explored, and accurate location of the AE signal source on the sidewall of the aluminum alloy tank is realized.

(2) An improved TDOA method is developed based on the time-domain invariance of the wavelet decomposition. This method can be used in online research on AE signal source location.

(3) Regarding AE signal denoising, the wavelet basis function is selected based on wavelet entropy, and the number of wavelet decomposition layers is determined based on kurtosis. Thus, effective ways are found for selecting the wavelet basis and determining the number of decomposition layers.

The remainder of this paper is organized as follows: Sect. 2 provides the basic principles of WT and TDOA. Section 3 describes the implementation procedure of the method proposed in this paper. Section 4 presents the key results of this study. Finally, the work is concluded in Sect. 5.

2 Basic Methods

2.1 Wavelet Transform (WT)

In WT, signals are represented through scaling and shifting of wavelet basis functions [20]. There are two types of transform: the discrete WT and the continuous WT. As a timescale analysis method, WT is characterized by multi-resolution analysis. A wavelet function is a function that is localized in both the time domain and the frequency domain, with its energy highly concentrated in the time domain. Suppose there is a function $\psi(t)$ whose Fourier transform is $\hat{\psi}(\omega)$. If the localization (admissibility) condition of formula (1) is satisfied, then $\psi(t)$ is a basic wavelet.

$$\int_{\mathbb{R}} |\hat{\psi}(\omega)|^{2}\,|\omega|^{-1}\,d\omega < +\infty \tag{1}$$

The function $\psi_{ab}(t)$, obtained by scaling and shifting the basic wavelet, is called a wavelet function, as in formula (2):

$$\psi_{ab}(t) = |a|^{-1/2}\,\psi\!\left(\frac{t-b}{a}\right) \tag{2}$$

where $a$ is the scale parameter and $b$ is the position parameter.

The wavelet transform of a signal $f(t)$ is its expansion on the wavelet function, as in formula (3):

$$W_f(a,b) = \langle f(t), \psi_{ab}(t)\rangle = |a|^{-1/2}\int_{\mathbb{R}} f(t)\,\psi\!\left(\frac{t-b}{a}\right)dt \tag{3}$$

The frequency-domain expression is shown in formula (4):

$$W_f(a,b) = \frac{|a|^{-1/2}}{2\pi}\int_{\mathbb{R}} \hat{f}(\omega)\,\hat{\psi}(a\omega)\,e^{ib\omega}\,d\omega \tag{4}$$

The inverse wavelet transform reconstructs the signal, as shown in formula (5):

$$f(t) = C_{\psi}^{-1}\iint_{\mathbb{R}^{2}} W_f(a,b)\,\tilde{\psi}\!\left(\frac{t-b}{a}\right)a^{-2}\,da\,db \tag{5}$$

In formula (5), $\tilde{\psi}$ represents the dual function of the wavelet function.

WT is suitable for time-frequency analysis of non-stationary signals. It can effectively extract information from a signal and perform multi-scale analysis of functions or signals through scaling and shifting, thereby solving problems that the Fourier transform cannot. Wavelet theory has developed rapidly in recent years and has been widely applied in signal processing, noise reduction, pattern recognition, and image processing [21].
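As an illustration of formula (3), the sketch below evaluates the continuous transform numerically with a Mexican-hat (Ricker) basic wavelet in plain NumPy. The wavelet choice, the test burst, and the scale values are all hypothetical and not taken from the paper; the point is only that the response peaks at the scale matched to the signal frequency.

```python
import numpy as np

def ricker(t):
    """Mexican-hat (Ricker) wavelet, a common real-valued basic wavelet."""
    return (1.0 - t ** 2) * np.exp(-t ** 2 / 2.0)

def wt_point(f, t, a, b):
    """Numerical approximation of formula (3):
    W_f(a, b) = |a|^(-1/2) * integral of f(t) * psi((t - b) / a) dt."""
    dt = t[1] - t[0]
    return abs(a) ** -0.5 * np.sum(f * ricker((t - b) / a)) * dt

# Hypothetical test signal: a 50 Hz burst centered at t = 0.5 s
t = np.linspace(0.0, 1.0, 2000)
f = np.cos(2 * np.pi * 50 * t) * np.exp(-((t - 0.5) / 0.05) ** 2)

# The wavelet response at b = 0.5 is largest at the scale matched to 50 Hz
scales = [0.001, 0.003, 0.01]
responses = [abs(wt_point(f, t, a, b=0.5)) for a in scales]
```

In a full implementation, `wt_point` would be evaluated over a grid of scales and positions to obtain the complete time-scale map described in the text.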

2.2 TDOA

In TDOA, the location of the signal source is determined by measuring the time difference between two spots in the transmission of the same signal. It is the most essential signal source location method [22]. One-dimensional linear location and two-dimensional planar location are often used, depending on the spatial relation between the signal source and the sensors. Among them, one-dimensional linear location, also known as straight-line location, is the simplest positioning method. Two-dimensional planar location requires at least three sensors and two groups of time differences; to obtain a unique solution, four sensors are usually configured. The sensors are usually arranged in simple patterns, such as triangles or rectangles, and the propagation speed of the signal is required to be equal in all directions of the plane.


Fig. 1 Schematic diagram of the planar location

Assume that there is a signal source Q(X, Y) at an unknown position on the plane S, and four sensors of the same type are placed at S1(x1, y1), S2(x2, y2), S3(x3, y3), and S4(x4, y4), as shown in Fig. 1. Assuming that the propagation speed of the signal is v, the times at which the signal from Q reaches the four sensors are t1, t2, t3, and t4, respectively, as shown in formulae (6)-(9):

$$t_1 = \sqrt{(X-x_1)^2 + (Y-y_1)^2}\,/\,v \tag{6}$$

$$t_2 = \sqrt{(X-x_2)^2 + (Y-y_2)^2}\,/\,v \tag{7}$$

$$t_3 = \sqrt{(X-x_3)^2 + (Y-y_3)^2}\,/\,v \tag{8}$$

$$t_4 = \sqrt{(X-x_4)^2 + (Y-y_4)^2}\,/\,v \tag{9}$$

The time differences $\Delta t_{13}$ in the horizontal direction and $\Delta t_{24}$ in the vertical direction can be calculated as formulae (10) and (11), respectively:

$$\Delta t_{13} = \left(\sqrt{(X-x_1)^2+(Y-y_1)^2} - \sqrt{(X-x_3)^2+(Y-y_3)^2}\right)/\,v \tag{10}$$

$$\Delta t_{24} = \left(\sqrt{(X-x_2)^2+(Y-y_2)^2} - \sqrt{(X-x_4)^2+(Y-y_4)^2}\right)/\,v \tag{11}$$

Hyperbola 1 can be calculated from the time difference $\Delta t_{13}$ with which the signal reaches S1 and S3, and hyperbola 2 can be calculated from the time difference $\Delta t_{24}$ with which the signal reaches S2 and S4. The signal source Q(X, Y) is at the

Fig. 2 Schematic diagram of the planar positioning

intersection of the two hyperbolas, and its coordinates can be obtained by solving the equations. Cylindrical location, a special case of planar location, is applied to locate the acoustic emission source on the tank sidewall, as shown in Fig. 2. When the propagation speed is constant in all directions, the AE signal propagates on the cylinder in the same way as it does on the plane, except that the edges AB and CD are actually connected: after the AE signal travels from M to P, it continues from P to N. Overall, accurate extraction of the signal's arrival time at each sensor is a prerequisite for implementing TDOA. However, the accuracy of AE signal source location can be affected by many factors, such as inconsistent signal propagation speed, time latency, environmental noise, and electromagnetic interference in real-world environments. Therefore, studying the attenuation characteristics and denoising methods of AE signals is necessary.
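The planar TDOA geometry above can be sketched numerically. In the hedged example below, the sensor coordinates, wave speed, and source position are all assumed values; the intersection of the two hyperbolas of formulae (10)-(11) is found by a dense grid search, a simple stand-in for solving the equations analytically.

```python
import numpy as np

# Hypothetical layout: four sensors S1..S4 (m) on the plane, assumed wave speed (m/s)
sensors = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
v = 306.0

true_src = np.array([0.32, 0.57])                     # source Q(X, Y), known here only for checking
t = np.linalg.norm(sensors - true_src, axis=1) / v    # arrival times, formulae (6)-(9)
dt13 = t[0] - t[2]                                    # time difference for the S1/S3 pair
dt24 = t[1] - t[3]                                    # time difference for the S2/S4 pair

# Grid search: minimize the squared residual of the two hyperbola equations
xs = np.linspace(0.0, 1.0, 401)
X, Y = np.meshgrid(xs, xs)
d = [np.hypot(X - sx, Y - sy) for sx, sy in sensors]
residual = ((d[0] - d[2]) / v - dt13) ** 2 + ((d[1] - d[3]) / v - dt24) ** 2
iy, ix = np.unravel_index(np.argmin(residual), residual.shape)
estimate = np.array([xs[ix], xs[iy]])                 # intersection of the two hyperbolas
```

With noise-free time differences the recovered position coincides with the true source to within the grid resolution; in practice, measurement noise in the time differences would shift the residual minimum.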

3 Proposed Method

The procedure of this study is shown in Fig. 3.

3.1 Experimental Platform for Data Acquisition

The WD broadband AE sensor produced by PAC was used to acquire the AE signals during the experiments, as shown in Fig. 4a. The key parameters of the AE sensor are given in Table 1. The preamplifier used was the PAC 2/4/6 preamplifier, shown in Fig. 4b; its gain can be set to 20, 40, or 60 dB, and its bandwidth range is 10–2500 kHz. The gain was set to 40 dB during the experiments. The SAEU2S dual-channel portable AE collector was used for signal collection, as shown in Fig. 4c. Its sampling precision is 16 bits, and its maximum sampling frequency is 10 MHz.


Fig. 3 Technical route of the proposed method: bench setup → signal acquisition (lead-breaking) → denoising, velocity measurement, and attenuation analysis → location

Fig. 4 AE signal acquisition system: a AE sensor; b preamplifier; c collector

Table 1 Key parameters of the WD broadband AE sensor
Sensor model: WD; sensor size: 18 × 17 mm; operating temperature: −65 to 175 °C; center frequency: 55 kHz; frequency range: 100–1000 kHz; interface form: dual BNC


Fig. 5 Experimental platform

The constructed experimental platform includes the experimental object (an aluminum alloy tank), the data collection software, and the signal acquisition hardware, as shown in Fig. 5. The signal acquisition hardware consisted of an AE sensor, a preamplifier, and a data collection card. Considering the uncertainty of AE signals during crack extension, a lead-breaking signal was used to simulate the AE signal source in this experiment. The lead-breaking test was carried out according to the GB/T 18182-2000 standard. The breaking signals of a pencil lead of 0.5 mm diameter and HB hardness were used to simulate the AE signals. In the experiments, the length of the stretched pencil lead was about 2.5 mm, and its angle to the surface of the tank was about 30°.

3.2 Attenuation Characteristic Analyzing of AE Signals

The attenuation characteristic of AE signals is one of the key aspects of their propagation characteristics. AE technology has been widely used in recent years. However, it is difficult to establish a connection between the fault source and the characteristic signal in real equipment inspection, as the signal undergoes complicated attenuation, reflection, scattering, and mode transitions inside the machine due to the complexity of the structure and the different propagation modes of the signal in different materials. Moreover, only a small number of studies have investigated the attenuation characteristics of AE signals. All these factors have seriously hindered the development of acoustic emission technology. Research on the propagation and attenuation characteristics of AE signals in pressure vessels, such as oil tanks, is of great significance for identifying AE sources and improving location accuracy.


To exclude the interference of other factors in the experiment, one AE sensor was kept at a fixed location and the other sensor was shifted to change its distance to the lead-breaking spot. To ensure the stability of the signal coming from the signal source, only the acquired signals whose intensities fell into a certain range were taken as valid signals that could be used for subsequent analysis. The average signal intensity was obtained based on the data of multiple tests.

3.3 Noise Reduction of AE Signals

The AE signal is highly sensitive to various impacts and damages. In on-site monitoring of a pressure vessel, signals generated by a variety of AE sources are acquired, so various noises must be filtered out of the signal to achieve accurate positioning of the target signal source, and it is essential to choose the right denoising method. Wavelet analysis and empirical mode decomposition (EMD) are commonly used and effective noise reduction methods in signal processing. To achieve effective isolation of the acoustic emission signal in the presence of noise interference, AE signals generated under different working conditions of the tank were collected and their time-domain and frequency-domain characteristics were analyzed. The denoising study was conducted with both wavelet denoising and EMD denoising. In wavelet threshold denoising, the AE signals are first decomposed with wavelet basis functions, as in Fig. 6, and the raw signal and the noise are then separated according to their behavioral differences; among the threshold strategies, wavelet soft threshold denoising yields the smoothest signal. Based on the inherent timescale characteristics of signals, EMD decomposes a complex signal into several eigenmode components and one residual. EMD imposes no requirements on the selection of basis functions and is suitable for analyzing and processing nonlinear and non-stationary signals. The intrinsic mode functions (IMFs) produced by EMD contain the local characteristics of the source signal on different timescales, and IMFs with higher frequencies are obtained earlier. The EMD-based denoising method is equivalent to frequency-selective filtering, and noise usually appears as high-frequency components of the signal. Therefore, it is feasible to suppress the noise to some extent by removing the first one or several intrinsic mode components of the signal.
Wavelet soft threshold denoising and EMD denoising were both applied in this study, and the results were evaluated using the signal-to-noise ratio (SNR) and the root-mean-square error (RMSE).

Fig. 6 Flowchart of the wavelet threshold denoising: original signal → wavelet decomposition → threshold denoising → wavelet reconstruction → denoised signal
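The wavelet threshold denoising pipeline of Fig. 6 can be sketched in plain NumPy. For self-containment, the sketch below uses a single-level Haar decomposition and the universal threshold (simplifying assumptions: the paper's experiments instead select the fk22 basis and eight decomposition layers, which a wavelet library would provide).

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar DWT: approximation and detail coefficients."""
    x = x[: len(x) // 2 * 2]
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def haar_idwt(a, d):
    """Inverse of haar_dwt: reconstruct the signal from a and d."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def soft(c, thr):
    """Soft thresholding: shrink coefficients toward zero by thr."""
    return np.sign(c) * np.maximum(np.abs(c) - thr, 0.0)

# Hypothetical noisy test signal
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 1024)
clean = np.sin(2 * np.pi * 5 * t)
noisy = clean + 0.3 * rng.standard_normal(t.size)

a, d = haar_dwt(noisy)
sigma = np.median(np.abs(d)) / 0.6745          # noise level from detail coefficients
thr = sigma * np.sqrt(2.0 * np.log(noisy.size))  # universal threshold
denoised = haar_idwt(a, soft(d, thr))
```

The same decompose / threshold / reconstruct structure carries over unchanged when a deeper decomposition and a different basis are used.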


SNR is the ratio of the useful component to the noise component in the signal. It is calculated as formula (12). SNR reflects the quality of the signal and serves as an important reference for the accuracy and reliability of the denoising method.

$$\mathrm{SNR} = 10\,\lg\!\left(\sum_{i=1}^{n} x(i)^{2} \Big/ \sum_{i=1}^{n} \big(x(i)-\hat{x}(i)\big)^{2}\right) \tag{12}$$

In formula (12), $x(i)$ represents the original signal, $\hat{x}(i)$ is the estimated signal after denoising, and $n$ is the length of the signal. A larger SNR indicates less residual noise in the denoised signal.

RMSE reflects the average deviation of the denoised signal from the original signal. It is calculated as formula (13). RMSE measures the similarity between the original and denoised signals.

$$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n} \big(x(i)-\hat{x}(i)\big)^{2}} \tag{13}$$

In formula (13), $x(i)$ is the original signal, $\hat{x}(i)$ is the estimated signal after denoising, and $n$ is the signal length. The smaller the RMSE, the more detail of the source signal is retained after denoising.
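Formulas (12) and (13) translate directly into code. The sketch below implements both indicators in NumPy; the test signal and the constant-offset "denoised" estimate are hypothetical.

```python
import numpy as np

def snr_db(x, x_hat):
    """Formula (12): SNR = 10 * lg( sum x(i)^2 / sum (x(i) - x_hat(i))^2 )."""
    return 10.0 * np.log10(np.sum(x ** 2) / np.sum((x - x_hat) ** 2))

def rmse(x, x_hat):
    """Formula (13): RMSE = sqrt( (1/n) * sum (x(i) - x_hat(i))^2 )."""
    return float(np.sqrt(np.mean((x - x_hat) ** 2)))

# Toy check: an estimate offset from the original by a constant 0.01
x = np.sin(np.linspace(0.0, 2.0 * np.pi, 1000))
x_hat = x + 0.01
```

For a constant offset of 0.01 the RMSE is exactly 0.01, which is a quick sanity check when wiring these indicators into a denoising comparison.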

3.4 Location of AE Signal Source

There are two types of AE waves: transverse waves and longitudinal waves. Longitudinal waves have a faster propagation speed and a smaller amplitude, whereas transverse waves have a slower propagation speed and a larger amplitude. The selection of the amplitude threshold has a great influence on the location accuracy of TDOA, while WT analyzes signals in the frequency domain. Thus, the location errors caused by an improper amplitude threshold can be avoided by establishing the behavioral pattern of the signal in the frequency domain. Therefore, WT-based TDOA is applied to locate the AE source in this study. The flowchart of the proposed method is shown in Fig. 7.

(1) Speed measurement of AE signals on the surface of the aluminum alloy tank.

(2) Denoising of the AE signals.

(3) Selection of an appropriate wavelet basis function to perform three-layer decomposition of the signal, obtaining the approximation coefficient cA3 and the detail coefficients cD3, cD2, and cD1, as shown in Fig. 8.

(4) Calculation of the ratio of the energy of each component to the original energy after the wavelet decomposition. By the definition of WT, the decomposition does not affect the time domain: the waveform frequency of each detail coefficient is half that of the previous level, while the total length of time remains unchanged. The time corresponding to the spot with the largest amplitude is taken as the arrival time of the wave in that mode. After WT, the frequency content of each component changes, but the time axis remains unchanged and the contained noise is reduced significantly. Therefore, it is feasible to use the wavelet component accounting for a relatively large share of the energy in place of the original signal for location analysis.

(5) Correlation analysis of the wavelet-transformed signal to obtain the time differences with which the signal reaches the sensors.

(6) With the wave velocity and time differences available, determination of the location of the AE signal source by solving the hyperbolic equations.

(7) Error analysis of the obtained results.

Fig. 7 Flowchart of acoustic emission source positioning

Fig. 8 Result of three-layer wavelet decomposition
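Step (5) of the procedure above, estimating the arrival-time difference by correlation analysis, can be sketched as follows. The burst waveform, carrier frequency, and sample lag are hypothetical; only the 2 MHz sampling rate follows the experiments.

```python
import numpy as np

fs = 2_000_000                       # 2 MHz sampling rate, as in the experiments
t = np.arange(0.0, 0.002, 1.0 / fs)  # 2 ms record
# Hypothetical AE-like burst: a 150 kHz tone under a Gaussian envelope
burst = np.sin(2 * np.pi * 150e3 * t) * np.exp(-((t - 0.0005) / 0.0001) ** 2)

true_lag = 120                       # extra travel time to sensor 2, in samples
s1 = burst
s2 = np.roll(burst, true_lag)        # delayed copy seen by the second sensor

# The lag at the cross-correlation maximum gives the arrival-time difference
corr = np.correlate(s2, s1, mode="full")
est_lag = int(np.argmax(corr)) - (len(s1) - 1)
dt = est_lag / fs                    # time difference in seconds
```

In the proposed method this correlation would be applied not to the raw signals but to the dominant-energy wavelet component of each sensor channel, which suppresses the noise that otherwise distorts the correlation peak.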

4 Experiments and Results

4.1 Attenuation Characteristic Analysis

A coordinate system was established on the smooth sidewall of the aluminum alloy tank in order to analyze the attenuation characteristics of AE signals. Starting from a randomly selected origin O, axes were drawn along the horizontal, vertical, and oblique (45°) directions on the sidewall. The arrangement of the sensors is shown in


Fig. 9 Sensor arrangement for attenuation characteristic analysis

Table 2 Average signal intensity (dB) at different positions

Coordinate value/cm    1      5      10     15     20     25     30     35     40
Horizontal             92.30  87.98  85.04  84.62  81.18  80.38  80.02  78.48  75.74
Vertical               94.58  87.68  83.40  82.34  81.52  80.46  79.26  79.00  78.36
Oblique                90.66  86.50  82.84  78.50  77.92  77.72  75.46  74.90  74.34

Fig. 9. The lead-breaking signals at the origin were used to simulate the AE signals, and the two sensors were well coupled to the sidewall with a coupling agent. Sensor 1 was fixed about 7 cm from O in the lower-left direction, and sensor 2 was used to measure the signal intensity at different spots on each coordinate axis. For the data acquired by sensor 1, signals whose intensity fell into the range of 88.5–89.5 dB were considered effective, and the valid signals at different positions on the three axes were recorded by sensor 2. The sampling frequency was set to 2 MHz in this experiment. After three repeated measurements, the average intensity of the AE signals at each position on each axis was obtained, as given in Table 2, and the attenuation curves along the different directions are presented in Fig. 10. In Fig. 10, the red, blue, and black curves represent the attenuation of the AE signals along the horizontal, vertical, and oblique directions, respectively. The amplitude of the AE signal diminishes nonlinearly with distance as the signal travels along the sidewall of the tank. Among any three measurement spots at equal distance from the origin on the three axes, the maximum amplitude difference was about 6.12 dB. Considering experimental errors, it is reasonable to consider the attenuation rate of an AE signal traveling on the tank to be roughly the same in all directions. Therefore, the attenuation rate is regarded as a constant in this study.


Fig. 10 Attenuation curves of three directions

4.2 Denoising of AE Signals

Three axes were drawn on the sidewall of the tank in the same way as the coordinate system described in Sect. 4.1. The AE sensor was fixed at the origin of the coordinate system and acquired AE signals sequentially under the following conditions: a quiet environment; the air compressor idling; the air compressor operating continuously; and lead-breaking on the X axis 5 cm from the origin. The AE sensor was then fixed near the slag crack marked on the sidewall of the tank and acquired AE signals sequentially under the following conditions: a quiet environment; the air compressor idling; and the air compressor operating continuously. The sampling frequency in this experiment was set to 2 MHz. The time-domain waveforms and frequency-domain characteristics of the signals were analyzed, and the amplitude values and frequency ranges of the AE signals under the different conditions were obtained. When the air pressure in the tank was about 0.25 MPa, the AE signal acquired at the origin of the coordinate system was used to illustrate the denoising process. The time-domain waveform and spectrum of this signal are shown in Fig. 11a and 11b, respectively.

4.2.1 Wavelet Soft Threshold Denoising

The components extracted by wavelet decomposition represent the characteristics of the signal better when the chosen wavelet basis function is more similar to the

Fig. 11 a Time-domain waveform of the AE signal. b Spectrogram of the AE signal

signal. Thus, the selection of the wavelet basis function has a great influence on the results of the decomposition. Besides, wavelet entropy is a measure of the similarity between the signal and a wavelet basis function. Therefore, the wavelet entropy values of the signal under various wavelet basis functions are, respectively,


Table 3 Sorted wavelet entropy values

Function  Entropy   Function  Entropy   Function  Entropy   Function  Entropy
Fk22      0.4938    Bior3.7   0.5121    Bior3.3   0.5378    Sym2      0.6378
Sym7      0.4944    Bior3.5   0.5130    Bior2.6   0.5468    Rbio2.4   0.6462
Bior6.8   0.4951    Sym5      0.5136    Rbio2.6   0.5504    Coif1     0.6602
Rbio6.8   0.4965    Bior3.9   0.5139    Sym4      0.5556    Rbio3.5   0.6648
Dmey      0.4990    Db8       0.5153    Bior2.4   0.5604    Bior3.1   0.6731
Db9       0.5000    Db6       0.5190    Fk6       0.5685    Fk4       0.8050
Coif3     0.5027    Db4       0.5193    Rbio1.3   0.5710    Rbio2.2   0.8948
Db7       0.5033    Sym6      0.5226    Rbio3.9   0.5723    Bior1.5   0.9211
Rbio2.8   0.5036    Bior5.5   0.5232    Bior2.2   0.5813    Bior1.3   0.9267
Db10      0.5049    Bior2.8   0.5279    Rbio5.5   0.5818    Db1       0.9361
Coif5     0.5077    Rbio1.5   0.5315    Db3       0.5867    Bior1.1   0.9361
Fk14      0.5079    Bior4.4   0.5319    Sym3      0.5867    Rbio1.1   0.9361
Coif4     0.5086    Db5       0.5328    Rbio3.7   0.5887    Rbio3.3   0.9467
Sym8      0.5088    Fk8       0.5328    Rbio4.4   0.6101    Rbio3.1   1.2701
Fk18      0.5100    Coif2     0.5338    Db2       0.6378

calculated under the same number of decomposition layers. With four decomposition layers, the sorted wavelet entropy values are given in Table 3. As shown in Table 3, the wavelet basis function fk22 has the minimum wavelet entropy, which means fk22 is the most suitable function for denoising the AE signals. Kurtosis is a statistical quantity that reflects the distribution characteristics of a signal; it captures the impact characteristics of the lead-breaking signal. To obtain the best number of decomposition layers, the signal was decomposed with fk22, and the sorted kurtosis values corresponding to different numbers of decomposition layers are presented in Table 4. Table 4 shows that the best number of decomposition layers is eight, as it corresponds to the maximum kurtosis. Overall, the wavelet basis function used in the experiment was fk22 with eight decomposition layers, and the values of the two indicators were calculated as follows.

Table 4 Sorted kurtosis values

Layers     8         9         7         10        11        12
Kurtosis   6371.593  6371.483  6371.377  6371.365  6371.289  6371.26
Layers     6         5         4         3         2         1
Kurtosis   6366.326  6339.159  6319.164  6308.077  6298.711  6297.152


SNR = 34.0046, RMSE = 7.9821e−06

4.2.2 EMD-Based Denoising

In the EMD-based denoising method, several high-frequency IMFs are treated as noise and removed before reconstructing the signal. Since the number of invalid components is not known in advance, removing the first IMF and removing the first two IMFs were compared. In the experiment, the AE signal was first decomposed by EMD, yielding 21 IMFs in total. The first IMF (Case 1) and the first two IMFs (Case 2) were discarded, respectively, and the remaining components were used to reconstruct the denoised signal. The values of the two denoising indicators for the WT-based and EMD-based methods were calculated and are compared in Table 5. As shown in Table 5, the SNR of the WT-based denoising method is far larger than that of the EMD-based method in both cases, and its RMSE is far smaller, which means that the WT-based method introduces little distortion and the denoised signal remains very similar to the original one. For Cases 1 and 2, the two indices show that the second IMF contains much useful information while the first one is mainly noise; thus, only the first IMF should be removed in EMD-based denoising. Overall, the WT-based denoising method was applied to denoise the AE signals in this experiment.

4.3 AE Signal Source Location

In order to detect the characteristics of AE signals on the sidewall of the aluminum alloy tank, the lead-breaking signal was used to simulate the AE signal in accordance with GB/T 18182-2000. The arrangement scheme of the sensors was designed first, and the velocity of the AE signal was then measured. Thirteen points of AE sources were marked on the sidewall of the tank for location.

Table 5 Comparison of two methods

Evaluation   WT-based denoising   EMD-based (Case 1)   EMD-based (Case 2)
SNR          34.0046              9.3749               8.0451
RMSE         7.9821e−06           11.0984              17.4875


Fig. 12 Arrangement scheme of sensors (sensors 1–4 placed at 270°, 0°, 90°, and 180° around the circumference; C = 192.5 cm)

4.3.1 Arrangement Scheme of Sensors

In order to detect the characteristics of AE signals in the sidewall of the aluminum alloy tank, the lead-breaking signal was used to simulate the AE source, following the guidance of GB/T 18182-2000. The arrangement scheme of the sensors was designed first, and the velocity of the AE signal was then measured. Thirteen AE source points were marked on the sidewall of the tank for location (Fig. 12).

4.3.2 Velocity Measurement

Considering that the attenuation rates of the AE signal were roughly the same in all directions, as concluded in Sect. 4.1, a coordinate system was established on the sidewall of the tank as described in Sect. 4.1, and two AE sensors were placed at 10 cm and 40 cm along the Y-axis. Six lead-breaking experiments were repeated at the origin O according to the related standard, and the AE signals of the two sensors were acquired for calculation.

Acoustic Emission-Based Weld Crack In-situ Detection and Location …

67

Table 6 Time difference of six repeated experiments

Number                      1        2        3        4        5        6
Time difference (×1e−3 s)   0.9860   0.9753   0.9737   0.9850   0.6343   0.5997

Based on the previous research results, the fk22 wavelet basis function was applied to denoise the AE signals, with eight layers of decomposition. Correlation analysis was used to obtain the time lag corresponding to the maximum Pearson correlation coefficient. The results of the six experiments are shown in Table 6. Considering measurement error, the results of the fifth and sixth experiments were discarded. The average time difference was calculated from the remaining four tests as in formula (14), and the wave velocity was obtained as in formula (15).
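The correlation-based lag estimate used here can be sketched as follows. The delayed pulse and sampling rate below are illustrative stand-ins for the experimental recordings; only the 30 cm sensor spacing comes from the setup described above:

```python
import numpy as np

def time_difference(sig_a, sig_b, fs):
    """Lag (s) of sig_b relative to sig_a at the cross-correlation peak."""
    corr = np.correlate(sig_b, sig_a, mode="full")
    lag = np.argmax(corr) - (len(sig_a) - 1)
    return lag / fs

# Toy signals: the same decaying burst, arriving 100 samples later at sensor B.
fs = 1_000_000                       # illustrative sampling rate, Hz
n = 2048
pulse = np.exp(-np.arange(n) / 50.0) * np.sin(2 * np.pi * 0.05 * np.arange(n))
sig_a = np.zeros(4096); sig_a[200:200 + n] = pulse
sig_b = np.zeros(4096); sig_b[300:300 + n] = pulse

dt = time_difference(sig_a, sig_b, fs)   # 100 samples -> 1e-4 s
L = 0.30                                 # sensor spacing, 30 cm
v = L / dt                               # wave speed for these synthetic signals
```

With the real recordings, the same peak-lag estimate applied to the four valid tests yields the average in formula (14) and hence the velocity in formula (15).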

t = (t1 + t2 + t3 + t4)/4 = 9.8e−04 s    (14)

v = L/t = 306.1224 m/s    (15)

4.3.3 Location of AE Signal Source

Thirteen points were marked in the established coordinate system of the tank in total, and the precise locations of the 13 spots were measured as shown in Table 7.

Table 7 Locations of the 13 marking points (height and arc length are the measurements; the angle is the conversion value of the arc length)

Number   Height (cm)   Arc (cm)   Angle (°)   Coordinates
1        18.0          26.0       48.6        (18.0, 48.6)
2        38.3          37.6       70.3        (38.3, 70.3)
3        19.3          53.8       100.7       (19.3, 100.7)
4        36.8          64.8       121.2       (36.8, 121.2)
5        24.7          68.3       127.8       (24.7, 127.8)
6        29.0          87.3       163.3       (29.0, 163.3)
7        15.9          100.1      187.1       (15.9, 187.1)
8        36.0          119.7      223.8       (36.0, 223.8)
9        23.3          133.9      250.4       (23.3, 250.4)
10       40.3          144.1      269.4       (40.3, 269.4)
11       27.9          173.4      324.2       (27.9, 324.2)
12       15.9          183.1      342.4       (15.9, 342.4)
13       31.4          7.4        13.8        (31.4, 13.8)


Three repeated lead-breaking experiments were performed at each of the 13 marking points, and four AE signals from the four sensors were obtained for each point. The sampling frequency was 3 MHz. WT-TDOA was applied to locate the AE signal source as follows:

(a) Denoising of the collected AE signals with the fk22 wavelet basis function.
(b) Decomposition of the denoised AE signals with the fk22 wavelet basis function. According to the discussion above, three layers were used for the wavelet decomposition, giving the approximation coefficient cA3 and the detail coefficients cD3, cD2, and cD1, as shown in Fig. 13.
(c) Calculation of the energy ratio. The ratio between each wavelet decomposition component and the original signal was calculated. The cA3 component accounted for more than 70% of the energy of the original signal, the largest share among all components, and was therefore extracted.
(d) Time difference calculation. The correlation method described above was applied to calculate the time differences of the signal reaching the four sensors, with the cA3 component used in place of the original signal.
(e) Location.
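The energy-ratio step can be sketched with a three-level DWT. A Haar basis is used here for brevity in place of the fk22 wavelet, and the test signal is synthetic:

```python
import numpy as np

def haar_dwt_step(x):
    """One level of the Haar DWT: approximation and detail coefficients."""
    x = x[: len(x) // 2 * 2]
    cA = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    cD = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return cA, cD

def wavedec3(x):
    """Three-level decomposition: returns [cA3, cD3, cD2, cD1]."""
    cA1, cD1 = haar_dwt_step(x)
    cA2, cD2 = haar_dwt_step(cA1)
    cA3, cD3 = haar_dwt_step(cA2)
    return [cA3, cD3, cD2, cD1]

def energy_ratios(x):
    """Energy share of each component relative to the original signal."""
    total = np.sum(np.asarray(x, dtype=float) ** 2)
    return [np.sum(c**2) / total for c in wavedec3(x)]

# A low-frequency burst concentrates its energy in cA3.
t = np.arange(4096)
signal = np.sin(2 * np.pi * t / 512.0)
ratios = energy_ratios(signal)   # ratios[0] is the cA3 share
```

Because the transform is orthogonal, the shares sum to one; the component with the dominant share (cA3 in the experiment) is the one carried forward to the time-difference calculation.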

Fig. 13 Results of wavelet decomposition


Table 8 Statistics of location error

                                  Test 1            Test 2            Test 3
Method          Statistic         Height   Angle    Height   Angle    Height   Angle
                                  (cm)     (°)      (cm)     (°)      (cm)     (°)
TDOA            Mean              17.20    9.35     15.66    8.92     22.10    11.21
                Variance          128.8    102.5    78.09    32.15    106.9    108.8
WT-TDOA         Mean              11.73    7.44     11.73    7.44     13.13    7.35
                Variance          56.24    62.24    56.24    62.24    43.17    21.61
Average boost   Absolute value    5.74     1.91     3.93     1.48     8.97     3.86
                Percentage (%)    33.37    20.43    25.10    16.59    40.59    34.43

Since the time differences were obtained, the hyperbola solution method was used for signal source location. For comparison, the denoised signals were also used directly for location. The location results of the two methods are listed in Table 8. Among them, only two valid AE signals were acquired at the tenth marking point during the three experiments. The location errors of WT-TDOA and conventional TDOA in the three experiments are shown in Figs. 14 and 15, respectively, and the mean and variance of the location error are given in Table 9. The average height error of conventional TDOA reaches 22.10 cm and the average angle error 11.21°, whereas for WT-TDOA the maximum average height error and average angle error are only 13.13 cm and 7.44°, respectively. Overall, WT-TDOA outperformed conventional TDOA in location accuracy: every average error is reduced by at least 1.48, and the improvement percentage is at least 16.59%.
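The hyperbola (range-difference) equations can be solved in several ways; below is a simple grid-search sketch on a plane, with illustrative sensor positions and wave speed rather than the experimental geometry:

```python
import numpy as np

def locate_tdoa(sensors, tdoas, v, grid):
    """Grid-search solver for the TDOA hyperbola equations.

    sensors : (m, 2) positions; tdoas[i] = t_{i+1} - t_0 vs. sensor 0.
    Returns the grid point minimizing the range-difference residual."""
    best, best_err = None, np.inf
    for p in grid:
        d = np.linalg.norm(sensors - p, axis=1)
        err = np.sum(((d[1:] - d[0]) - v * np.asarray(tdoas)) ** 2)
        if err < best_err:
            best, best_err = p, err
    return best

# Toy setup: four sensors, a known source, exact time differences.
v = 3000.0                                   # illustrative wave speed, m/s
sensors = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
true_src = np.array([0.3, 0.7])
d = np.linalg.norm(sensors - true_src, axis=1)
tdoas = (d[1:] - d[0]) / v
xs = np.linspace(0.0, 1.0, 101)
grid = np.array([[x, y] for x in xs for y in xs])
est = locate_tdoa(sensors, tdoas, v, grid)
```

On the tank sidewall the search domain would be the unrolled height-angle coordinate system, but the residual being minimized is the same.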

Fig. 14 Location error of WT-TDOA: (a) angle error; (b) height error (polar plots, tests 1–3)

Fig. 15 Location error of conventional TDOA: (a) angle error; (b) height error (polar plots, tests 1–3)

5 Conclusion

An AE experimental platform was built based on a scaled-down aluminum alloy tank in this study. The attenuation characteristics, denoising method, and location method of AE signals on the aluminum alloy tank were explored. A WT-TDOA-based method was applied to signal source location, using the wavelet decomposition component with the largest energy share in place of the original signal; its location accuracy in height and angle was improved over the conventional TDOA method. Besides, the EMD-based and wavelet-decomposition-based denoising methods were compared in this work, and the wavelet-decomposition-based method showed the best performance on AE signal denoising according to the two evaluation indices, SNR and RMSE.

Table 9 Location results of WT-TDOA (for each of the 13 marking points: the measured height (cm) and angle (°), the calculated values for Tests 1–3, the corresponding percentage deviations, and the mean and variance of the deviations for each test)


Acknowledgements The authors would like to thank Dr. Gan Tian from Rocket Force Engineering University, Xi’an, China (710049). The work was supported by the National Natural Science Foundation of China (No. 51605372), the China Postdoctoral Science Foundation Funding (No. 2018T111052, 2016M602805), the National Natural Science Foundation of China (No. 51775409), and the Program for New Century Excellent Talents in University (No. NCET-13-0461). Besides, raw data of this study is unavailable for open access since the experimental aluminum alloy tank is confidential.

References

1. Tao D (2017) Research on aluminum alloy materials and bearing defects of high-speed train based on acoustic emission detection. Southwest Jiaotong University
2. Zhang W, Zhang X, Yang B et al (2013) Damage characterization and recognition of aluminum alloys based on acoustic emission signal. J Univ Sci Technol Beijing 35(5):626–633
3. Li M (2010) Acoustic emission detection and signal processing. Science Press
4. Iturrioz I, Lacidogna G, Carpinteri A (2014) Acoustic emission detection in concrete specimens: experimental analysis and lattice model simulations. Int J Damage Mech 23(3):327–358
5. Chen S, Yang C, Wang G, Liu W (2017) Ultrasonics (75):36–45
6. Chen G, Shen G, Li B (2005) Investigation of characteristics of acoustic emission sources for metallic pressure vessels. China Saf Sci J 15(1):98–103
7. Surgeon M, Wevers M (1999) Modal analysis of acoustic emission signals from CFRP laminates. NDT and E Int 32(6):311–322
8. Pei L (2009) Research on pipeline leakage diagnosis and localization based on modal acoustic emission. Beijing University of Posts and Telecommunications
9. Hao Y, Xing Z, Shao H et al (2011) Experiment research on acoustic emission source location of leakage of pressure pipelines. J Saf Sci Technol 07(6):140–144
10. Zhang P, Chang H, Yang J et al (2017) Application of BP neural network in localization of crack acoustic emission source in wind vane tower barrel. China Measur Test 43(9):106–111
11. Wang X (2009) Several key problem researches on monitoring cracks of hydraulic turbine blades based on acoustic emission technique. Shanghai Jiao Tong University
12. Yu J (2012) Research on acoustic emission processing algorithm of helicopter composite specimen. Harbin Institute of Technology
13. Jiao J, He C, Wu B, Fei R (2005) A new acoustic emission source location technique based on wavelet transform and mode analysis. Chin J Sci Instrum 26(5):482–485
14. Quan H, Dai Y, Wang P (2010) Acoustic emission analysis and location based on generalized S-transform. Acta Electronica Sin 38(2):371–375
15. Fan B, Yan X, Fu M et al (2011) The research on locating method of acoustic emission source based on the dispersive characteristics of lame wave in sheet. Chin J Solid Mech S1:283–287
16. Sai Y, Jiang M, Sui Q et al (2015) Acoustic emission location based on FBG array and MVDR algorithm. Opt Precision Eng 23(11):3012–3017
17. Sedlak P, Hirose Y, Khan SA et al (2009) New automatic localization technique of acoustic emission signals in thin metal plates. Ultrasonics 49(2):254–262
18. Ning JG, Chu L, Ren HL (2014) A quantitative acoustic emission study on fracture processes in ceramics based on wavelet packet decomposition. J Appl Phys 8(116)
19. Wang Z, Ren H, Ning J (2018) Acoustic emission source location based on wavelet transform de-noising. J Vib Shock 4(37):226–232, 248
20. Xu Y, Zhao G, Ma C, Zhang J (2015) New denoising method based on dual-tree complex wavelet transform and nonlinear time series. J Vib Shock 16(34):135–140


21. Fan B, Wang H, Xu B, Zhang Y (2015) Present research situation of the extraction and processing of weak acoustic emission signals under strong background noise. J Vib Shock 16(34):147–155
22. Teng P, Lombard A, Kellermann W (2010) Disambiguation in multidimensional tracking of multiple acoustic sources using a Gaussian likelihood criterion. In: 2010 IEEE international conference on acoustics, speech and signal processing, pp 145–148

The Research of Real-Time Welding Quality Detection via Visual Sensor for MIG Welding Process Junfeng Han, Zhiqiang Feng, Ziquan Jiao, and Xiangxi Han

Abstract In this paper, aiming at online monitoring of weld quality during the welding process, a real-time monitoring system based on visual sensing technology is proposed. Based on the SDM method, the edge features of the molten pool image are extracted, and the pool area, width, half-length, and back width are further obtained. Based on the extracted image features and the welding process parameters, a random forest penetration model is built to classify and identify the penetration status during welding and to predict the weld back penetration width by regression. The classification and recognition rate of the random forest penetration model is 89.8%. Keywords Automatic welding · Visual sensor · Prediction model

1 Introduction

As one of the basic engineering technologies of the modern manufacturing industry, welding is a key technology of modern ship and ocean engineering manufacturing. During the construction of ships and ocean engineering structures, welding accounts for about 30–40% of the total construction time of the structure, and the welding quality of a ship is an important index for evaluating the quality of its construction. Especially since the twenty-first century, the construction of large numbers of new ships and marine engineering structures and the wide use of new materials have brought great challenges to ship welding technology, which has an important impact on ship construction quality. Using digital technology to manage the welding process, welding parameters, environmental conditions, welding materials, and construction

J. Han · Z. Feng (B) · Z. Jiao (B) · X. Han School of Mechanical and Marine Engineering, Beibu Gulf University, Qinzhou 535011, China e-mail: [email protected] Z. Jiao e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2021 S. Chen et al. (eds.), Transactions on Intelligent Welding Manufacturing, https://doi.org/10.1007/978-981-33-6502-5_4


76

J. Han et al.

personnel has become an important means of improving ship construction quality and of managing ship construction scientifically. Robotic automatic welding is the inevitable trend for the future of welding manufacture. However, in actual robot welding, thermal deformation, gap changes, and workpiece factors such as machining and assembly errors can cause the pre-taught path to deviate from the actual welding position, degrading the welding quality. By using visual and sound sensors, it is easy to collect images and sound during the welding process and to control the robot to execute corrective motions; because the robot can then perceive external information and apply feedback control, quality control can be achieved. This research plays an important role in the development of intelligent manufacturing and in improving the automation and intelligence of the welding process [1–4]. Visual information refers to the visible light emitted from the welding area during welding. It contains a rich amount of information and can reveal the weld pool and the dynamic changes of the welding process. After noise reduction and removal of arc interference, characteristic information can be extracted, and effective judgments can be made on the state of the molten pool, the arc shape, the weld position and form, and the joint conditions. The hardware required to obtain visual information in welding is simple, and the system is easy to build and maintain, so predicting welding quality from visual information is widely used [5–7].

2 Experimental Establishment

Figure 1 shows a schematic diagram of the welding experiment system. The welding tests use gas tungsten arc welding (GTAW). The system consists mainly of the welding robot, the electric control cabinet, and an industrial personal computer (IPC), together with image and sound sensors and auxiliary equipment such as shielding-gas cylinders, tanks, the power supply, the wire feeder, and a two-dimensional workpiece platform. The welding robot performs the welding operation; the image and sound sensors acquire image and sound signals of the dynamic welding process and transfer them to the IPC. The IPC adjusts the welding parameters and transmits parameter commands to the robot through the electric control box.

2.1 Weld Locating and Tracking Sensor

A high-performance vision sensor based on linear-structured light was chosen for this system. The main material of the sensor shell is aluminum alloy, and the internal structure is carefully designed, so that the overall volume of the sensor can

The Research of Real-Time Welding Quality Detection …

77

Fig. 1 Framework of the experiment system

be kept within 104 × 29 × 112 mm, with a weight of around 585 g, so the sensor remains stable when mounted at the front of the robot (Fig. 2).

Fig. 2 Weld tracking sensor

Welding seam feature extraction based on structured laser light identifies image features directly from the deformation of the laser fringe. After calibration, accurate 2D data and relatively simple 3D data can be calculated. The detection principle is shown in Fig. 3.

Fig. 3 Schematic of structured light imaging

The laser projects linear-structured light obliquely onto the welding surface, where it appears as a laser line of a certain width. Because the CCD imaging plane and the laser form a certain angle, the laser line is deformed in the CCD image; the degree of deformation varies with the weld width, depth, and seam shape, and these changes can be identified by image processing. Combined with the camera calibration information and the robot position state, the current deviation of the robot can be obtained. Finally, the robot is controlled to perform corrective motion to realize seam tracking [8, 9].
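The 3D calculation behind this principle is a ray-plane intersection: the calibrated laser plane and the back-projected pixel ray together determine the surface point. A hedged sketch with made-up calibration values (not the sensor's actual calibration):

```python
import numpy as np

def triangulate(pixel_ray, plane_n, plane_d):
    """Intersect a back-projected camera ray with the calibrated laser plane.

    pixel_ray : direction of the ray from the camera center (camera frame).
    plane_n, plane_d : laser plane n·X = d in camera coordinates.
    Returns the 3D point on the workpiece surface."""
    t = plane_d / np.dot(plane_n, pixel_ray)
    return t * pixel_ray

# Hypothetical calibration: laser plane z = 0.5 m, ray through direction (0.1, 0, 1).
n = np.array([0.0, 0.0, 1.0])
d = 0.5
ray = np.array([0.1, 0.0, 1.0])
ray = ray / np.linalg.norm(ray)
point = triangulate(ray, n, d)       # point[2] ≈ 0.5, i.e. on the laser plane
```

Repeating this for every pixel on the extracted laser centerline yields the 2D/3D profile of the seam mentioned above.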

2.2 Tracking Process

Image sensors and other hardware devices are the basic components of the whole system, and the operation of the system requires corresponding supporting software to integrate these devices and process their data. The tracking control software is responsible for information acquisition and processing for the whole system and for sending control instructions. During operation, images acquired in real time are sent to the master computer, where the software processes them with the appropriate image processing algorithms. The error extracted between the torch and the seam is the deviation for the robot, and the robot trajectory is adjusted according to this deviation, ensuring control of the weld forming quality, as shown in Fig. 4.


Fig. 4 Welding seam tracking process

The welding seam tracking process is mainly divided into the following sequential steps: image acquisition and processing, extraction of weld feature information, and robot tracking control. Image acquisition obtains the image of the laser fringe on the weld from the outside; image quality determines the accuracy of subsequent image processing and, ultimately, weld quality. Image processing is an important link in the tracking process: the weld position deviation must be extracted from the weld image effectively and quickly and converted into motion control instructions according to the current robot position, which determines the effect of the corrective control and the final weld forming quality. Robot control converts the weld position information obtained in the image processing step into actual robot trajectory information and sends it to the robot at the appropriate time. These three steps form the main flow of weld tracking control and are discussed in detail below. The workflow is shown in Figs. 5 and 6.

Fig. 5 Workflow of this system


Fig. 6 The interface of the experiment system

2.3 Robot Communication

The seam tracking control software is installed on the industrial PC and communicates with the robot over a gigabit Ethernet cable using the TCP/IP protocol. Camera control instructions, image data, robot coordinate information, and position register variables are all transmitted over this network, so the whole system uses the industrial computer as the center of data transmission. The image sensor sends the acquired information to the computer, where the image processing software extracts the weld deviation information; combined with the image calibration results, this yields the current robot position deviation. The seam tracking system revises the robot trajectory based on the deviation values and sends them to the robot controller, which executes the path correction motion, as shown in Fig. 7. The seam tracking control software can communicate with ABB robots, while retaining an interface for communicating with other robots, facilitating the extension of the whole program.
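The deviation hand-off can be sketched with a plain TCP socket. This is a generic stand-in only: the real system uses the robot vendor's protocol and position registers, and the JSON packet format here is invented for illustration:

```python
import json
import socket
import threading

def serve_once():
    """Minimal stand-in for the robot controller: accept one correction packet."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))           # ephemeral port on loopback
    srv.listen(1)
    port = srv.getsockname()[1]
    received = {}

    def run():
        conn, _ = srv.accept()
        received.update(json.loads(conn.recv(1024).decode()))
        conn.close()
        srv.close()

    worker = threading.Thread(target=run)
    worker.start()
    return port, received, worker

def send_correction(port, dy, dz):
    """Send a lateral (dy) and vertical (dz) trajectory correction in mm."""
    with socket.create_connection(("127.0.0.1", port)) as s:
        s.sendall(json.dumps({"dy": dy, "dz": dz}).encode())

port, received, worker = serve_once()
send_correction(port, dy=0.42, dz=-0.13)
worker.join(timeout=2.0)
```

In the actual system the IPC plays the client role here, and the correction values come from the image processing and calibration steps described above.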


Fig. 7 Workflow communication diagram of weld tracking system

3 Image Processing

3.1 Welding Seam Feature Extraction

The main image processing flow for weld locating and tracking is shown in Fig. 8. Image processing algorithms with clear mathematical definitions are already provided in the OpenCV library and only need to be called through the corresponding interfaces; algorithms closely tied to the actual application environment, such as region of interest (ROI) acquisition, adaptive thresholding, and centerline extraction, need to be programmed according to the requirements. For a typical groove weld, the laser band is projected onto the surface of the workpiece and then reflected to the sensor for imaging, and the shape features of the

Fig. 8 Image processing flow


Fig. 9 Image processing result of butt joint

laser fringe formed are not the same. Therefore, different image feature extraction algorithms should be adopted according to the characteristics of the different weld fringes, so as to ensure the best effect (Fig. 9).
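The centerline extraction step can be sketched with the common grey-gravity (intensity centroid) method; the stripe image below is synthetic, not a real sensor frame:

```python
import numpy as np

def laser_centerline(img, threshold=50):
    """Column-wise grey-gravity centre of the laser stripe.

    img : 2D uint8 image. Returns, per column, the intensity-weighted row
    position of pixels at or above threshold (NaN where no stripe is found)."""
    h, w = img.shape
    rows = np.arange(h, dtype=float)[:, None]
    weights = np.where(img >= threshold, img.astype(float), 0.0)
    col_sum = weights.sum(axis=0)
    with np.errstate(invalid="ignore", divide="ignore"):
        centre = (weights * rows).sum(axis=0) / col_sum
    return np.where(col_sum > 0, centre, np.nan)

# Synthetic stripe: a bright Gaussian band whose centre drifts across columns.
h, w = 100, 120
img = np.zeros((h, w), dtype=np.uint8)
for c in range(w):
    centre = 40 + 0.1 * c
    img[:, c] = (255 * np.exp(-0.5 * ((np.arange(h) - centre) / 2.0) ** 2)).astype(np.uint8)
line = laser_centerline(img)
```

Sub-pixel centroids like these are what feed the triangulation and deviation calculations in the tracking loop.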

3.2 Welding Pool Feature Extraction

For pulsed welding of aluminum alloy, AC power is selected to ensure both welding quality and image quality: it guarantees sufficient heat input to the welding material while exploiting the cathodic cleaning effect of aluminum alloy welding to break the oxide film on the alloy surface. A pulsed waveform is selected for the power supply, so that enough heat input is ensured at the pulse peak, while the arc at the pulse base is weak, which is convenient for direct observation of the molten pool. Regarding image acquisition, at the peak time the arc brightness is high and the molten pool is completely covered by arc light, so this period is not suitable for imaging; at the initial stage of the base period, the pool has not yet recovered from the peak high current, and its surface is prone to waviness under the arc blowing force. Previous research showed that the features in the peak time period were much more obvious than in the base time period, and researchers have tended to focus on the peak time period of the pulse; this is not only good for extracting features but also removes redundant information. The visual sensing system is installed in the position shown in Fig. 10.

Fig. 10 Experiment designs: (a) the groove type; (b) the acquisition methods


Fig. 11 Image of weld pool

typical weld surface forming defects, such as surface oxidation pores, weld beads, and weld leakage defects. Therefore, it is very important to reduce noise, enhance the image, and extract the edges in order to obtain the geometric characteristic parameters reflecting the shape of the molten pool (Fig. 11).

The objective function of the supervised descent method (SDM) is the mean square error, a nonlinear least squares function. In other words, SDM obtains the direction of gradient descent, and the step scale in that direction, through multiple iterations, finally producing a regression model so that the objective function converges to its minimum with fast convergence, cleverly avoiding the Jacobian and Hessian computations required by Newton's method [26]. Both SDM and the Newton method are regression based: for an image, an initial shape is given and is regressed through continuous iteration to a position close to, or even equal to, the real shape. That is, starting from a given initial shape x0, x0 is returned to the correct shape x* by regression, i.e., by finding the Δx that minimizes f(x0 + Δx):

f(x0 + Δx) = ||h(d(x0 + Δx)) − φ*||²₂    (1)

where h(d(x0)) represents the nonlinear feature extraction function applied to the initial shape; in this paper, the 128-dimensional SIFT feature is used. φ* = h(d(x*)) denotes the SIFT features extracted at the true feature points of the image; x0 is the initial shape, generally the average of the true shapes of all the training samples:

x0 = (1/N) Σ_{i=1}^{N} x*_i    (2)
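The cascade of learned descent steps can be illustrated on a toy one-dimensional alignment problem. This is only a structural sketch: the 128-D SIFT feature h is replaced by a hypothetical polynomial feature, and the data are synthetic, so it shows the cascade idea rather than the paper's actual model:

```python
import numpy as np

rng = np.random.default_rng(1)

def h(x):
    """Stand-in feature function (the paper uses 128-D SIFT features)."""
    return np.stack([x, x**2], axis=-1)

# Training pairs: perturbed starting shapes x0 and true targets x_star.
x_star = rng.uniform(-1.0, 1.0, 500)
x0 = x_star + rng.normal(0.0, 0.3, 500)

# Learn a cascade of K linear regressors: Delta_x ≈ R_k · h(x) + b_k.
K, x = 3, x0.copy()
cascade = []
for _ in range(K):
    A = np.hstack([h(x), np.ones((len(x), 1))])       # features + bias term
    coef, *_ = np.linalg.lstsq(A, x_star - x, rcond=None)
    cascade.append(coef)
    x = x + A @ coef                                  # descend toward x_star

mse0 = np.mean((x0 - x_star) ** 2)
mse = np.mean((x - x_star) ** 2)    # reduced relative to mse0 on this training set
```

Each least-squares step can only reduce the training mean square error, which is the convergence behavior the SDM objective describes, without ever forming a Jacobian or Hessian.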

4 The Establishment of the Prediction Model

Random forest is an ensemble learning method for regression, classification, and other machine learning tasks. During training, multiple decision tree models are


Fig. 12 Flowchart of random forest method

constructed, and the forest either takes a vote over all trees or averages their outputs as the final result. The random forest algorithm is developed on the basis of decision trees and can correct the overfitting caused by the inductive bias of a single decision tree. Decision trees are widely used in building machine learning models; Hastie et al. consider "tree learning" the existing method that best meets the requirements of data mining, since it is invariant under scaling and other transformations of feature values, tolerates irrelevant features to a large extent, and generates inspectable models with high accuracy. A decision tree contains one root node, several internal nodes, and several leaves. The root node contains the complete sample set; each internal node holds a sample set that is divided among its children according to the test results on the samples' attributes; leaves represent decision results, and the path from the root to each leaf represents a decision sequence. The goal of decision tree learning is a model with a strong ability to handle unseen instances (strong generalization ability). When the weld back fusion width is less than 3 mm, the sample is labeled as no fusion; when the back fusion width is between 5 and 6 mm, it is considered excessive penetration; burn-through samples are determined from the weld surface, the back forming, and whether the weld pool image leaks light; the remaining samples are classified as full penetration. The corresponding label values of the four types of samples are 1, 2, 3, and 4, respectively.
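The labeling rule can be written as a small function. The assignment of class ids to classes and the handling of the burn-through case are assumptions made for illustration, since the exact mapping is not spelled out here:

```python
def penetration_label(back_width_mm, leaks_light=False):
    """Assign a penetration class from the back fusion width (mm).

    Hypothetical mapping: 1 = no fusion, 2 = excessive penetration,
    3 = burn-through (weld pool image leaks light), 4 = full penetration."""
    if leaks_light:
        return 3
    if back_width_mm < 3.0:
        return 1
    if 5.0 <= back_width_mm <= 6.0:
        return 2
    return 4

labels = [penetration_label(w) for w in (2.0, 5.5, 4.0)]
```

Labels produced this way are what the random forest classifier is trained to predict from the image and process features.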


Fig. 13 Predicted result

Figure 12 shows the flowchart of the random forest method. Based on this method, the training samples are used to fit the classifier model and the regression model, which are then tested with data from the sample space. Figure 13 shows the classification prediction results on the test sample space based on the random forest classification model; its accuracy rate is 89.8%. The model is built on seven synchronously extracted features of the welding parameters and images: the weld voltage, current, front pool area, front pool length, front pool half width, back fusion width, and the penetration state characteristic. The training sample set consists of the seven synchronous features of weld 1, and the test sample set of the seven synchronous features of weld 2. The trained random forest model realizes the classification of the weld penetration state and the regression prediction of the back weld pool width, and can serve for online weld quality recognition.
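The train-on-weld-1, test-on-weld-2 protocol can be sketched with scikit-learn. The feature matrix below is synthetic stand-in data (seven columns mimicking the synchronous features), not the experimental measurements:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for the seven synchronous features (voltage, current,
# pool area, front length, half width, back width, penetration state).
n = 600
X = rng.normal(size=(n, 7))
# Four classes derived from two of the features, mimicking the 1-4 labels.
y = (X[:, 5] > 0).astype(int) + 2 * (X[:, 2] > 0.5).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X[:400], y[:400])                  # "weld 1" plays the training set
accuracy = clf.score(X[400:], y[400:])     # "weld 2" plays the test set
```

With the real feature sets, the same fit/score split is what yields the 89.8% recognition rate reported above; a companion `RandomForestRegressor` would handle the back-width regression.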

5 Conclusion

Aiming at online real-time monitoring of weld quality in the traditional welding process, a real-time welding quality prediction scheme based on a visual sensor was proposed, and a new feature extraction approach for welding seam images was designed. Based on the ROI visual attention mechanism, images of the front and back of the pool were extracted; edge features of the pool were extracted with the SDM method, and the pool area, width, half-length, and back width were further obtained. A random forest fusion model based on weld parameters and image features was constructed, realizing the classification of the weld penetration status and the regression prediction of the weld back penetration width. The recognition rate of the penetration status classification model was 89.8%.


Acknowledgements This work was supported by the National Natural Science Foundation of China (No. 51969001), the Guangxi Major Science and Technology Projects of China (No. GuikeAA17204030), the Guangxi Natural Science Foundation of China (No. 2018GXNSFAA138080), and the Guangxi Science and Technology Base and Talent Project of China (No. GuikeAD18281007).

References

1. Chen SB, Lv N (2014) Research evolution on intelligentized technologies for arc welding process. J Manuf Process 16(1):109–122
2. Lv N et al (2014) Real-time control of welding penetration during robotic GTAW dynamical process by audio sensing of arc length. Int J Adv Manuf Technol 74(1–4):235–249
3. Lv N et al (2017) Automated control of welding penetration based on audio sensing technology. J Mater Process Technol 250:81–98
4. Wang Y, Zhao P (2001) Plasma-arc welding sound signature for on-line quality control. ISIJ Int 41(2):164–167
5. Zeng J, Chang B, Du D et al (2016) A visual weld edge recognition method based on light and shadow feature construction using directional lighting. J Manuf Process 24(Part 1):19–30
6. Xu Y, Fang G, Lv N et al (2015) Computer vision technology for seam tracking in robotic GTAW and GMAW. Robot Comput-Integr Manuf 32:25–36
7. Zhang G, Wu CS, Liu X (2015) Single vision system for simultaneous observation of keyhole and weld pool in plasma arc welding. J Mater Process Technol 215:71–78
8. Gao J, Qin G, Yang J et al (2011) Image processing of weld pool and keyhole in Nd:YAG laser welding of stainless steel based on visual sensing. Trans Nonferrous Metals Soc China 21(2):423–428
9. Fan C, Lv F, Chen S (2009) Visual sensing and penetration control in aluminum alloy pulsed GTA welding. Int J Adv Manuf Technol 42(1–2):126–137
10. Wu D et al (2017) VPPAW penetration monitoring based on fusion of visual and acoustic signals using t-SNE and DBN model. Mater Des 123:1–14
11. Zhou ZF (2005) Welding metallurgy: weldability of metal. China Machine Press, Beijing, pp 125–129
12. Ma H, Wei S, Lin T et al (2010) Binocular vision system for both weld pool and root gap in robot welding process. Sens Rev 30(2):116–123

A Weld Bead Profile Extraction Method Based on Scanning Monocular Stereo Vision for Multi-layer Multi-pass Welding on Mid-thick Plate Zhen Hou, Yanling Xu, Runquan Xiao, and Shanben Chen

Abstract In the shipbuilding and marine industry, multi-layer multi-pass welding (MLMPW) on mid-thick plate plays an important role, but most actual production is still completed by manual welding, and raising its intelligence level is a major challenge. Current multi-layer multi-pass planning (MLMPP) relies on heavy simplifications; limited to simplified weld bead shapes, it cannot support more accurate intelligent welding technology. In the field of Intelligent Welding Manufacturing (IWM), Intelligent Robotic Welding Technology (IRWT) is the key enabler, and improving IRWT for MLMPW is of great value. Extracting weld bead information is essential for more accurate MLMPP and its online correction. In this paper, scanning monocular stereo vision is used to reconstruct the weld bead in MLMPW. By slicing and filtering the point cloud data, the profile of the weld bead surface is obtained, which provides a solid foundation for MLMPP and its online correction.

Keywords Laser vision · Point cloud · 3D reconstruction · Feature extraction

Z. Hou · Y. Xu (B) · R. Xiao · S. Chen (B)
Intelligentized Robotic Welding Technology Laboratory, School of Materials Science and Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
e-mail: [email protected]
S. Chen e-mail: [email protected]
Shanghai Key Laboratory of Materials Laser Processing and Modification, School of Materials Science and Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
© Springer Nature Singapore Pte Ltd. 2021
S. Chen et al. (eds.), Transactions on Intelligent Welding Manufacturing, https://doi.org/10.1007/978-981-33-6502-5_5

1 Introduction

Welding is an essential process for the manufacturing industries, especially shipbuilding and the marine industry. MLMPW on mid-thick plate occupies a large number of man-hours, yet its intelligence level is very low, and most of it is still completed by manual welding. High labor intensity and a harsh working environment push the


pace of automation and intelligentized welding to replace manual welding, so IRWT for MLMPW on mid-thick plate is of great value [1–3]. MLMPW is widely used in mid-thick plate welding [4–7]. Usually, offline planning is done before execution, in which the planned weld bead is highly simplified into triangles, trapezoids, and parallelograms [8]. There is inevitably a bias between the real shape and the planned shape, so extracting the actual profile of the weld bead is the basis of planning and online correction for MLMPW. Zhan et al. [4] used metallographic photos of the weld section to describe the bead boundaries in MLMPW, but this method is not suitable for online inspection. Moreover, each new weld bead remelts the former one and destroys its shape; only the upper surface of the uppermost (covering) layer is retained. A more direct and flexible method is therefore needed to extract the weld bead profile during the multi-layer multi-pass welding process. Visual sensing is undoubtedly the most direct way to obtain the shape of the weld bead. Passive sensing can gather a large amount of information, but it has two disadvantages for reconstructing the weld bead area, especially in the cross-section direction: the similarity of different positions along the weld bead makes it difficult to extract matching points, and stereo vision requires multiple shots, which increases the complexity of the operation. Laser vision sensing instead uses the laser plane to form stereo vision [9–12]: the 3-D coordinates of a 1-D point set are obtained from a single image, and by moving the sensor at a constant speed above the weld bead while collecting images, a point cloud description of the upper surface of the weld bead is obtained.
In this paper, a laser vision sensor system is designed and manufactured that can not only perform real-time weld tracking during the welding process [13] but also scan the weld bead point cloud during the return trip of the torch. The system consists of a sensor, a power and switch module, and a main control computer. The sensor is installed at the end of the sixth axis of the robot and forms a fixed positional relationship with the welding torch. The point cloud data is obtained through a formulated scanning strategy, and the profile of the weld bead is extracted through a series of operations such as slicing and filtering of the point cloud data. The rest of this paper is organized as follows. The system configuration is described in Sect. 2. Data processing, including image processing, point cloud data processing, and coordinate transformation, is discussed in Sect. 3. Results of weld joint profile extraction in MLMPW are presented in Sect. 4. The paper is concluded in Sect. 5.

2 System Configuration

In this paper, an intelligentized robotic welding system (IRWS) is set up. The IRWS consists of two main parts: the robot welding system and the laser vision sensing system. As Fig. 1 shows, the robot welding system includes the industrial welding robot, the welding power supply, and ancillary facilities. A FANUC M-10iA robot with 6 axes is


Fig. 1 Intelligentized robotic welding system (robot welding system and laser vision sensing system (LVSS) connected via Ethernet through the industrial process center for planning, sensing, and executing)

used as the main executor of the welding motion, and a Fronius TPS 400i welding power source paired with a Fronius WF 60i wire feeder is used. Welding wire with the designation SY-MG56, meeting the criterion GB/T 8110, is used, with Ar mixed with 20% CO2 as the shielding gas. The laser vision sensing system used in this paper is laser vision sensing system (LVSS) 2.0, shown in Fig. 2, developed by the Intelligentized Robotic Welding Technology laboratory (IRWT lab) of Shanghai Jiao Tong University. The LVSS comprises the laser vision sensor, a power and switch module, and a main control computer. The main functional parts are an industrial CCD camera and a line-structured laser generator. Table 1 shows their main parameters.

Table 1 Main parameters of the laser vision sensor

Camera (MER-200-14GM):
  Resolution: 1628 (H) × 1236 (W)
  Frame rate: 14 Hz
  Pixel size: 4.4 × 4.4 µm
  Sensor size: 1/1.8"
  Focal length: 50 mm

Laser generator:
  Line width: 0.8 mm
  Wavelength: 660 nm
  Power: 200 mW


3 Point Cloud Reconstruction and Weld Bead Profile Extraction

The entire process can be divided into three parts: scanning and image acquisition, image processing and point cloud reconstruction, and point cloud data processing. Figure 3 is the flowchart of the proposed method.

3.1 Scanning and Image Acquisition

In MLMPW, each weld starts at one side of the weldment and the arc extinguishes at the other side; the robot then moves the torch back to the start side and offsets it by the preset value, preparing for the next weld. The return trip of the torch can be used to scan the

Fig. 2 Laser vision sensor [13]

Fig. 3 Flowchart of weld bead profile extraction (scanning strategy and image acquisition; image processing and coordinate transform; slice cut, outlier filtering, and profile parameter extraction)


Fig. 4 Schematic of point cloud scanning

welded groove. Usually, joint motion mode is used during the return trip; for accuracy, the motion mode is changed to world mode. The parameters associated with the scanning are the scanning speed vs and the scanning height hs. Figure 4 is the schematic of the scanning. When one weld pass ends, the torch moves away from the weldment for safety; the coordinate of point C is then calculated from the structured light plane function and the position of the arc-end point B. Image acquisition starts when the torch arrives at point C, and the scanning movement toward point D begins at the same time. A series of images with timestamps and a series of robot coordinates are recorded.

3.2 Image Processing and Point Cloud Reconstruction

Since the scanning image is free from arc and spatter interference, and the required point cloud accuracy along the welding direction is lower than in the cross-section, the centroid method meets the requirements of image processing. The sub-pixel row position of the laser stripe in column j is calculated as Eq. (1):

cj = Σi i · I(i, j) / Σi I(i, j)   (1)

where cj is the stripe center at the jth column and I(i, j) is the binarized pixel value. In this way, the 2-D image is reduced to a 1-D set of image coordinates. Figure 5 is the schematic of the coordinate transform. Stripe points in the image coordinate [u, v] are transformed into the camera coordinate with the intrinsic parameter matrix A by Eq. (2):

C = Ac, with A = [α γ u0; 0 β v0; 0 0 1]   (2)

where C = [xc yc zc]^T is the camera coordinate and c = [u v 1]^T is the augmented image coordinate. C is then transformed into M = [xt yt zt]^T, its coordinate in the TCP coordinate system, through the hand-eye relationship:

M = H C̃, with H = [R T; 0^T 1]   (3)

where C̃ = [xc yc zc 1]^T is the augmented camera coordinate and H is the hand-eye relation matrix. The intrinsic matrix and the extrinsic matrix used in this paper are listed in Table 2.
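The centroid extraction of Eq. (1) can be sketched in a few lines of NumPy. The function name, toy image, and fixed binarization threshold below are illustrative, not from the authors' implementation:

```python
import numpy as np

def stripe_centers(image, threshold=128):
    """Centroid (gray-gravity) stripe extraction, Eq. (1): for each column j,
    c_j = sum_i(i * I(i, j)) / sum_i(I(i, j)), with I the binarized image."""
    binary = (image >= threshold).astype(np.float64)   # I(i, j)
    rows = np.arange(image.shape[0], dtype=np.float64) # row index i
    weights = binary.sum(axis=0)                       # sum_i I(i, j) per column
    centers = np.full(image.shape[1], np.nan)
    valid = weights > 0                                # columns the stripe crosses
    centers[valid] = (rows @ binary)[valid] / weights[valid]
    return centers  # sub-pixel row coordinate of the stripe in each column

# toy image: a horizontal stripe spanning rows 4-6
img = np.zeros((10, 5), dtype=np.uint8)
img[4:7, :] = 255
out = stripe_centers(img)
print(out)  # -> [5. 5. 5. 5. 5.]
```

Columns never crossed by the stripe are returned as NaN so that downstream point cloud assembly can skip them.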

Fig. 5 Schematic of coordinate system
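The Eqs. (2)–(3) chain can be sketched as follows. In the standard pinhole model the pixel satisfies c ∝ AC, so recovering a 3-D point from a single pixel uses A⁻¹ together with the calibrated laser-plane constraint; this sketch makes that assumption explicit, and the calibration numbers below are placeholders, not the values of Table 2:

```python
import numpy as np

# Placeholder calibration values, for illustration only.
A = np.array([[2500.0, 0.0, 814.0],
              [0.0, 2500.0, 618.0],
              [0.0, 0.0, 1.0]])            # intrinsic matrix (alpha, beta, u0, v0)
plane = np.array([0.0, 0.0, 1.0, -300.0])  # laser plane A*x + B*y + C*z + D = 0
H = np.eye(4)                              # hand-eye matrix [R T; 0^T 1]

def pixel_to_tcp(u, v):
    """Back-project pixel (u, v) along the camera ray A^-1 c, intersect it with
    the calibrated laser plane, then map the camera point into the TCP frame
    with the hand-eye matrix H (Eqs. (2)-(3))."""
    d = np.linalg.solve(A, np.array([u, v, 1.0]))  # ray direction in camera frame
    n, D = plane[:3], plane[3]
    s = -D / (n @ d)                               # scale so the point lies on the plane
    C = s * d                                      # camera coordinates [xc, yc, zc]
    M = H @ np.append(C, 1.0)                      # homogeneous transform to TCP frame
    return M[:3]

print(pixel_to_tcp(814.0, 618.0))  # principal-point ray meets the plane at z = 300
```

Stacking the returned points for every stripe column of every image, together with the recorded robot pose at each timestamp, yields the point cloud of the bead surface.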


Table 2 Parameters of intrinsic matrix and extrinsic matrix

Laser plane [A, B, C, D]: [−3281.414, −66.789, −491.162]

Hand-eye relationship H = [R T; 0^T 1]:
  [ 7.30 × 10−1   1.87 × 10−2   −6.83 × 10−1   132.30 ]
  [−1.75 × 10−2   9.99 × 10−1    8.66 × 10−3   −10.46 ]
  [ 6.83 × 10−1   5.64 × 10−3    7.31 × 10−1   −38.91 ]
  [ 0             0              0               1.00 ]

Sensor frame [x y z w p r]: [132.30, −10.46, −38.91, 0.44, −43.07, −1.37]

A series of point sets are obtained by repeating the image processing process and combining with the robot coordinates when the image is captured. These sets of points constitute the point cloud data.

3.3 Point Cloud Data Processing

As can be seen in Fig. 6, the point cloud data reconstruct the weld morphology well; small features such as fish scales and welding slag can be recognized, and when defects such as undercut occur, the reconstruction reveals them reliably. To reduce the error of extracting the laser stripe from a single image, an angle is set between the laser plane and the weld cross-section, so that points from one image belong to different cross-sections; a slice of the cloud data is then taken along the welding direction. The main interference in the point cloud data is double-reflection stripes, and as Fig. 7 shows, these outliers are discontinuous. Using the initial groove profile as the base, both new weld profiles and outliers lie farther from the base profile than a threshold and inside the groove area. Points with the larger distance are extracted into set B; set B is clustered, each subset is fitted, and the fitted line is extended to intersect the existing boundary. A subset is then removed when:

Fig. 6 Weldment and point cloud data reconstruction


Fig. 7 Initial groove profile: a) cluster, b) subset fit

1. There is only one cross point.
2. The nearest distance is over threshold d2.
3. The area is larger than threshold St.

As Fig. 7b shows, the upper subset is removed because its fitted line has only one cross point with the existing boundary, while the lower subset is kept because none of the three conditions is met. The remaining candidate points are then fitted, the average absolute error e is calculated, and points whose absolute error exceeds s times e are removed; such points usually come from weak double reflections or the rise and fall of the weld bead. The fit is repeated on the remaining points until all points have an absolute error below the threshold or the maximum error is less than threshold d3. Figure 8 shows the fitted line and the absolute error. The profile of the new weld bead is then extracted; as Fig. 9 shows, parameters such as weld height h, weld width, surface height hs, and weld bead area are defined.
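The iterative trimmed fit described above can be sketched as follows; the thresholds s and d3, the straight-line model, and the toy data are illustrative choices, not the authors' exact values:

```python
import numpy as np

def iterative_fit(y, s=2.0, d3=0.05, max_iter=20):
    """Repeatedly fit the candidate profile points and drop points whose
    absolute error exceeds s times the mean absolute error e, until every
    error is below s*e or the maximum error falls below threshold d3."""
    x = np.arange(len(y), dtype=float)
    keep = np.ones(len(y), dtype=bool)
    for _ in range(max_iter):
        coeff = np.polyfit(x[keep], y[keep], 1)          # straight-line fit
        err = np.abs(np.polyval(coeff, x[keep]) - y[keep])
        e = err.mean()                                   # average absolute error
        if err.max() <= d3 or np.all(err <= s * e):
            break
        keep[np.flatnonzero(keep)[err > s * e]] = False  # drop gross outliers
    return coeff, keep

# straight profile with one double-reflection outlier at index 5
y = np.array([0.0, 0.1, 0.2, 0.3, 0.4, 3.0, 0.6, 0.7, 0.8, 0.9])
coeff, keep = iterative_fit(y)
print(keep[5])  # -> False: the outlier is rejected
```

After the outlier is dropped, the second fit matches the remaining points exactly and the loop terminates on the d3 test.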

4 Experiment and Result

Figure 10 shows the weld bead point cloud data after filtering, and Fig. 11 shows the weld bead profile of the whole MLMPW. Table 3 lists the weld bead information and its welding parameters.

5 Conclusion

Weld bead profile extraction is very important: beyond enabling more accurate MLMP planning and online correction, the deformation angle and defects such as undercut and


Fig. 8 Fitted line and absolute error: a) fitted line, b) error distribution, c) fitted line 2, d) error distribution 2

Fig. 9 Schematic of weld bead profile (surface height, height, width, area)

pores can also be detected. To extract the weld bead information, a hardware setup and an implementation method are proposed in this paper, concluded as follows:

1. A scanning method, including the image capture and scanning strategy, is proposed to ensure accuracy without affecting the efficiency of MLMPW.
2. The sources of interference in the point cloud data are analyzed, and a filtering method is proposed.


Fig. 10 Weld bead point cloud data after filtering

Fig. 11 Weld bead profile of MLMPW


Table 3 Weld bead information and its welding parameters

Weld bead | Area | Height | Surface h | Width | Posture angle | Weld speed | Wire feed
1  | 38.1 | 7.3 | 1.8 | 9.6  | 0   | 4 | 5
2  | 34.1 | 5.8 | 4.1 | 11.3 | 10  | 4 | 5
3  | 25.2 | 5.1 | 1.1 | 9.2  | −10 | 4 | 5
4  | 27.5 | 4.6 | 4.3 | 11.9 | 10  | 4 | 5
5  | 15.8 | 3.9 | 2.9 | 8.8  | −10 | 4 | 4.5
6  | 19.1 | 3.2 | 0.6 | 10.3 | 0   | 4 | 4.5
7  | 27.5 | 4.6 | 4.4 | 13.9 | 10  | 4 | 5
8  | 28.1 | 4.9 | 4.6 | 12.5 | −10 | 4 | 5
9  | 10.1 | 3.7 | 3.7 | 5.6  | 5   | 4 | 4.5
10 | 20.2 | 3.3 | 0.5 | 9.5  | 0   | 4 | 5
11 | 28.9 | 3.6 | 3.4 | 12.5 | 10  | 4 | 5
12 | 10.9 | 3.5 | 3.3 | 10.5 | −10 | 4 | 4.5
13 | 16.2 | 5.2 | 4.8 | 13.7 | 5   | 4 | 5
14 | 11.8 | 4.0 | 3.6 | 11.1 | −5  | 4 | 4.5
15 | 34.8 | 3.9 | 1.1 | 16   | 0   | 4 | 5

Acknowledgements This work is partly supported by the National Natural Science Foundation of China (No. 61873164, 61973213), and the Shanghai Natural Science Foundation (No.18ZR1421500).

References

1. Chen S (2015) On intelligentized welding manufacturing. In: Advances in intelligent systems and computing, vol 363. Springer, pp 3–34
2. Chen S, Lv N (2014) Research evolution on intelligentized technologies for arc welding process. J Manuf Process 16:109–122
3. Chen S (2007) On the key intelligentized technologies of welding robot. Lect Notes Control Inf Sci 362:105–116
4. Zhan X, Zhang D, Liu X et al (2017) Comparison between weave bead welding and multi-layer multi-pass welding for thick plate Invar steel. Int J Adv Manuf Technol 88(5–8):2211–2225
5. Liu W, Lu F et al (2016) Special zone in multi-layer and multi-pass welded metal and its role in the creep behavior of 9Cr1Mo welded joint. Mater Des 108:195–206
6. Zhan X, Liu X et al (2017) Numerical simulation on backward deformation of MIG multi-layer and multi-pass welding of thick Invar alloy. Int J Adv Manuf Technol 92(1–4):1001–1012
7. Fang JX, Li SB et al (2019) Effects of phase transition temperature and preheating on residual stress in multi-pass and multi-layer laser metal deposition. J Alloys Compd 792:928–937
8. Zhang H, Lu H et al (2011) Robot path planning in multi-pass weaving welding for thick plates. In: Robotic welding, intelligence and automation. Springer, Berlin, Heidelberg, pp 351–359
9. Yin Z, Xiong J (2020) Stereovision measurement of layer geometry in wire and arc additive manufacturing with various stereo matching algorithms. J Manuf Process 56:428–438
10. Wells LJ, Shafae M, Camelio J (2015) Automated surface defect detection using high density point clouds. J Manuf Sci Eng 138(7)
11. Wang Z, Deguchi Y, Shiou F, Yan J, Liu J (2016) Application of laser-induced breakdown spectroscopy to real-time elemental monitoring of iron and steel making processes. ISIJ Int
12. Chu H-H, Wang Z-Y (2016) A vision-based system for post-welding quality measurement and defect detection. Int J Adv Manuf Technol 86(9–12):3007–3014
13. Hou Z, Xu Y, Xiao R et al (2020) A teaching-free welding method based on laser visual sensing system in robotic GMAW. Int J Adv Manuf Technol 109(5):1755–1774

The Intelligent Methodology for Monitoring the Dynamic Welding Quality Using Visual and Audio Sensor Zhiqiang Feng, Ziquan Jiao, Junfeng Han, and Weiming Huang

Abstract Aiming at the problem of online real-time monitoring of weld quality in the traditional welding process, this paper proposes a real-time welding quality prediction scheme based on multi-information fusion. First, for the collected arc sound signals, a feature extraction method based on short-time average energy and Mel-frequency cepstral coefficients (MFCC) is proposed to characterize the energy, timing, and Mel frequency-domain characteristics of the sound. The image characteristics of the weld are then analyzed and an image processing method is designed: for straight welds, a boundary method is adopted for segmentation, and weld seam centerlines are extracted by median filtering, operator-based edge detection, and least-squares straight-line fitting. Based on an ROI visual attention mechanism, the front image of the molten pool is extracted, and the edge features of the molten pool are extracted with the SDM method to further obtain the molten pool area, width, and half-length. The results show that the arc sound and visual information support each other; combining the two features achieves online welding quality monitoring.

Keywords Welding robot · Visual sensor · Welding arc sound

Z. Feng · Z. Jiao (B) · J. Han (B) · W. Huang
School of Mechanical and Marine Engineering, Beibu Gulf University, Qinzhou 535011, China
e-mail: [email protected]
J. Han e-mail: [email protected]
© Springer Nature Singapore Pte Ltd. 2021
S. Chen et al. (eds.), Transactions on Intelligent Welding Manufacturing, https://doi.org/10.1007/978-981-33-6502-5_6

1 Introduction

Quality control of the welding process has always been the key to intelligent welding technology. For humans, vision is the primary sense; for the welder, the welding status information obtained during the welding process comes almost entirely from vision. Visual feature detection offers a large amount of information, high measuring accuracy, no contact between sensor and workpiece, strong anti-electromagnetic interference ability,


and other advantages, and therefore has broad application prospects. Adopting visual sensing to detect the welding process in real time, studying the relationship between welding quality and welding parameters, and closing the control loop of the welding process have become an important research direction. In some special situations, the arc sound signal can also provide important auxiliary information, so it is better to combine the two sensing technologies to achieve welding quality monitoring [1–5].

2 Experiment Design

The schematic diagram of the welding experiment system is shown in Fig. 1. The welding tests use gas tungsten arc welding (GTAW). The system mainly consists of the welding robot, the electric cabinet, and an industrial computer as the major equipment, together with the image and sound sensors, shielding gas cylinders, the power supply, the wire feeder, the two-dimensional workpiece platform, and other auxiliary equipment. The welding operation is performed by the welding robot; during the dynamic welding process, the visual and sound sensors acquire the image and sound signals and transfer them to the industrial computer, which adjusts the welding parameters and transmits the parameter commands to the robot through the electric control box.

Fig. 1 Framework of the experiment system


Fig. 2 Weld tracking sensor

A high-performance vision sensor based on linear structured light was chosen for this system. The main material of the sensor shell is aluminum alloy, and the internal structure is carefully designed so that the overall volume of the sensor is kept within 104 × 29 × 112 mm and the weight is around 585 g, which is stable when installed at the front of the robot (Fig. 2). An MP201-type 1/2-inch prepolarized free-field microphone from Beijing Prestige Sensor Co., Ltd. is adopted, as shown in Fig. 3, and its specific parameters and indicators are listed in Table 1. 16Mn (Q345B), a low-carbon alloy structural steel with a size of 3 × 50 × 300 mm, is used in the pulsed consumable-electrode mixed-gas shielded welding experiment; its composition is shown in Table 2. To monitor the welding dynamic process based on the sound signal, the following welding experiment is designed: heat accumulation is induced by increasing the heat input so that overheating and burn-through occur during welding, in order to study whether the welding sound signal can identify this dynamic change. In the unitized controlled welding mode, only two variables, the welding speed and the wire feeding speed, need to be input; the other welding parameters are built into the welder and controlled by its expert system, because this paper does not involve quantitative analysis of welding parameters and the research is focused on the welding dynamic process.

Fig. 3 Visual image


Table 1 Major parameters of microphone

Type: MP201
Sound field type: Free field
Materials: Nickel film, nickel alloy
Open circuit sensitivity (dB (50 mV/Pa)): −26 ± 2
Frequency response (Hz): 20–20 k
Polarization voltage (V): 0
Dynamic range (dB, 3% distortion): >146
Background noise (dBA):

f = { 1, if I_j1 > I_j2; 0, otherwise }   (4)

In Formula (4), I_j1 and I_j2 are the gray values of the j-th pair of pixels. According to the random fern classifier, each category (bin) b corresponds to a regression output δS_b. The shape increment δS_b corresponding to bin b is calculated as shown in Formula (5):

δS_b = argmin over δS of Σ(i ∈ Ω_b) ‖Ŝ_i − (S_i + δS)‖   (5)

where Ω_b is the set of training samples falling into bin b.

The overall shape-indexed feature selection (i.e., random fern split feature selection) proceeds as follows:

1. Randomly select P points and combine them in pairs to get P(P − 1)/2 grayscale pixel differences.
2. Project the regression target onto a random direction to get a scalar.
3. Among the P(P − 1)/2 features, select the feature with the highest correlation with the scalar obtained in step (2).
4. Repeat steps (2) and (3) five times to get five different features.
5. Combine the five features and five random values into a random fern.
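A minimal sketch of steps (1)–(5) using the covariance identities of Eqs. (6)–(7), so that pixel-difference correlations are assembled from precomputed covariances. The data shapes, sample counts, and function names are illustrative, not from the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def select_fern_features(pixels, target, n_features=5):
    """For each of n_features random projections of the regression target,
    pick the pixel-difference feature p_m - p_n with the highest correlation,
    using cov(Y, p_m - p_n) = cov(Y, p_m) - cov(Y, p_n) and Eq. (7) for the
    variance of the difference."""
    n_samples, P = pixels.shape
    pix_cov = np.cov(pixels, rowvar=False)  # P x P pixel covariances, computed once
    selected = []
    for _ in range(n_features):
        proj = target @ rng.standard_normal(target.shape[1])  # step (2): scalar Y_proj
        y_cov = np.array([np.cov(proj, pixels[:, m])[0, 1] for m in range(P)])
        best, best_corr = None, -1.0
        for m in range(P):                                    # step (3): scan all pairs
            for n in range(m + 1, P):
                var_diff = pix_cov[m, m] + pix_cov[n, n] - 2 * pix_cov[m, n]  # Eq. (7)
                if var_diff <= 1e-12:
                    continue
                corr = abs(y_cov[m] - y_cov[n]) / np.sqrt(proj.var(ddof=1) * var_diff)
                if corr > best_corr:
                    best, best_corr = (m, n), corr
        selected.append(best)                                 # steps (4)-(5)
    return selected

pixels = rng.integers(0, 256, size=(100, 20)).astype(float)  # P = 20 sampled pixels
target = rng.standard_normal((100, 4))                        # regression targets δS
feats = select_fern_features(pixels, target)
print(feats)
```

The pair search itself is still O(P²), but each covariance with the projected target is an O(P) computation, which is the trick the text describes.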

Calculating these P(P − 1)/2 pixel differences directly has complexity O(P²). The correlation is computed as Formula (6):

corr(Yproj, pm − pn) = [cov(Yproj, pm) − cov(Yproj, pn)] / [σ(Yproj) σ(pm − pn)]   (6)

In Formula (6), σ(pm − pn) can be obtained by Formula (7).

Identification and Penetration Prediction of Aluminum Alloy …

σ(pm − pn) = cov(pm, pm) + cov(pn, pn) − 2cov(pm, pn)   (7)

Yproj is the projection in a random direction, and pm and pn are the gray values of pixels m and n. To reduce the time complexity, cov(Yproj, pm), cov(Yproj, pn), and cov(pm, pn) can be computed first, converting the process into covariances between a scalar and P pixels, so the per-feature complexity becomes O(P). Even so, the two-level cascade regression algorithm takes about 0.5 s to find the feature points in each image, which is relatively slow and needs further optimization. Therefore, when selecting the gray-difference features, the pixels are no longer sampled randomly over the whole ROI (region of interest); instead, points within a distance of 50 pixels around the feature points in the training data set are used. During training, 800 pictures whose molten pools are not blurred (as shown in Fig. 8) are taken as the training set, and 200 pictures as the test set. Suppose the training samples are {(I_i, Ŝ_i)}, i = 1, …, N, and the number of features extracted is M. First, the sample data are loaded, the true shapes Ŝ_i of the N samples are normalized, and the average shape S̄ is calculated. Then the first-layer regressors R^t are entered, where t = 1, 2, 3, …, T. For each regressor R^t, the previous regression shape of each sample in the N training images is S_i^{t−1}, and the true shape of each sample is Ŝ_i. For each

Fig. 8 Pictures of the molten pool


Table 5 Test results of different cascade numbers of the advanced two-layer shape cascade regression algorithm

Number of first-level regressors T:   1      | 5      | 10     | 20
Number of second-level regressors K:  500    | 100    | 50     | 25
Normalized mean error:                0.1125 | 0.0632 | 0.0703 | 0.0724
Average time per picture (ms):        7      | 8      | 10     | 12

sample, based on S_i^{t−1}, P pixels are randomly selected within 50 pixels of the key points, the differences between all pairs of pixels are calculated, and the difference between the current shape and the target shape of each sample, δS_i = Ŝ_i − S_i^{t−1}, is computed. δS_i is projected onto a random direction to obtain a scalar, the correlation between each of the P(P − 1)/2 gray differences and this scalar is calculated, and the M pixel pairs with the largest correlation are selected as the shape-indexed features, i.e., the local coordinates of M pixel pairs. The second-layer regressors r^k are then entered, where k = 1, 2, …, K. For each regressor r^k, a set of random values is generated to threshold the gray differences at the M feature positions obtained by the previous regressor. All training samples are divided into 2^F categories, and the output δS_b of each fern node is obtained according to Formula (5). The current shape of each sample is then updated by adding the node output of the category to which it belongs: S_i^t = S_i^{t−1} + δS_b. Every time K second-level regressors are completed, t increases by 1. Finally, the average shape obtained in training and the outputs of the T × K regressors are stored in model.txt for use when testing. During testing, model.txt is loaded, the normalized average error is calculated, and a timer in the program records the processing time of each picture. The results of experiments varying the numbers of first-layer and second-layer regressors are shown in Table 5. The results show that when the number of first-layer cascade regressors is 5 and the number of second-layer regressors is 100, the normalized average test error is the smallest and the processing time is about 8 ms per image, far shorter than the frame period of the CCD itself, which meets the real-time requirements.
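A single second-layer random fern regressor as described above might be sketched as follows; the data are synthetic, and the class name and thresholds are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

class RandomFern:
    """Second-layer regressor r_k: F binary pixel-difference tests split the
    samples into 2^F bins; each bin b stores the mean residual dS_b (Eq. (5))."""
    def __init__(self, features, thresholds):
        self.features = features      # list of (m, n) pixel-pair indices
        self.thresholds = thresholds  # one random threshold per feature
        self.bin_outputs = None

    def _bins(self, pixels):
        codes = np.zeros(len(pixels), dtype=int)
        for f, ((m, n), th) in enumerate(zip(self.features, self.thresholds)):
            codes |= ((pixels[:, m] - pixels[:, n] > th).astype(int) << f)
        return codes

    def fit(self, pixels, residuals):
        codes = self._bins(pixels)
        F = len(self.features)
        self.bin_outputs = np.zeros((2 ** F, residuals.shape[1]))
        for b in range(2 ** F):
            mask = codes == b
            if mask.any():  # dS_b = mean residual of the samples falling in bin b
                self.bin_outputs[b] = residuals[mask].mean(axis=0)
        return self

    def predict(self, pixels):
        return self.bin_outputs[self._bins(pixels)]

pixels = rng.normal(size=(200, 10))
residuals = rng.normal(size=(200, 4))          # dS_i = S_hat_i - S_i^{t-1}
fern = RandomFern([(0, 1), (2, 3), (4, 5)], rng.normal(size=3)).fit(pixels, residuals)
update = fern.predict(pixels)                  # S_i^t = S_i^{t-1} + update
print(update.shape)  # (200, 4)
```

In the paper's setup, K such ferns form one first-layer stage, and their bin outputs accumulate into the current shape update.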
Through the advanced two-layer cascade regression algorithm, the feature points around the front molten pool are extracted. The process schematic diagram is shown in Fig. 9.

3.2 Extraction of Geometric Parameters of the Front Molten Pool

The characteristic points obtained around the molten pool are connected with a smooth curve to obtain the outline of the front molten pool. The maximum distance

Fig. 9 Schematic diagram of the process of obtaining 29 points around the front molten pool image

Fig. 10 Schematic diagram of the characteristic parameters of the front molten pool

between the intersections of a line scanned along the edge of the weld and the outline of the weld pool defines the melting width; the schematic diagram is shown in Fig. 10, and the formula is shown in Eq. (8). The melting-width line is c1c2. Scanning along the direction of c1c2, the maximum distance between the intersection point o on line c1c2 and the contour intersection r1 is the half-length of the molten pool, L_half. The molten pool advancing angle α is then obtained by the law of cosines, and the molten pool area S is obtained with the OpenCV function contourArea.

Wb = max(D_c1c2)   (8)

In Formula (8), D_c1c2 represents the linear distance between points c1 and c2.
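The width and area computations can be sketched without OpenCV: the shoelace formula reproduces what cv2.contourArea returns for a simple polygon, and the width is simplified here to the largest pairwise contour distance rather than the paper's scan-line construction. Names and the toy contour are illustrative:

```python
import numpy as np

def pool_parameters(contour):
    """Front-pool geometric parameters: width W_b = max(D_c1c2) (Eq. (8)),
    half-length from the width-line midpoint o, and area S by the shoelace
    formula (equivalent to OpenCV's contourArea for simple polygons)."""
    diffs = contour[:, None, :] - contour[None, :, :]
    dists = np.linalg.norm(diffs, axis=2)
    i, j = np.unravel_index(dists.argmax(), dists.shape)
    width = dists[i, j]                                 # W_b, the melting width
    x, y = contour[:, 0], contour[:, 1]
    area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
    midpoint = 0.5 * (contour[i] + contour[j])          # point o on line c1c2
    half_len = np.linalg.norm(contour - midpoint, axis=1).max()  # L_half
    return width, half_len, area

# square "pool" contour as a sanity check
square = np.array([[0.0, 0.0], [2.0, 0.0], [2.0, 2.0], [0.0, 2.0]])
w, lh, s = pool_parameters(square)
print(round(w, 3), round(s, 3))  # -> 2.828 4.0
```

In practice the contour would be the 29 smoothed feature points of Fig. 9 rather than a toy square.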

4 Models to Predict the Backside Width of Weld Pool

The shape parameters of the front molten pool extracted by the improved cascade regression, together with the current and voltage values from the Hall sensor, constitute the input space of the model for predicting the backside width of the weld pool; the backside width itself is the output space. An Xgboost


model is established to predict the backside width of the weld pool, which is measured segment by segment with a laser rangefinder. The general form of the Xgboost model is the additive model shown in Eq. (9):

ŷi = φ(xi) = Σ(k=1..K) fk(xi), fk ∈ F   (9)

In Eq. (9), ŷ represents the predicted backside width of the weld pool, and the input x includes the front molten pool parameters and the current and voltage values. The loss function, shown in Formula (10), is divided into two parts: the training loss and the complexity of the trees.

L(φ) = Σi l(ŷi, yi) + Σk Ω(fk)   (10)



In Eq. (10), l(ŷi, yi) represents the difference between the predicted and measured backside width, and Ω(fk) represents the complexity of the tree structure. Linear regression, Bayesian ridge regression, GBR regression, SVR regression, and the Xgboost model are compared; 550 data sets are taken as the training input and 110 data sets as the test set. The results of these regression models are shown in Table 6. The mean square error (MSE) in Table 6 is calculated as Eq. (11); the smaller the MSE, the more accurate the prediction model. The mean absolute error (MAE) is calculated as Eq. (12); the smaller the MAE, the more accurate the model. R² is calculated as Eq. (13); the closer R² is to 1, the more reliable the model. The explained variance (EV) is calculated as Formula (14); the larger the EV, the better the interpretability of the model.

MSE = (1/m) Σ(i=1..m) (fi − yi)²   (11)

Table 6 Different prediction results of backside width of weld pool

Regression metrics | EV       | MAE      | MSE      | R²
Bayesian ridge     | 0.048105 | 0.181657 | 0.048654 | 0.048105
Linear             | 0.095233 | 0.175620 | 0.046249 | 0.095152
SVR                | 0.128772 | 0.171270 | 0.040570 | 0.118066
GBR                | 0.884569 | 0.061516 | 0.005900 | 0.884569
XGBR               | 0.991432 | 0.004395 | 0.000438 | 0.991432
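Eqs. (11)–(14) and the train/test split can be sketched as below. To stay dependency-free, an ordinary least-squares baseline stands in where the paper fits xgboost's XGBRegressor, and the data are synthetic; the variable names are illustrative:

```python
import numpy as np

def metrics(y_true, y_pred):
    """Eqs. (11)-(14): MSE, MAE, R^2, and explained variance (EV)."""
    err = y_pred - y_true
    mse = np.mean(err ** 2)
    mae = np.mean(np.abs(err))
    r2 = 1.0 - np.sum(err ** 2) / np.sum((y_true - y_true.mean()) ** 2)
    ev = 1.0 - np.var(y_true - y_pred) / np.var(y_true)
    return mse, mae, r2, ev

rng = np.random.default_rng(2)
X = rng.normal(size=(550, 6))                   # pool parameters + current + voltage
w_true = rng.normal(size=6)
y = X @ w_true + 0.01 * rng.normal(size=550)    # synthetic backside width

X_train, X_test = X[:440], X[440:]
y_train, y_test = y[:440], y[440:]
# An xgboost.XGBRegressor would be fit here in the paper's setup; a linear
# least-squares baseline keeps this sketch dependency-free.
w = np.linalg.lstsq(X_train, y_train, rcond=None)[0]
mse, mae, r2, ev = metrics(y_test, X_test @ w)
print(mse < 0.01, r2 > 0.99)
```

On this synthetic linear data the baseline is near-perfect; on the paper's real data the tree ensemble outperforms linear models, as Table 6 shows.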


MAE = (1/m) Σ(i=1..m) |fi − yi|   (12)

R² = 1 − Σ(i=1..m) (fi − yi)² / Σ(i=1..m) (ȳ − yi)²   (13)

EV(y, ŷ) = 1 − Var{y − ŷ} / Var{y}   (14)

In Eqs. (10)–(14), fi represents the predicted backside width for the i-th group of data, yi represents the measured backside width for the i-th group, ȳ represents the average value of y, and Var represents the variance. The error curves of linear regression, Bayesian ridge regression, GBR regression, SVR regression, and Xgboost are shown in Fig. 11. From Table 6 and Fig. 11, it is clear that compared with linear regression, ridge regression, GBR regression, and SVR regression, XGBR has a smaller regression error and larger EV and R² values, so its interpretability is stronger. The test results and analysis show that as the backside melting width increases, the penetration state transitions from "not penetration" to "over penetration," and even to burn-through; as the backside melting width decreases, the penetration state transitions from "over penetration" back to "not penetration." In the

Fig. 11 Error curves of different models to predict the back-melt width
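The four error metrics in Eqs. (11)–(14) are straightforward to compute; a minimal NumPy sketch (the arrays below are illustrative, not the experimental backside-width data):

```python
import numpy as np

def regression_metrics(f, y):
    """MSE, MAE, R^2 and explained variance (EV) of predictions f
    against measurements y, following Eqs. (11)-(14)."""
    f, y = np.asarray(f, dtype=float), np.asarray(y, dtype=float)
    mse = np.mean((f - y) ** 2)                                    # Eq. (11)
    mae = np.mean(np.abs(f - y))                                   # Eq. (12)
    r2 = 1.0 - np.sum((f - y) ** 2) / np.sum((y.mean() - y) ** 2)  # Eq. (13)
    ev = 1.0 - np.var(y - f) / np.var(y)                           # Eq. (14)
    return mse, mae, r2, ev

# Illustrative widths in mm
measured = [4.2, 5.1, 6.3, 7.0, 8.4]
predicted = [4.0, 5.3, 6.1, 7.2, 8.5]
mse, mae, r2, ev = regression_metrics(predicted, measured)
```

Note that R² and EV coincide only when the residuals have zero mean; a systematic bias lowers R² below EV, which is one reason both appear in Table 6.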


Y. Luo et al.

In the 3 mm aluminum alloy TIG welding experiment, a backside melt width of less than 4 mm generally indicates incomplete penetration, while a backside melt width greater than 10 mm indicates over-penetration. As a result, the backside melt width can be predicted accurately from the shape of the front molten pool and the welding parameters, and the backside melt width in turn reflects the penetration state well.
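The width thresholds above amount to a simple decision rule; a hypothetical helper (the 4 mm and 10 mm bounds are the values quoted for the 3 mm plate; the label strings are illustrative):

```python
def penetration_state(back_width_mm: float) -> str:
    """Map a predicted backside melt width to a penetration state using
    the thresholds reported for 3 mm aluminum alloy TIG welding."""
    if back_width_mm < 4.0:
        return "not penetration"    # backside too narrow: incomplete
    if back_width_mm > 10.0:
        return "over penetration"   # backside too wide: risk of burn-through
    return "penetration"            # acceptable full penetration

print(penetration_state(6.5))  # → penetration
```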

5 Welding Penetration Identification Model

A convolutional neural network (CNN) is a special neural network for image recognition. It mainly comprises a data input layer, convolution layers, activation layers, pooling layers, fully connected layers, and an output layer. The convolution, activation, and pooling layers can be stacked to extract image features in increasing detail, while the fully connected layers act as a classifier (an ordinary neural network). The hand-crafted feature extraction process is replaced by convolution and pooling layers, so that feature extraction itself becomes weight training; this automatic feature extraction makes the discriminant model more accurate. Classic CNN architectures such as ResNet and InceptionNet require millions of training samples, and their layer and parameter counts are huge; training a welding penetration state discrimination model with such networks would inevitably over-fit. Therefore, a smaller deep learning network should be established.

This article divides the selected 2433 front molten pool images into four categories: penetration, unfused, burn-through, and over-penetration. Among them there are 973 normal (penetration) images, 634 unfused images, 388 burn-through images, and 438 welding offset (over-penetration) images. Figure 12 shows example images of the four penetration state classes. The ratio of training set to test set is 8:2. A total of 4966 pictures were obtained by random rotation augmentation; the test set contains 390 normal pictures, 254 unfused pictures, 156 burn-through pictures, and 176 welding offset (over-penetration) pictures.

The CNN structure built with Tensorflow in this article can be divided into three parts: the first part is the input layer; the second part comprises the convolution, pooling, and down-sampling layers; the third part is the fully connected layer and the softmax classification layer. The convolution and pooling layers perform feature extraction, and the fully connected and softmax layers complete image classification and recognition. The specific framework is shown in Fig. 13. The specific algorithm flow is as follows:

1. Input: 1280 × 1024 three-channel color pictures; to reduce the amount of calculation, each picture is converted to 224 × 224.
2. Training set augmentation: the 2483 pictures of the four classes are expanded to 4966 pictures through random rotation.


Fig. 12 Classic welding penetration state classification pictures (Class0: burn-through; Class1: unfused; Class2: penetration; Class3: over-penetration)

Fig. 13 Schematic diagram of the CNN model

3. The first convolution layer conv1: 16 kernels of 20 × 20.
4. The first pooling layer: 8 × 8 max-pooling; the feature map becomes 110 × 110 × 16 (16 is the number of channels).
5. The second convolution layer conv2: 32 kernels of 20 × 20.
6. The second pooling layer: 8 × 8 max-pooling; the feature map becomes 53 × 53 × 32.
7. The third convolution layer conv3: 64 kernels of 20 × 20.
8. The third pooling layer: 8 × 8 max-pooling.
9. The fourth convolution layer conv4: 128 kernels of 12 × 12.
10. The fourth pooling layer: 8 × 8 max-pooling.
11. The fifth layer: 4 × 4 convolution kernels.
12. Fully connected layer: the output of the fifth layer is flattened into a one-dimensional vector as the input of this layer.
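The feature-map sizes quoted in steps 3–11 can be sanity-checked with the standard convolution/pooling output-size formula. A small sketch follows; the stride and padding values are assumptions (the text does not state them), so the numbers produced here illustrate the bookkeeping rather than reproduce the quoted 110 × 110 map exactly:

```python
def out_size(n: int, k: int, stride: int = 1, pad: int = 0) -> int:
    """Spatial output size of a convolution or pooling layer:
    floor((n + 2*pad - k) / stride) + 1."""
    return (n + 2 * pad - k) // stride + 1

# 'valid' 20x20 convolution on the 224x224 input (assumed stride 1)
n = out_size(224, 20)            # 205
# an 8x8 pooling window, assuming stride 2
n = out_size(n, 8, stride=2)     # 99
# 'same' padding instead preserves the size: pad = (k - 1) // 2 for odd k
assert out_size(224, 3, stride=1, pad=1) == 224
```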


Fig. 14 Loss of the model decreases with the number of global steps

The declining loss curves for training and testing of the CNN model constructed in this article are shown in Fig. 14, and the rising accuracy curves are shown in Fig. 15. The final per-class accuracies are shown in Table 7. The higher the accuracy and the faster the loss function drops, the more reasonable the model.

Fig. 15 Accuracy of the model increases with the number of epochs

Table 7 CNN model accuracy

| Classification | Number of test images | Number of accurately classified images | Accuracy (%) |
|---|---|---|---|
| Normal | 390 | 378 | 96.92 |
| Welding offset | 254 | 249 | 98.03 |
| Lack of fusion | 156 | 145 | 92.95 |
| Burn-through | 176 | 173 | 98.30 |


6 Conclusion

From the second chapter of this article, the improved enhanced cascade regression model can extract the contour of the weld pool accurately and in real time. Meanwhile, based on the shape parameters of the front molten pool together with the welding current and voltage, the Xgboost regression model predicts the backside melt width with smaller errors and better reliability and interpretability than the linear regression, ridge regression, SVR, and GBR models. The data analysis in Chapter 3 shows that the CNN model can accurately identify the penetration state of aluminum alloy TIG welding from the front molten pool image; the generalization of the model is very good, and its accuracy is very high.

Acknowledgements This work is partly supported by the National Natural Science Foundation of China (No. 61873164).


Research on Welding Transient Deformation Monitoring Technology Based on Non-contact Sensor Technology

Ziquan Jiao, Zhiqiang Feng, Junfeng Han, and Weiming Huang

Abstract As shipbuilding engineering develops toward structural optimization and energy saving, metal welded structures are used more and more widely in engineering, and the welding deformation of these structures has become a hot and general research topic in the welding field; how to accurately detect and effectively control welding deformation has become an important research direction. This paper analyzes research on welding deformation detection with different sensing technologies. The results show that different sensing technologies achieve good results in different application environments: non-contact detection is more intelligent and more accurate, while contact detection is more reliable and more widely used in industry. However, given the automation and intelligence requirements of modern industry, developing more non-contact detection methods for shipbuilding processing is the trend.

Keywords Welding deformation · Non-contact detection · Contact detection

1 Introduction

In the shipbuilding process, welding deformation seriously affects the quality of welded components because of the large size of the work-pieces and the complex manufacturing process. This has become one of the key problems restricting the development of ship welding automation. In view of this, technicians have done a lot of research on the prediction and control of welding deformation and have made great progress [1–3]. The welding process is a complex, multi-dimensional, multi-parameter nonlinear process. It is difficult to accurately predict the stress and

Z. Jiao (B) · Z. Feng (B) · J. Han · W. Huang
School of Mechanical and Marine Engineering, Beibu Gulf University, Qinzhou 535011, China
e-mail: [email protected]
Z. Feng e-mail: [email protected]

© Springer Nature Singapore Pte Ltd. 2021
S. Chen et al. (eds.), Transactions on Intelligent Welding Manufacturing, https://doi.org/10.1007/978-981-33-6502-5_9


deformation of welding structures, especially the deformation behavior of complex structures, using simple empirical formulas [4–7]. The most effective way to predict welding deformation is the combination of experiment and numerical simulation, and the accuracy of a numerical model of the welding process usually needs to be verified by test results. Therefore, effective and accurate welding deformation detection technology is the premise of realizing welding deformation control. The development direction in the field of welding is to accurately detect welding deformation through experimental means, understand its evolution mechanism, establish an accurate prediction model of welding deformation, and then realize active control of welding deformation. Welding deformation control can be achieved through welding structure design and optimization of the welding process and technological parameters; however, as the first step toward accurate control, accurate detection of welding deformation must be realized. This paper comprehensively analyzes and summarizes welding deformation detection technology in recent years, summarizes the advantages and limitations of the relevant research, and forecasts the future development direction of this detection technology, laying a theoretical foundation for automatic, intelligent welding deformation control.

2 Welding Deformation Detection Technology

According to their time correlation, welding deformation detection technologies can be divided into static and transient measurement. Static measurement determines the welding deformation in actual production with tools such as a tape measure, square, micrometer, or three-dimensional coordinate instrument, or by non-contact optical interferometry, by measuring the change of a marked distance before and after welding. Although static deformation measurement is simple and fast, it cannot reflect the transient information of welding deformation. In recent years, with the development of computer and sensing technology, welding deformation measurement has begun to develop toward automation and intelligence, which makes transient measurement of welding deformation possible [8–10]. Transient measurement collects the change of welding deformation in real time through displacement sensing elements during the welding process, reflecting how the welding deformation changes with time. Figure 1 shows the technical classification of welding transient deformation measurement. Contact and optical non-contact measurement techniques are widely used in transient deformation measurement, and the application of these two methods to welding transient deformation measurement is mainly discussed here.


Fig. 1 Classification of transient deformation measurement techniques

2.1 Non-contact Transient Deformation Detection

Non-contact transient deformation measurement systems include electronic speckle interferometry systems, laser contour measurement systems, laser displacement sensing measurement systems, optical imaging measurement systems, etc.; under certain circumstances these systems can be used together to measure welding transient deformation. Non-contact optical imaging measurement is based on principles from optics, in which physical quantities are converted into coordinate points on the specimen surface through appropriate algorithms. For example, stripe light or raster light is projected onto the surface of the welded part, a CCD camera or other imaging system quickly and accurately captures the image (surface point cloud data) reflected from the part, and the 3D coordinates of the pixel points within the measurement range are computed, so as to reflect the transient deformation of the welded component. This method effectively reduces manual measurement planning and simplifies the measurement process. Wang et al. [11] measured the microscopic deformation of CO2 laser bending specimens in real time using a laser beam reflection amplification system; a camera with a frequency of 24 Hz recorded the motion of the spot during the interaction between the laser and the material. The measurement results show that this method reflects the micro-deformation in the laser machining process in real time and accurately. Based on the deformation detection results, the thermophysical process of laser bending was studied, and the influence of processing parameters on micro-deformation was further analyzed.
Shanghai Jiao Tong University proposed an online measurement method based on laser displacement sensors for the post-weld deformation of high-speed railway roof profiles, shown in Figs. 2 and 3. The triangular laser ranging method was used to measure the roof deformation, so as to realize online and rapid detection


Fig. 2 Application of digital detection system

Fig. 3 Error analysis and comparison between self-designed detection system and traditional detection method [12]

of the roof with high precision and stability. At the same time, the error sources were analyzed using a visual positioning error compensation method, and an error compensation model was established by univariate linear regression. A standard-parts system was designed for the calibration of the measuring system [12]. Literature [13] used a CCD camera combined with a narrow-band filter to record, in real time, the transient deformation of the work-piece during laser forming of aluminum alloy; the results show that the measurement accuracy of the transient deformation can reach ±12 μm. Gokri et al. used a digital camera to capture images of the transient deformation during pulsed laser welding of low-carbon steel sheet at 20 frames per second. This method can monitor the deformation in real time during welding and record the overall morphology of the instantaneous deformation of the test plate.
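The triangular laser ranging used in these systems reduces to similar triangles between the camera-laser baseline and the imaged spot offset. A minimal sketch of the parallel-optical-axis configuration (all parameter values are illustrative, not those of any system described here):

```python
def depth_from_spot(baseline_mm: float, focal_mm: float,
                    image_offset_mm: float) -> float:
    """Laser triangulation with the laser axis parallel to the camera's
    optical axis: by similar triangles, z = f * b / x, where b is the
    baseline, f the focal length, and x the spot offset on the image."""
    return focal_mm * baseline_mm / image_offset_mm

# e.g. 50 mm baseline, 16 mm lens, spot imaged 2 mm off-axis
z = depth_from_spot(50.0, 16.0, 2.0)  # 400.0 mm
```

A surface displacement toward the sensor increases the spot offset x, so small depth changes map to measurable spot motion; commercial displacement sensors refine this geometry (lens tilt, sub-pixel spot centroiding) to reach micrometer-level repeatability.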


The composition of the measurement system and the deformation morphology of the test plate after welding are shown in Fig. 4. Yassine [14] presented a full-pose measurement technique based on laser triangulation and a monocular vision scheme, shown in Figs. 5 and 6. The method utilizes laser triangulation ranging with the optical axes of the camera and the laser parallel. The proposed scheme can realize measurement of 6 degrees of freedom (DOF), including the depth z and the angles φ_yaw and φ_pitch. The full-pose parameters of non-flat objects can be determined using a synthetic marker; then the remaining three degrees of freedom, the horizontal, vertical, and rolling angle, can be obtained. The

Fig. 4 A transient deformation measurement system based on optics

Fig. 5 The prototype of 3D sensor system


Fig. 6 Feature matching using SURF

proposed measurement technique can be applied in several fields, and it allows measurement of the 6 degrees of freedom of an object with high accuracy; the optimized sensor operated with repeatability within 0.5 μm and standard deviation within 0.17 μm [14]. Dai et al. [15] used the magnetic field strength and magnetic induction intensity of the leakage magnetic field of the weld joint to detect and analyze deformation of magnetizable materials. In theory, magnetic dipole models of welding defects were used to calculate and analyze the leakage magnetic field, with different dipole models applied to different types of welding defects; the distribution of the magnetic charge factor was analyzed theoretically, and different welding defect shapes were modeled according to the requirements of the actual test. The relationship between the leakage field signal of the deformed welded joint and the defect depth, angle, and width was studied, as shown in Fig. 7. Non-contact transient deformation measurement systems can accurately measure the transient deformation of welded parts with high measurement accuracy. However, the system structure is complex, the equipment cost is high, and the volume of collected data is huge, which makes the corresponding data processing tedious. Moreover, the mass data obtained by whole-field measurement are generally scattered and require corresponding software for data screening and feature extraction. In addition, the welding arc strongly disturbs optical measuring systems. Therefore, the application of non-contact optical transient deformation measurement to welding deformation measurement is limited.

2.2 Contact Transient Deformation Detection Method

Contact transient deformation measurement systems can measure the transient deformation during welding in real time and accurately, and the measured results can serve as the basis for numerical models that predict the thermal behavior of the welding process.


Fig. 7 The alternating detection system of weld defects

Camilleri et al. [16, 17] proposed a finite element algorithm suitable for engineering applications that can predict welding deformation accurately and efficiently. To verify the accuracy of the finite element algorithm, a displacement sensor was used to measure the transient deformation of the welded test plate, and the consistency of the results proves that the algorithm is accurate and reliable. Reference [18] obtained the curve of transient strain versus time (indirectly reflecting deformation versus time) through real-time detection of the transient strain of the girth weld of a laser-welded pipeline, and analyzed the welding process in combination with three-dimensional finite element simulation. An optimization strategy for the welding process was put forward to control the welding deformation, and the method was applied in industrial production.


The researchers developed a general system that can comprehensively measure the thermal cycling parameters and the transient deformation process during welding. In this system, an inductance displacement sensor is used as the sensing element to measure the welding transient deformation of the test plate by contact, realizing multipath transient deformation measurement during welding. Wang et al. used this measuring system to measure the transient deformation of aluminum alloy welding, discussed the transient angular deformation characteristics of the welding process, defined the characteristic points of the dynamic deformation curve as the maximum down-warping point, the up-warping inflection point, and the up-warping equilibrium point (Fig. 8), and discussed the relationship between these characteristic values and the welding parameters. According to the analysis of the transient angular deformation measurements in Refs. [19, 20], the transient angular deformation curves can be divided into two types: curves with and without inflection points. Based on the characteristics of these two curve types, an accurate prediction model of welding angular deformation is established.
On this basis, the welding angular deformation behaviors of aluminum alloy and titanium alloy sheets under the same welding conditions are compared; Fig. 9 shows the measured transient angular deformation of the two alloys under the same welding conditions. To reflect the contact state between the welding fixture and the welded parts, a finite element model (multi-body coupling model) is an effective method to predict the temperature, stress, and deformation of the welding process. Reference [21] discusses the applicability of different models by comparing the angular deformation calculated by two-dimensional and three-dimensional finite element models. Transient angular deformation was measured during the welding test, and the accuracy of the finite element model was verified by the test results (Fig. 10). Liu et al. [22] established a more complex multi-body coupling model of the welding process (including the welding platform, test plate, jig, and fixture tooling) and, combining the transient angular distortion measurements, analyzed the relationship between the transient angular distortion and the external constraints, discussing the effects of constraint position, constraint magnitude, constraint release, and limiting torque on the transient angular deformation (Fig. 11).
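Sorting a sampled angular-deformation curve into the "with inflection point" or "without inflection point" type described above can be automated by checking the discrete second difference for a sign change; a hypothetical sketch (the function and tolerance are illustrative, not taken from the cited work):

```python
import numpy as np

def has_inflection(theta, tol=1e-9):
    """True if the sampled curve theta(t) changes concavity, i.e. its
    discrete second difference changes sign (ignoring flat segments)."""
    d2 = np.diff(np.asarray(theta, dtype=float), n=2)
    signs = np.sign(d2[np.abs(d2) > tol])  # drop numerically flat points
    return bool(np.any(signs[:-1] * signs[1:] < 0))

t = np.linspace(0.0, 1.0, 101)
print(has_inflection(t ** 2))                 # convex everywhere -> False
print(has_inflection(np.sin(2 * np.pi * t)))  # concavity flips -> True
```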


(a) Characteristic curve of transient angular deformation of aluminum alloy with inflection point

(b) Characteristic curve of transient angular deformation of aluminum alloy without inflection point

Fig. 8 Transient angular deformation characteristics of aluminum alloy

Fig. 9 Comparison of transient angular deformation curves of aluminum alloy and titanium alloy without inflection point


Fig. 10 Transient angular deformation measuring device

Fig. 11 Relationship between transient deformation and binding force and constraint position

With the development of detection technology, the detection of welding transient deformation has gradually moved from laboratory measurement to the measurement of large engineering parts. Literature [23] reported a contact-based transient deformation detection method for real-time monitoring of the welding temperature and welding deformation of a large thick-walled stainless steel pipe (pipe specification 680 × 70 mm, the two butted pipe sections being 320 mm and 330 mm long, respectively), collecting the welding temperature and transient deformation throughout the process, as shown in Fig. 12. Data analysis of the transient deformation results suggested that adopting a corresponding continuous welding procedure can achieve control of the welding deformation. The analysis results show that the


Fig. 12 Engineering application of transient angular deformation measurement

contact transient deformation detection method is suitable for engineering application and its measurement results are accurate and reliable, which has important theoretical and engineering significance for improving the reliability analysis of large welded structures in engineering applications. Wang et al. introduced the composition and working principles of a high-accuracy detection system for large hydraulic steel structures. The system was used to monitor the displacement of the stay ring in real time during welding of the closure weld joint of a large hydraulic steel penstock, and to monitor the welding contraction and the axial displacement of the steel pipe with a vernier caliper and dial indicator. The measurement results show that the deformation meets the design and engineering requirements [24].

3 Welding Deformation Control Method

Welding deformation seriously affects the integrity and safety of a structure and greatly reduces the fatigue strength and corrosion-cracking resistance of the material, leading to fracture under external load and degrading the service performance of the structure. Therefore, how to control welding deformation has always been a topic of great concern to welding workers. By studying its formation process and influencing factors, researchers have put forward measures to control and eliminate welding deformation from different perspectives: before welding [25, 26], during welding [27], and after welding [28]. The classification is shown in Fig. 13. No matter whether before, during, or after welding, the key to reducing and controlling welding deformation is to effectively recover the compressive plastic deformation caused by heating. Prestrain applied during welding can reduce or completely cancel the compressive plastic strain caused by welding heating, so that the residual plastic strain of the welded part stays at a very low level during cooling, thereby realizing effective control of welding deformation. Literature [29] shows that transient prestrain control measures can reduce welding wave deformation, angular


Fig. 13 Classification of welding deformation control methods [25, 26]

deformation, and bending deformation. By applying a certain prestrain during the welding process, that is, adopting appropriate clamping measures to generate plastic deformation of the welding material, online control of welding deformation can be achieved.

4 Conclusion

Welding deformation of metal welded structures has become a hot and general research topic in the welding field because of the automation and intelligence requirements of modern industry, and it is necessary to implement effective and accurate detection and control of welding deformation. Welding deformation detection technology can be divided into static and transient measurement. Static measurement is realized by measuring the change of a marked distance before and after welding by means of tape, square, micrometer, or three-dimensional coordinate instrument, or by non-contact optical interference methods. Transient measurement collects the welding deformation in real time through displacement sensors during welding. Given the automation and intelligence requirements of modern industry, developing more non-contact detection methods for shipbuilding processing is the trend.

Acknowledgements This work was supported by the National Natural Science Foundation of China (No. 51969001), the Guangxi Natural Science Foundation of China


(No. 2018GXNSFAA138080; No. 2016GXNSFAA380188), the Guangxi Major Science and Technology Projects of China (No. GuikeAA17292003), the Innovation Project of Guangxi Graduate Education (No. YCBZ2019050), and the Guangxi Science and Technology Base and Talent Project of China (No. GuikeAD18281007).


Binocular Stereo Vision and Modified DBSCAN on Point Clouds for Single Leaf Segmentation Chengyu Tao, Na Lv, and Shanben Chen

Abstract The task of segmenting individual leaves is essential in certain computer vision applications, for instance the recognition and localization of plant leaves by agricultural robots. Compared with 2D-image-based algorithms, segmentation algorithms operating on 3D point clouds of plants generally perform better in these tasks because they exploit depth information, which provides intuitive and powerful features for distinguishing and separating individual leaves. In this paper, we propose a real-time sparse stereo matching algorithm that combines two basic cost functions, the sum of absolute differences (SAD) and zero-mean normalized cross-correlation (ZNCC), and generates multiple depth hypotheses. Because of the extremely weak texture of leaves, the reconstructed sparse point clouds are corrupted by a noticeable amount of noise. We therefore denoise the raw point clouds and evaluate the affinities between points using Tensor Voting, a well-known algorithm for inferring the underlying structure of sparse and noisy data. Finally, a modified density-based spatial clustering of applications with noise (DBSCAN) algorithm, based on the newly defined distance metric, is used to cluster the refined point clouds; individual leaves are represented by these clusters. Experiments on simulated and real sceneries show that the proposed algorithm segments individual leaves well. This algorithm is the core of the tea-leaf localization module of our self-developed tea picking robot. Keywords Stereo matching · Multiple depth hypotheses · Tensor voting · Affinity metric · DBSCAN · Binocular stereo vision

C. Tao · S. Chen (B) School of Materials and Engineering, Shanghai Jiao Tong University, Shanghai 200240, China e-mail: [email protected] N. Lv (B) School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China e-mail: [email protected] © Springer Nature Singapore Pte Ltd. 2021 S. Chen et al. (eds.), Transactions on Intelligent Welding Manufacturing, https://doi.org/10.1007/978-981-33-6502-5_10



1 Introduction

The application of computer vision technology in the field of agricultural automation has become a hot research topic in recent years. Many studies use these techniques to monitor plant growth and improve crop yields [1], to identify and classify plants [2, 3], or to perform picking or harvesting tasks with agricultural robots [4–6]. Since leaves carry important information about the identity and growth status of a plant, such as texture, color, and shape, they also play an important role in some special agricultural applications [7, 8]. In these tasks, automatic leaf segmentation and recognition are essential but challenging. For instance, in some tea species the detection of a tea bud can only be judged from its morphology, which requires segmenting a single leaf to acquire its shape representation. However, the complicated spatial structure of plants and the color and textural similarity of blades make it difficult to design an efficient image segmentation algorithm. Furthermore, the universality of the segmentation algorithm requires it to use the most essential and common characteristics of leaves. In this paper, each leaf is represented by a manifold embedded in 3D space, which is a more natural property of blades than specific colors or textures. According to the type of input data, current single-leaf segmentation algorithms can be divided into two categories: those based on 2D images and those based on 3D point clouds. Segmentation methods on 2D images can be further classified into region-based algorithms, edge-based algorithms, and other algorithms such as deep learning. Pape et al. [9] create two 3D color histograms to separate foreground and background, and then a Euclidean-distance-map-based method and a region-growing method are used to detect leaf centers and label the various leaves. Xu et al. [10] extract color and texture features with methods such as the percent intensity histogram, the percent differential histogram, the Fourier transform, and wavelet packets. However, these 2D-image-based algorithms perform relatively poorly on real sceneries with large areas of intersection and occlusion between leaves and with strong illumination changes, which cause many specious or ambiguous boundaries. Many algorithms based on 3D point clouds can effectively avoid these problems, since depth information provides the hierarchical structure of plants intuitively. Itakura et al. [11] use a Euclidean-distance transformation map in the top view of plant point clouds to extract seeds, which initiate a 3D region-growing algorithm. Li et al. [12] propose an algorithm comprising point cloud pre-processing, facet over-segmentation, and facet region growing. In subsequent work [13], they define a new 3D joint filtering operator to denoise and separate plant point clouds; however, this method performs badly on sparse point clouds with large areas of overlap. Teng et al. [14] reconstruct plant point clouds with an optical flow algorithm and then use 3D Euclidean distance and color differences jointly to define the affinities; spectral clustering based on the normalized-cut algorithm and lazy snapping are applied to separate and refine the point clouds, respectively. Unfortunately, weakly textured leaves may not contain enough distinct feature points, which may


cause large errors or even outright failure. Moreover, 3D Euclidean distance cannot reflect the geometric property accurately when two points are close in Euclidean space but lie on two different manifolds. Hu et al. [15] propose a method based on manifold distance estimated by local principal component analyzers (PCA), which is not well suited to sparse and noisy point clouds. The aim of this work is to design a real-time sparse 3D reconstruction method for weakly textured plants using binocular stereo vision and to realize individual leaf segmentation. This task is a key component of the visual guidance of our self-developed tea picking robot, which localizes the picking points of tea leaves. The main contributions of this work are twofold: (1) We propose a new binocular stereo matching algorithm that combines two basic cost functions, SAD and ZNCC, to generate multiple depth hypotheses. This algorithm runs in real time and obtains reliable sparse plant point clouds. (2) We propose a modified DBSCAN based on a new affinity matrix defined by Tensor Voting to cluster different manifolds, which represent individual leaves.

2 Sparse Plant Point Cloud Acquisition

2.1 Overview of the Whole Procedure to Obtain Plant Point Clouds

In this section, the objective is to compute reliable sparse point clouds in real time from two close-view images. An overview of the algorithm is shown in the flow chart of Fig. 1. There are two main steps: the quasi-Euclidean rectification of the input images and the binocular stereo matching process. In the latter, the confidences of the two cost volumes, SAD and ZNCC, are combined into a single cost volume. Next, this cost volume is fused with the weighted cost volumes of other points in the same segmented region, and a criterion decides whether multiple depth hypotheses need to be generated. Finally, we obtain the output sparse point clouds. The algorithm runs in real time when implemented in parallel on a GPU.

2.2 Quasi-Euclidean Stereo Rectification

Stereo rectification is usually the first step of a stereo matching algorithm. To recover the geometric relationship correctly, the rectification must preserve the property that disparity is inversely proportional to the true depth in space; only Euclidean rectification methods [16] have this property, while general projective rectification algorithms [17, 18] do not. However, for some


Fig. 1 The flow chart of acquisition process of sparse point clouds

applications, the camera intrinsic parameters may be unknown. Therefore, we use a special rectification algorithm [19] called quasi-Euclidean rectification to solve this problem. The essence of this method is to self-calibrate the intrinsic parameter $f_x$ of the camera under the assumption that the principal point is located at the center of the image and that the pixels are square, which is a common and reasonable assumption in computer vision. After rectification, the epipolar lines are horizontal, which means that the line joining the two image points projected from the same object point is parallel to the x-axis of the image plane. A pair of rectified images is shown in Fig. 2.

Fig. 2 A pair of rectified images of a plant in real scenery


2.3 Binocular Stereo Matching

In the above section, we obtained a pair of rectified images. According to the taxonomy in [20], stereo correspondence algorithms can be divided into three categories: window-based local algorithms, global algorithms, and semi-global algorithms. Among these, local algorithms have the lowest computational complexity; because of the high real-time requirement of this work, we only consider local algorithms. Numerous local algorithms consider intensity, color, texture, and edges comprehensively [21, 22]. Here, we only consider color differences because of their low computational cost. Furthermore, the sum of absolute differences (SAD) and zero-mean normalized cross-correlation (ZNCC) are chosen as cost functions. The reason for selecting them, rather than constructing a more complicated cost function, is that they are not only relatively accurate but can also be executed quickly thanks to a parallel implementation on the GPU. The SAD and ZNCC cost functions are defined in Eqs. (1) and (2), respectively:

$$C_{\mathrm{SAD}}(x, y, d) = \sum_{(a,b) \in N(x,y)} \sum_{i \in \{R,G,B\}} \left| I_i^{l}(a, b) - I_i^{r}(a - d, b) \right| \tag{1}$$

$$C_{\mathrm{ZNCC}}(x, y, d) = \frac{\sum_{(a,b) \in N(x,y),\, i \in \{R,G,B\}} ZV_i^{l}(a, b)\, ZV_i^{r}(a - d, b)}{\sqrt{\sum_{(a,b) \in N(x,y),\, i} \left( ZV_i^{l}(a, b) \right)^2} \, \sqrt{\sum_{(a,b) \in N(x,y),\, i} \left( ZV_i^{r}(a - d, b) \right)^2}} \tag{2}$$

where $ZV$ denotes the zero-mean intensity over the window $N(x, y)$:

$$ZV_i(x, y) = I_i(x, y) - \bar{I}_i(x, y), \quad i \in \{R, G, B\} \tag{3}$$

In general stereo correspondence algorithms, the disparity is defined as the point with the lowest cost, a strategy called winner-take-all. However, since plant leaves usually have weak or repetitive textures, the true disparity may be missed by this strategy. To avoid this, some researchers generate multiple hypotheses [23–25]. More recently, some works focus on the confidence of each disparity in the cost volume [26, 27], which can provide a criterion for judging whether multiple depth hypotheses are necessary; however, it is difficult to construct a simple criterion. Inspired by the fusion of multiple depth cues in monocular vision [28], we combine the confidences of the two basic cost volumes to make a relatively simple criterion possible. We use the exponential model [26] to define the confidence of the SAD cost volume. The fused cost volume is formulated as:

$$C_{\mathrm{combined}}(x, y, d) = \frac{e^{-C_{\mathrm{SAD}}(x, y, d)/\mu}}{\sum_s e^{-C_{\mathrm{SAD}}(x, y, s)/\mu}} \cdot C_{\mathrm{ZNCC}}(x, y, d) \tag{4}$$
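To make Eqs. (1), (2), and (4) concrete, here is a minimal numpy sketch for a single pixel, simplified to one grayscale channel (the paper sums over the R, G, B channels); the window size and the scale μ are illustrative values, and the candidate window at disparity d is taken at x − d in the right image, consistent with horizontal epipolar lines after rectification.

```python
import numpy as np

def sad_cost(wl, wr):
    # Eq. (1) for one pixel: sum of absolute window differences.
    return np.abs(wl - wr).sum()

def zncc_cost(wl, wr):
    # Eq. (2) for one pixel: zero-mean normalized cross-correlation.
    zl, zr = wl - wl.mean(), wr - wr.mean()
    denom = np.sqrt((zl ** 2).sum() * (zr ** 2).sum())
    return (zl * zr).sum() / denom if denom > 0 else 0.0

def combined_cost(left, right, x, y, d_max, win=3, mu=10.0):
    # Eq. (4): weight the ZNCC volume by the normalized exponential
    # confidence of the SAD volume.
    h = win // 2
    wl = left[y - h:y + h + 1, x - h:x + h + 1]
    windows = [right[y - h:y + h + 1, x - d - h:x - d + h + 1]
               for d in range(d_max)]
    sad = np.array([sad_cost(wl, w) for w in windows])
    zncc = np.array([zncc_cost(wl, w) for w in windows])
    conf = np.exp(-sad / mu)
    return conf / conf.sum() * zncc
```

Because the true disparity simultaneously minimizes SAD (maximum confidence) and maximizes ZNCC, it maximizes the product, which is the intuition behind the fusion.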


A simple criterion, based on the ratio of the second-highest peak to the highest peak of the confidence, is used to decide whether multiple disparities should be generated. If the ratio exceeds a predefined threshold, the points corresponding to the top two peaks are both regarded as possible true disparities. Even without multiple depth hypotheses, the result is still better than that of a single cost function. The effect of the fusion process can be seen in Fig. 3. In fact, the disparity map can be further refined. The assumption that two close points on the same surface of a foreground object share approximately equal disparity

Fig. 3 a–c Disparity maps generated by SAD, ZNCC, and their combination, respectively (window size 7). d–i The second and last columns show the cost volumes of the three methods at the points in the red and yellow boxes, respectively. f shows that the true disparity can be obtained by the proposed combination method. i shows that multiple depth hypotheses and their criterion are effective in this case


Fig. 4 a The result of SLIC on the left image of Fig. 2 (number of clusters: 190). b The final disparity map produced by the proposed method. c, d The sparse point clouds viewed from two different perspectives

is reasonable [22]. Here, we use SLIC [29] to over-segment the 2D image at the beginning (see Fig. 4a). The cost volume of a point is further fused with those of the other points in the same region by Eq. (5):

$$C_{\mathrm{fusion}} = C_{\mathrm{combined}} + \sum_{i \in R} \omega_i \, \mathcal{N}(0, \sigma_{\mathrm{conv}}) \ast C_{\mathrm{combined},\, i} \tag{5}$$

where $R$ denotes the over-segmented region, $\omega_i = \exp\left(-d_i / \sigma_{w_i}^2\right)$, $\mathcal{N}(0, \sigma_{\mathrm{conv}})$ is a 1D normal distribution, the operator $\ast$ denotes convolution, and $d_i$ is the distance between the two points.

The refined disparity map is shown in Fig. 4b. In fact, the multiple depth hypotheses are not generated until this step is finished. The output sparse plant point clouds are obtained from the final disparity map (see Fig. 4c, d).
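The peak-ratio criterion for multiple depth hypotheses described above can be sketched as follows; the function and its threshold value are illustrative, not the authors' exact implementation.

```python
def depth_hypotheses(conf, ratio=0.8):
    """Return one disparity index, or two when the curve is ambiguous."""
    # Interior local maxima of the 1D confidence curve.
    peaks = [i for i in range(1, len(conf) - 1)
             if conf[i] >= conf[i - 1] and conf[i] >= conf[i + 1]]
    if not peaks:  # monotone curve: fall back to the global maximum
        return [max(range(len(conf)), key=lambda i: conf[i])]
    peaks.sort(key=lambda i: conf[i], reverse=True)
    if len(peaks) > 1 and conf[peaks[1]] / conf[peaks[0]] > ratio:
        return peaks[:2]   # ambiguous: keep two depth hypotheses
    return peaks[:1]
```

A weakly textured or repetitive patch produces two comparable peaks and therefore two candidate disparities, while a well-textured patch keeps a single winner.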

3 Point Cloud Segmentation

The obtained sparse point cloud is noisy because of the weak texture of plant leaves and the occlusion between foreground and background. In this section, we use a DBSCAN algorithm based on a new affinity matrix defined by Tensor Voting to segment the input point clouds. The procedure includes four main steps. Firstly,


denoising the input point clouds based on token refinement; secondly, computing the affinities between inlier points, which quantify the contribution of voters to votees in the second Tensor Voting pass; thirdly, clustering the refined point clouds with a modified DBSCAN algorithm based on the defined affinities, which is able to separate intersecting manifolds. The last step is to reproject each cluster onto the raw 2D image.

3.1 Tensor Voting

Tensor Voting can infer underlying structure, including surfaces, curves, and junctions, in 3D space from a set of noisy and sparse data [30–32]. Each tensor in 3D space encodes three structure types: a stick-shaped (1D) normal space, a plate-shaped (2D) normal space, and a ball-shaped (3D) normal space. A tensor $T$ can be decomposed as:

$$T = (\lambda_1 - \lambda_2)\, e_1 e_1^{T} + (\lambda_2 - \lambda_3)\left( e_1 e_1^{T} + e_2 e_2^{T} \right) + \lambda_3 \left( e_1 e_1^{T} + e_2 e_2^{T} + e_3 e_3^{T} \right) \tag{6}$$

where $\lambda_i$ and $e_i$ are the eigenvalues and eigenvectors, respectively, and $\lambda_1 - \lambda_2$, $\lambda_2 - \lambda_3$, and $\lambda_3$ represent the saliency of surface $s_1$, curve $s_2$, and junction $s_3$, respectively. The contribution tensor $A_p(x)$ of a voter $p$ to its votee $x$ is:

$$A_p(x) = \sum_{d=1}^{3} \sum_{i=1}^{d} s_d^{p}\, S_{d,i}^{p}(x) \tag{7}$$

where $S_{d,i}^{p}(x)$ denotes the $i$th stick tensor vote. The stick Tensor Voting process is defined by:

$$S_p(x) = \omega(r, \theta)\, \hat{v}_c \hat{v}_c^{T} \tag{8}$$

where $\hat{v}_c$ is the implied votee normal generated by the voter normal, and $\omega(r, \theta)$ is a weight related to the voter normal and the geometric relationship of the votee with respect to the voter. Detailed information about Tensor Voting can be found in [33].
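The decomposition in Eq. (6) amounts to an eigendecomposition of the 3 × 3 tensor; a small sketch of extracting the three saliencies (variable names are ours):

```python
import numpy as np

def saliencies(T):
    """Surface, curve, and junction saliencies of a 3x3 tensor, Eq. (6)."""
    lam = np.linalg.eigvalsh(T)      # eigenvalues in ascending order
    l3, l2, l1 = lam                 # relabel so that l1 >= l2 >= l3
    return l1 - l2, l2 - l3, l3      # (s1, s2, s3)
```

A pure stick tensor (outer product of a unit normal) yields saliencies (1, 0, 0), a pure ball tensor (identity) yields (0, 0, 1); mixed tensors accumulated from many votes fall in between, which is what the later sections use to classify surface, intersection, and junction points.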

3.2 Denoising Process

As shown in Fig. 4c, d, there is a noticeable amount of noise in the input cloud, so the first step is to exclude the interference of noise. We use the maximum eigenvalue $\lambda_1$ as the criterion for judging outliers. In fact, this criterion is similar to the radius-based outlier filter [13], but it is a distance-weighted version. In


Fig. 5 a The denoised plant point clouds. b The normal ($v_n$) of each inlier point after the second Tensor Voting pass. c The result of the modified DBSCAN: points of the same color belong to the same cluster, and black points represent noise. d The 2D image segmentation obtained by reprojecting the spatial clusters onto the raw image; the background area is excluded by a color filter. The result is approximately equal to the ground truth

our experiments, we set a simple threshold $\tau_d$ to filter outliers: if $\lambda_1$ is less than $\tau_d$, the point is regarded as an outlier. The result of denoising is shown in Fig. 5a.
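A minimal sketch of the distance-weighted criterion, simplified to ball votes only (each neighbor contributes an identity tensor weighted by a Gaussian of its distance, so λ1 reduces to a distance-weighted neighbor count, exactly the "weighted radius filter" reading above); σ and τ_d values are illustrative:

```python
import numpy as np

def inlier_mask(points, sigma=1.0, tau_d=0.5):
    """Keep points whose largest accumulated eigenvalue reaches tau_d."""
    keep = np.zeros(len(points), dtype=bool)
    for j, q in enumerate(points):
        d = np.linalg.norm(points - q, axis=1)
        w = np.exp(-(d / sigma) ** 2)
        w[j] = 0.0                          # no self-vote
        T = w.sum() * np.eye(3)             # accumulated ball tensors
        lam1 = np.linalg.eigvalsh(T)[-1]    # largest eigenvalue
        keep[j] = lam1 >= tau_d
    return keep
```

Isolated points collect almost no vote mass and fall below the threshold, while points inside a leaf surface accumulate strong support from their neighbors.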

3.3 Affinity Matrix

The construction of the affinity matrix is the core of our algorithm: it endows DBSCAN with the ability to cluster intersecting manifolds. Wang et al. [34] first introduced the manifold distance to spectral clustering; however, they use mixtures of probabilistic principal component analyzers, which are not well suited to sparse point clouds. In our algorithm, we define a new affinity metric that considers not only the relationship of the tangent spaces but also the distance between points. After the denoising process, we execute Tensor Voting again. The new metric is defined as:

$$\operatorname{Aff}(i, j) = \begin{cases} \omega(i, j) \cdot \left| \hat{v}_c^{T}(i, j)\, v_n(j) \right|, & \left| \hat{v}_c^{T} v_n \right| \ge c_t \\ 0, & \text{otherwise} \end{cases} \tag{9}$$

where $\omega(\cdot)$ is defined in Eq. (8) and $v_n(j)$ is the normal of point $j$ estimated by the second Tensor Voting pass. The normals of the inlier points estimated by Tensor Voting are shown in Fig. 5b. This definition of the affinity matrix is better suited to our plant point clouds, since we need to consider a wider range of distances caused by holes or gaps in an individual blade's point cloud. The former definition of the affinity


matrix in [33, 34] is based on the assumption that the observed points are sampled approximately uniformly from the underlying manifolds. Finally, the affinity matrix is symmetrized.

3.4 Modified DBSCAN

DBSCAN is a well-known density-based clustering algorithm with two global parameters: the radius Eps, which defines the size of the neighborhood of a point, and MinPts, which is the minimal number of other points in a core point's neighborhood. There are two main advantages of using DBSCAN to cluster plant point clouds: (1) the algorithm does not need a predetermined number of clusters, and (2) it can identify outliers [35, 36]. In addition, we can exclude points in the intersection area by the ratio of their surface saliency to curve saliency (a threshold $\tau_s$). The final result is shown in Fig. 5c.
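The modification replaces the Euclidean Eps-ball with an affinity threshold: a point j is a neighbor of i when their affinity is at least Eps. A compact sketch on a precomputed symmetric affinity matrix, with illustrative parameter values:

```python
import numpy as np

def dbscan_affinity(aff, eps=0.3, min_pts=3):
    """DBSCAN where high affinity replaces small Euclidean distance.

    Returns one label per point; -1 marks noise."""
    n = aff.shape[0]
    labels = np.full(n, -1)
    cluster = 0
    for i in range(n):
        if labels[i] != -1:
            continue
        seeds = [j for j in range(n) if aff[i, j] >= eps]
        if len(seeds) < min_pts:
            continue                      # not a core point: may stay noise
        labels[i] = cluster
        queue = list(seeds)
        while queue:                      # expand the cluster
            j = queue.pop()
            if labels[j] != -1:
                continue
            labels[j] = cluster
            nb = [k for k in range(n) if aff[j, k] >= eps]
            if len(nb) >= min_pts:        # j is core: push its neighbors
                queue.extend(nb)
            # border points are labeled but not expanded
        cluster += 1
    return labels
```

Because points across an intersection receive zero affinity (Eq. (9)), density-reachability cannot leak from one manifold to the other, which is exactly what allows the clusters to represent individual leaves.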

3.5 2D Image Segmentation

By reprojecting the clusters onto the raw 2D image, we obtain several sets of cluster pixels. The SLIC over-segmented regions that correspond to the pixels of a cluster are merged into an individual segmentation area. However, the final result may contain some noise related to the background, so it is reasonable to use a green color filter to remove the non-leaf over-segmented regions. In some cases, a few points on the boundary are labeled incorrectly; we recommend using the surface boundary filter [13] to remove these edge points.
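The green color filter mentioned above can be as simple as a channel-dominance test; the 1.1 margin below is a hypothetical value, not taken from the paper.

```python
import numpy as np

def green_mask(rgb, margin=1.1):
    """True where the green channel dominates red and blue by a margin."""
    r, g, b = (rgb[..., k].astype(float) for k in range(3))
    return (g > margin * r) & (g > margin * b)
```

Applying this mask to the reprojected SLIC regions discards background superpixels while leaving leaf pixels untouched.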

4 Experiment and Discussion

In this section, we present experiments that show the performance and accuracy of the proposed algorithm. The experiments were implemented in Python on a personal laptop with a 2.4 GHz Intel Core i5.

4.1 Simulation of Manifold Segmentation

First, we simulated the segmentation of two kinds of intersection: in the first, the tips of two blades intersect each other; in the second, the apex of one leaf intersects the middle body of


Fig. 6 a, b Point clouds of the first type of intersection from two perspectives. c The segmentation of this situation. d–f The same information for the second type of intersection. The black points in the yellow elliptical regions in e and f belong to the intersection area, as judged by the curve saliencies of their tensors. The brown point in the red circular area should have been marked black but was not

another one (see Fig. 6). The occurrence of these intersections makes traditional DBSCAN [36] ineffective, whereas our method based on manifold distance can segment these intersecting point clouds. In this simulation, we found that marking and then excluding the points in the intersection area is important. The ratio threshold $\tau_s$ is usually set to 1.2; if the ratio of a point exceeds this threshold, it is labeled as an intersection point. The MinPts and Eps of DBSCAN are set to 5 and 0.2, respectively.

4.2 Experiments on Real Sceneries

4.2.1 Experiments on Herbs

We conducted four further experiments on herbs in real natural sceneries. In the binocular reconstruction procedure, we uniformly sampled pixels in the images; the sampling interval was set to 10 pixels in all experiments except the last, where it was 7 pixels, since the depth of the central leaf changes steeply, which would cause an undesirable discontinuity if sampled too densely. The window size of ZNCC and SAD ranges from 7 × 7 to 15 × 15, depending on the texture of the leaves. In the fusion of the two cost volumes, $\sigma_{\mathrm{conv}}$ and $\sigma_{w_i}$ are set to 5 and $2 d_i$, respectively, where $d_i$ is the distance between the target point and the fused point. The number of SLIC superpixels is set to around 200 in all experiments. In the clustering, we set the


scope of the neighborhood of a point to 70 in Tensor Voting and $c_t$ in Eq. (9) to 0.717. The threshold $\tau_d$ in the first token refinement by Tensor Voting is related to the density of the input point clouds; it is usually 7 in our experiments when processing an input set sampled at an interval of 10 pixels. The Eps of DBSCAN is set to 0.5, while the MinPts depends on the sample rate (usually 5 in most experiments). We use a green color filter and a depth filter to delete the background clusters. Figure 7 shows the results, including the raw images, disparity maps, clustering outputs of the point clouds, and final segmentation results. These four cases are complicated because a large amount of green background makes the boundaries ambiguous. As shown in Fig. 7h, the boundaries of most segmentation regions are quite rough. Two main reasons contribute to this: (1) the poor reconstruction performance in the occlusion area, and (2) the similar colors of the background and the areas of interest. The former can be alleviated to a certain extent by a left–right consistency check or by iteratively removing the external boundary of the segmentation region; however, both methods would force us to use a region-growing method to render the peripheral area, which may bring other troubles, such as ambiguity in the identity of the intersection area when two leaves to be segmented overlap heavily. The latter factor is more complicated and trickier; it is a very challenging segmentation task in computer vision.

4.2.2 Experiments on Tea Trees

The experimental platform used in this project is our self-designed tea picking robot. The base is a bipedal tracked robot, and the executing mechanism is a SCARA manipulator on which an industrial CMOS image sensor is mounted (see Fig. 8). We captured three pairs of images of tea leaves, each containing several tender leaves. Because of uneven illumination and the different growth conditions of the tea leaves, general segmentation criteria such as color and textural patterns are relatively ineffective in this situation; a more feasible method is to recognize tender leaves by their shapes, with color and texture as auxiliary cues. Moreover, accurate computation of the picking position requires segmenting single leaves from complicated leaf clusters. The parameters in these experiments are similar to those in the previous section, so we do not repeat them here. The results of tea leaf segmentation are shown in Fig. 9. There are some minor errors in the final segmentation results, most of which appear at the intersections of different leaves and in occlusion areas. For instance, in Fig. 9f and h, the intersection area between two leaves (the gray and brown areas, respectively) is labeled incorrectly owing to the ambiguous boundary of the leaves. The reasons are similar to those discussed in the previous section.


Fig. 7 Results of the proposed algorithm on four herb images. Each row shows the result for a specific real scenery. The four columns show the raw images, the disparity maps obtained by the proposed binocular reconstruction method, the point cloud clusters computed by the modified DBSCAN (black points belong to the background region and are removed by the depth filter), and the final segmentation obtained by reprojecting the clusters onto the 2D images

5 Conclusion

Considering the difficulty of segmenting 2D plant images captured in complicated natural environments, we propose a new algorithm based on binocular stereo reconstruction and the clustering of sparse, noisy point clouds. The combination of


Fig. 8 The experimental platform. a our self-developed tea picking robot. b SCARA robot arm. c Tea shearer. d CMOS sensor

the two cost volumes of SAD and ZNCC makes the computed disparity more accurate. In addition, since the reconstruction algorithm can be implemented in parallel, it executes in around 1.2 s. The fusion within a shared SLIC over-segmented region refines the disparity further. Tensor Voting theory helps infer the three structure types, surface, curve, and junction, and endows the proposed algorithm with the ability to cluster intersecting manifolds; the simulations have verified this. We tested a total of eight examples of real sceneries (one in Fig. 5, four in Fig. 7, and three in Fig. 9); the results show that our algorithm segments the individual leaves of interest in all examples, albeit with some small errors and defects in the final segmentation results. To reiterate, because of the similarity in color and texture of different leaves, algorithms based on 2D images perform relatively poorly. Our proposed algorithm makes use of depth information, which 2D images lack, and can therefore solve some complicated and challenging segmentation tasks more easily, since depth is a natural cue for distinguishing different objects.


Fig. 9 Results of the proposed algorithm on three tea leaf images. a, e, i The original 2D images of tea leaves. b, f, j The disparity maps. c, g, k The reconstructed point clouds. d, h, l The final segmentation results. Individual leaves are segmented successfully, though there are some errors at the boundaries

Acknowledgements This work is partly supported by the National Natural Science Foundation of China (No. 61873164, 51975367), and the Guangxi Natural Science Foundation of China (No. GKAD18281007).

References 1. Wang Q, Nuske S, Bergerman M et al (2012) Design of crop yield estimation system for Apple orchards using computer vision. Dallas, Texas, July 29–August 2012 2. Lee SH, Chang YL, Chan CS et al (2018) HGO-CNN: Hybrid generic-organ convolutional neural network for multi-organ plant classification. In: 2017 IEEE International Conference on Image Processing (ICIP). IEEE, 2018 3. Razavi S, Yalcin et al (2017) Using convolutional neural networks for plant classification. Sig Process Commun 2017 4. Mehta SS, Burks TF (2014) Vision-based control of robotic manipulator for citrus harvesting. Comput Electron Agric 102:146–158


5. Ji W, Qian Z, Xu B et al (2016) Apple tree branch segmentation from images with small gray-level difference for agricultural harvesting robot. Optik—Int J Light Electron Opt 127(23):11173–11182 6. Scarfe AJ, Flemmer RC, Bakker HHC et al (2009) Development of an autonomous kiwifruit picking robot. In: International conference on autonomous robots and agents. IEEE 7. Li L, Zhang Q, Huang D (2014) A review of imaging techniques for plant phenotyping. Sensors 14(11):20078–20111 8. Zhu F, Thapa S, Gao T, Ge Y, Walia H, Yu H (2018) 3D reconstruction of plant leaves for high-throughput phenotyping. 2018 IEEE International conference on big data. Seattle, WA, USA, pp 4285–4293 9. Pape JM, Klukas C (2014) 3-D histogram-based segmentation and leaf detection for Rosette plants. In: European conference on computer vision. Springer, Cham 10. Xu G, Zhang F, Shah SG, Ye Y, Mao H (2011) Use of leaf color images to identify nitrogen and potassium deficient tomatoes. Pattern Recogn Lett 32:1584–1590 11. Itakura K, Hosoi F (2018) Automatic leaf segmentation for estimating leaf area and leaf inclination angle in 3D plant images. Sensors 18(10):3576 12. Li D, Cao Y, Tang X et al (2018) Leaf segmentation on dense plant point clouds with facet region growing. Sensors 18(11):3625 13. Li D, Cao Y, Shi G et al (2019) An overlapping-free leaf segmentation method for plant point clouds. IEEE Access 7:129054–129070 14. Teng CH, Kuo YT, Chen YS (2011) Leaf segmentation, classification, and three-dimensional recovery from a few images with close viewpoints. Opt Eng 50(3):037003 15. Hu C, Pan Z, Li P (2019) A 3D point cloud filtering method for leaves based on manifold distance and normal estimation. Remote Sens 11(2):198 16. Fusiello A, Trucco E, Verri A (2000) A compact algorithm for rectification of stereo pairs. Mach Vision Appl 12(1):16–22 17. Hartley RI (1999) Theory and practice of projective rectification. Int J Comput Vision 35(2):115–127 18. 
Mallon J, Whelan PF (2005) Projective rectification from the fundamental matrix. Image Vision Comput 23(7):643–650 19. Fusiello A, Irsara L (2008). Quasi-euclidean uncalibrated epipolar rectification. In: 2008 19th International conference on pattern recognition. IEEE, pp 1–4 20. Scharstein D, Szeliski R (2002) A taxonomy and evaluation of dense two-frame stereo correspondence algorithms. Int J Comput Vision 47(1–3):7–42 21. De-Maeztu L, Villanueva A, Cabeza R (2011) Stereo matching using gradient similarity and locally adaptive support-weight. Pattern Recogn Lett 32(13):1643–1651 22. Shi H, Zhu H, Wang J et al (2016) Segment-based adaptive window and multi-feature fusion for stereo matching. J Algorithms Comput Technol 10(1):3–11 23. Dima C, Lacroix S (2002) Using multiple disparity hypotheses for improved indoor stereo. In: Proceedings 2002 IEEE International conference on robotics and automation (Cat. No. 02CH37292), vol 4. IEEE, pp 3347–3353 24. Bhalerao RH, Gedam SS, Buddhiraju KM (2017) Modified dual winner takes all approach for tri-stereo image matching using disparity space images. J Indian Soc Remote Sens 45(1):45–54 25. Campbell NDF, Vogiatzis G, Hernández C et al (2008) Using multiple hypotheses to improve depth-maps for multi-view stereo. European conference on computer vision. Springer, Berlin, Heidelberg, pp 766–779 26. Brandao M, Ferreira R, Hashimoto K et al (2015) On stereo confidence measures for global methods: evaluation, new model and integration into occupancy grids. IEEE Trans Pattern Anal Mach Intell 38(1):116–128 27. Hu X, Mordohai P (2012) A quantitative evaluation of confidence measures for stereo vision. IEEE Trans Pattern Anal Mach Intell 34(11):2121–2133 28. Han CH, Lee SW, Kang HS (2013) Low-complexity depth map generation for real-time 2D-to3D video conversion. In: 2013 IEEE International conference on consumer electronics (ICCE). IEEE, pp 185–186

Binocular Stereo Vision and Modified DBSCAN...

179

29. Achanta R, Shaji A, Smith K et al (2012) SLIC superpixels compared to state-of-the-art superpixel methods. IEEE Trans Pattern Anal Mach Intell 34(11):2274–2282 30. Tang CK, Medioni G (1998) Inference of integrated surface, curve and junction descriptions from sparse 3D data. IEEE Trans Pattern Anal Mach Intell 20(11):1206–1223 31. Lee MS, Medioni G (1998) Inferring segmented surface description from stereo data In: Proceedings. 1998 IEEE computer society conference on computer vision and pattern recognition (Cat. No. 98CB36231). IEEE, pp 346–352 32. Lee MS, Medioni G, Mordohai P (2002) Inference of segmented overlapping surfaces from binocular stereo. IEEE Trans Pattern Anal Mach Intell 24(6):824–837 33. King BJ (2008) Range data analysis by free-space modeling and tensor voting. Rensselaer Polytechnic Institute 34. Wang Y, Jiang Y, Wu Y et al (2011) Spectral clustering on multiple manifolds[J]. IEEE Trans Neural Networks 22(7):1149–1161 35. Schubert E, Sander J, Ester M et al (2017) DBSCAN revisited, revisited: why and how you should (still) use DBSCAN. ACM Trans Database Syst (TODS) 42(3):1–21 36. Ester M, Kriegel HP, Sander J et al (1996) A density-based algorithm for discovering clusters in large spatial databases with noise. Kdd 1996 Proc 96(34):226–231.

Short Papers and Technical Notes

Teaching-Free Intelligent Robotic Welding of Heterocyclic Medium and Thick Plates Based on Vision

Hu Lan, Huajun Zhang, Jun Fu, Libin Gao, and Liang Wei

Abstract Traditional welding robot programming is complicated and cannot adapt to the diverse geometries and sizes of the lifting lug components of port cranes. To address this, a composite sensing method is proposed that combines coarse positioning from a 2D macro-view panoramic photo with precise measurement and guidance from 3D laser vision. It overcomes several difficulties in robotic autonomous sensing, decision-making and welding, such as contour recognition of randomly placed arc, full-circle, elliptical and other heterocyclic medium and thick plates, groove angle and depth measurement, welding sequence arrangement, and multilayer multi-pass path and parameter planning, and realizes teaching-free intelligent welding of the lifting lug components of port cranes. This provides technical support for the "intelligent manufacturing" upgrade of China's high-end marine engineering equipment and has important value for engineering adoption.

Keywords Machine vision · Image processing · Welding seam tracking · Intelligent robot

1 Introduction

Large offshore engineering cranes play an important role in marine development, energy construction, major projects, and logistics and trade, and are indispensable supporting equipment for national strategies [1]. Among them, port cranes account for more than 70% of the global market. Shaped by global climate, national regulations, and shipping routes, this equipment is large, heavy, special-purpose, non-standard, and highly customized. According to statistics, each port crane is composed of thousands of parts. During equipment manufacturing and delivery, frequent lifting, transportation and lashing operations are required, so the product structure contains a large number of prefabricated and craft lifting lugs. The traditional manufacturing of these lug members relies entirely on manual labor: the production process is backward, large numbers of welders are needed to keep schedule, and weld quality is unstable. Labor costs rise year by year, and the shortage of workers for high-risk jobs such as welding and grinding has become increasingly acute, so there is an urgent need for automated equipment to improve quality and efficiency. Robotic, intelligent welding is the breakthrough.

It is well known that the programming of a traditional teach-and-playback welding robot is cumbersome [2] and cannot adapt to changes in workpiece geometry and size, so autonomous welding of heterocyclic lifting lug components with a conventional robot is a production pain point that enterprises urgently need to solve. In many practical scenarios the robot must recognize, analyze, and decide; in effect it needs a pair of "eyes" that emulate human vision for measurement and judgment. Machine vision gives the welding robot such eyes, allowing it to identify weld seams clearly and tirelessly and to perform the detection and guidance functions of the human eye, which is particularly important in flexible, efficient welding production [3–10]. To this end, a composite sensing method combining 2D macro-vision panoramic photo recognition with 3D laser vision for accurate measurement and tracking is proposed. Through model matching of the workpiece contour and bevel size, and planning of the motion path, welding sequence, bead arrangement, and process parameters, teaching-free intelligent welding of arc-shaped, full-circle, elliptical, and other ring-shaped lifting lug components is realized.

H. Lan (B) · H. Zhang · J. Fu · L. Gao · L. Wei
Technology Department, Shanghai Zhenhua Heavy Industries Co., Ltd, Shanghai 200125, China
e-mail: [email protected]

H. Lan
College of Engineering, Zhejiang Normal University, Jinhua 321004, Zhejiang, China

© Springer Nature Singapore Pte Ltd. 2021
S. Chen et al. (eds.), Transactions on Intelligent Welding Manufacturing, https://doi.org/10.1007/978-981-33-6502-5_11

2 Characteristics of Lifting Lug

Figures 1, 2 and 3 show common forms of port crane lifting lug components, each welded from one ear plate and two heavy plates. The material is Q345 low-alloy high-strength steel, and the weld form is 2FG, comprising circular (with straight edge), full-circle and oval double-sided fillet welds. The dimensions of the ear plate and heavy plate are given in Table 1. When the heavy plate of the lifting lug is thicker than 12 mm, a 45 ± 3° bevel is opened with a depth of 8–24 mm.

Fig. 1 Circular lifting lug components

Fig. 2 Full circular lifting lug component

Fig. 3 Oval lifting lug component

Table 1 Geometric dimensions of heavy plate of lifting lug

| Parameter  | Thickness T1 (mm) | Internal radius ϕ (mm) | Outer radius ϕ (mm) | Straight edge l (mm) |
| Size range | 10–60             | 60–250                 | 80–650              | 80–160               |

3 Composition of the Intelligent Robotic Welding System of the Lifting Lug

Figure 4 shows a teaching-free intelligent robotic welding workstation for a heterocyclic lifting lug component. The workstation is equipped with a 2D wide-angle panoramic vision recognition and positioning system and a 3D laser vision acquisition and tracking system. The two vision systems are installed at the middle of the gantry beam and on the visual inspection axis. The workstation adopts an 8-axis linkage gantry robot, comprising the robot body axes (6 axes) and the visual detection axes (2 axes). The X/Y/Z axes of the hybrid robot position the welding torch in space through bilateral synchronous servo drive, while two rotation axes (U/V) and a radius axis (R) of the wrist orient the torch; the detection axes (R1/Z1) position the laser vision sensor and adjust it for tracking. The control system is a multi-axis CNC system based on an industrial computer; bus control lets the lower computer (a multi-axis motion control card) perform real-time linkage control of the all-digital AC servo motors and stepper motors. The architecture of the intelligent robotic welding system of the lifting lug is shown in Fig. 5.

Fig. 4 Intelligent robotic welding system for lifting lug's medium and thick plate based on compound visual sensing

4 Intelligent Robotic Welding of Lifting Lug Based on Compound Visual Sensing

Fig. 5 Structure of intelligent robotic welding system for lifting lug's medium and thick plate

Fig. 6 Structure of intelligent robotic welding visual system of heterocyclic medium and thick plate

The autonomous welding workflow of the port crane lifting lug robot is shown in Fig. 6. Before welding, the gantry robot moves above the work surface (3 × 3 m), and the 2D macro-vision system photographs the lifting lug members placed anywhere on the work table; computer image processing extracts the outline and center point (coarse positioning) of each lug component. These are then compared with the geometric parameters (inner diameter, outer diameter and straight-edge length) of the lug models stored in the control system. If recognition succeeds, the system autonomously plans the motion path (the outer contour of the heavy plate) and the welding sequence of the lifting lugs; the priority follows the red, yellow and green markings, as shown in Fig. 7.

Fig. 7 2D panoramic visual contour recognition

After the welding sequence is planned, the gantry robot moves above the first lug to be welded (red mark) and the system reads the overall workpiece thickness; the detection axis then carries the 3D laser vision sensor along the previously planned motion path to accurately measure the thickness and bevel dimensions (bevel angle and depth) of the heavy plate, as shown in Fig. 8. The measured geometric parameters are compared with the thickness and groove size of the model stored in the control system. If recognition succeeds, the system plans the bead filling sequence and welding path and automatically calls the process parameters from the welding expert database (see Table 2 for a groove depth of 10 mm) to generate a welding task program. Once laser vision has identified the weld center (point C in Fig. 8) by scanning the zone ahead of the torch, the torch at the end of the robot wrist is guided to weld automatically along the weld center until the lug component is completed, as shown in Fig. 9. When multiple workpieces are placed on the workbench, the workstation automatically moves to the next lug to be welded and repeats laser vision acquisition, identification, guidance and welding until all welding is completed.
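The coarse-positioning step described above (extract each lug's outline and center, then match against stored model geometry) can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: it handles only the full-circle lug via an equivalent-circle radius, and `stored_models` is a hypothetical lookup of expected outer radii in pixels.

```python
import numpy as np

def coarse_locate_lug(mask, stored_models, tol=0.05):
    """Coarse positioning from a binary top-down mask (True = lug plate).
    The pixel centroid gives the coarse center point; the equivalent-circle
    radius sqrt(area/pi) is matched against stored outer radii (pixels)."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None                              # nothing on the table
    cx, cy = xs.mean(), ys.mean()                # coarse center point
    radius = np.sqrt(xs.size / np.pi)            # equivalent outer radius
    for name, model_radius in stored_models.items():
        # accept the first stored model within the relative tolerance
        if abs(radius - model_radius) / model_radius < tol:
            return name, (cx, cy), radius
    return None                                  # recognition failed
```

In the actual system the match would also use inner diameter and straight-edge length; the single-radius check here just illustrates the model-matching idea.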

Fig. 8 Collection of visual information based on 3D laser

5 Conclusion

(1) A multi-axis linkage hybrid gantry robot system was built for the welding tasks of the port crane's lifting lugs, which vary widely in geometry and parameters.
(2) Using a composite sensing method combining 2D macro-vision panoramic photo recognition with 3D laser vision measurement and tracking, the difficulties in autonomous perception, decision-making and execution for robotic welding of medium and heavy plate were overcome, including contour recognition, bevel angle and depth measurement, welding sequence arrangement, and multilayer multi-pass path planning. Teaching-free intelligent welding of heterocyclic lifting lug components has been realized.

Table 2 Welding process parameters (taking groove depth of 10 ± 1 mm as an example)

| Head | Welding current (A) | Arc voltage (V) | Speed (mm/min) | Radial distance (mm) | Torch angle (°) | Vertical distance (mm) | Swing cycle (s) | Swing above (mm) | Residence above (s) | Swing below (mm) | Residence below (s) |
| 1    | 270                 | 31              | 360            | 11.5                 | 43.0            | −2.0                   | 0.30            | 0.15             | 0.10                | 0.0              | 0.0                 |
| 2    | 270                 | 31              | 230            | 16.0                 | 35.0            | 5.5                    | 0.30            | 0.15             | 0.0                 | 0.15             | 0.0                 |
| 3    | 275                 | 31              | 280            | 4.0                  | 60.0            | 1.5                    | 0.0             | 0.0              | 0.0                 | 0.0              | 0.0                 |

Fig. 9 Robotic autonomous welding of heavy plates with lifting lugs: a laser-guided welding; b effect of multilayer and multi-pass welding

References

1. Liu L, Zhou YL, Zhang HJ et al (2017) Research on robotic bottom welding without back chipping technique of T-joint of thick plate of shore container crane. Hoisting Conveying Mach 03:101–104
2. Lan H, Tao ZW, Jian XX (2012) Arc welding robot welding technology for typical joints of construction machinery. Res Explor Lab 31(02):15–18
3. Chu HH, Wang ZY (2017) Research on weld seam forming dimension measurement and defect recognition based on active vision. Hot Working Technol 46(21):206–209
4. Luo X, Wang ZY (2016) Method of welding image fusion based on multi-exposure. Hot Working Technol 45(07):232–237
5. Zou YB, Chen T (2018) Laser vision seam tracking system based on image processing and continuous convolution operator tracker. Opt Lasers Eng 105(01):141–149
6. Zou YB, Wang YB, Zhou WL et al (2018) Real-time seam tracking control system based on line laser visions. Opt Laser Technol 103(01):182–192
7. Luo X, Wang ZY, Chu HH et al (2017) Welding pool image segmentation method based on shape priori active contour. Trans China Weld Inst 38(11):113–118
8. Zhang K, Chen YX, Gui H et al (2018) Identification of the deviation of seam tracking and weld cross type for the derusting of ship hulls using a wall-climbing robot based on three-line laser structural light. J Manuf Process 35(01):295–306
9. Fan JF, Jing FS, Yang L et al (2019) A precise seam tracking method for narrow butt seams based on structured light vision sensor. Opt Laser Technol 109(01):616–626
10. Luo X, Wang ZY (2015) Bright arc welding vision system based on multi-exposure. Trans China Weld Inst 36(09):91–94

In-Process Visual Monitoring of Penetration State in Nuclear Steel Pipe Welding

Liangrui Wang, Shu'ang Wang, Weihua Liu, Yuefeng Chen, and Huabin Chen

Abstract Pipe and tube products are widely used in nuclear power plants, and orbital pipe and tube welding is common in the nuclear power generation industry. This paper develops a visual monitoring system for weld penetration that observes the backside molten pool and back-weld bead. Because of pipe installation and construction constraints, the back-weld bead formation is difficult to observe directly. A pipe inner inspection robot equipped with a CMOS sensor and a laser scanner was therefore developed; it acquires the back-weld bead formation directly by rotating synchronously with the welding head. To visualize bead formation in real time, a multi-thread welding penetration monitoring software was developed. Finally, a series of verification experiments demonstrated the effectiveness of the monitoring system, which supports quality control of the all-position pipe welding process.

Keywords Pipe welding · Process monitoring · Inner pipe · Backside pool · Bead profile

1 Introduction

As a supplement to thermal and hydroelectric power generation, nuclear power has the advantages of low carbon emissions, environmental protection, low cost, and safety, and has broad prospects for development. During the construction of a nuclear power plant, the welding of nuclear power pipe is indispensable for installing nuclear power equipment, and its quality directly affects the quality and schedule of the plant. Currently, automatic all-position TIG welding of narrow-gap thick-walled nuclear pipes enables automatic remote welding of thick-walled alloy pipes such as main steam pipes, which greatly saves both time and labor.

Acquiring the back-weld bead formation in real time is critical to improving nuclear pipe welding quality. However, because of piping installation constraints, the welder has no real-time view of the weld pool in all-position pipe welding, making it difficult to adjust the welding parameters before a defect occurs; monitoring the weld penetration of nuclear pipe is therefore necessary. In recent years, research on pipe welding monitoring systems has grown rapidly. Bae [1], Baskoro [2], and Kamo [3] collected front-side molten pool images with a CCD camera placed ahead of the welding gun during GTAW pipe welding and monitored the process in real time. Fujita [4] developed a thick-walled pipe quality monitoring system including welding parameter acquisition, front-side vision sensing, bead formation monitoring, and LUT internal defect detection. To characterize the three-dimensional morphology of the molten pool, Zhang [5], Wang [6], and Andrew [7] developed different three-dimensional reconstruction algorithms for the molten pool surface, visually displaying its surface morphology. During nuclear pipeline welding, lack of fusion and concavity occur frequently in overhead and vertical-up welding (the 6–9 o'clock positions), seriously affecting welding quality. A front-side vision sensor cannot directly observe the back-weld bead, so real-time back-side monitoring is required. Baskoro [8] used a mirror rotating at the same speed as the welding head to project the back molten pool into a CCD camera; afterward, a fixed hyperboloid lens was used instead of the mirror [9]. These methods obtain a back molten pool image, but image clarity is low and pool details are lost.

To acquire the weld penetration of all-position pipe welding in real time, this paper presents a pipe inner inspection robot equipped with vision sensors. The rotating disk at the front of the robot rotates synchronously with the welding head, enabling timely collection and processing of back molten pool images. A laser profile scanner added to the sensing system obtains the three-dimensional formation of the back solidified weld bead from profile data. A welding penetration monitoring software synchronously displays and processes this information, monitoring the back-penetration state in real time to improve the quality of nuclear power pipeline welding.

L. Wang · H. Chen (B)
Shanghai Key Laboratory of Materials Laser Processing and Modification, Shanghai Jiao Tong University, Shanghai 200240, China
e-mail: [email protected]

S. Wang · W. Liu · Y. Chen
China Nuclear Industry Fifth Construction Co., Ltd, Shanghai 201512, China

© Springer Nature Singapore Pte Ltd. 2021
S. Chen et al. (eds.), Transactions on Intelligent Welding Manufacturing, https://doi.org/10.1007/978-981-33-6502-5_12

2 Orbital Pipe Welding Penetration Monitoring System

As shown in Fig. 1, the thick-walled all-position narrow-gap welding monitoring system comprises automatic narrow-gap welding equipment, the pipe inner inspection robot, and a dual-sensor signal acquisition system. The automatic welding system consists of the welding power source, automatic wire feeder, front vision panel, parameter control panel, shielding gas supply, welding head, and other components. The pipe inner inspection robot is a self-developed travel mechanism that carries the sensors to the specific welding position.

Fig. 1 Schematic of monitoring system

The dual-sensor signal acquisition system has two parts: a CMOS welding camera that collects back molten pool images, and a laser profile scanner that collects the back-weld bead profile. The acquisition system is fixed on the rotating disk at the front of the walking robot and can rotate 360° with the disk. The welding power source used in the experiments is a Gold Track VI machine produced by Liburdi. The welding head is a Polycar 60–2 orbital welding head, equipped with arc pressure control and torch swing control modules. The welding system includes a front vision sensor for observing the front molten pool in real time during welding, together with a parameter adjustment panel for tuning the welding parameters based on the front pool. For real-time monitoring of the back-weld bead formation, a pipe inner inspection robot was designed, as shown in Fig. 2a, with the visual sensing system installed on the rotating disk at its front; the robot itself is shown in Fig. 2b. The robot suits pipe inner diameters from 300 to 500 mm. It measures φ380 × 350 mm, with a diameter of 338 mm when fully opened and 288 mm when fully retracted. Electrically controlled components such as drivers and controllers are placed at the rear of the robot; the sensing system, including the CMOS camera and profile scanner, is placed at the front. The robot size parameters are given in Table 1.

Fig. 2 Pipe inner inspection robot: a CAD model; b pipe inspection robot

Table 1 Pipe inspection robot parameters

| Size (mm)  | Suitable diameter (mm) | Walk speed (mm/min) | Rotational angular velocity (°/s) | Weight (kg) | Load weight (kg) |
| φ380 × 350 | 280–350                | 50–278              | 0.06–6.84                         | 8           | 10               |

Because images must be acquired in the narrow space inside the pipe, the CMOS camera must be small and light and must image clearly and stably within a focal distance of 100–150 mm. Likewise, the profile scanner must be small and light and acquire the bead profile stably within a focal distance of 50–120 mm. A high-speed CMOS welding camera and a laser profile scanner meeting these requirements were selected.

3 Welding Penetration Monitoring Software

A thick-walled all-position pipe welding monitoring software was developed for real-time penetration monitoring. Its main modules are pre-weld scanning, back-pool vision sensing, back bead profile scanning, visual information processing, and feedback evaluation. The software is written in VS2017 using C# and the OpenCV image processing library. The interface is divided into a parameter setting area, an image acquisition area, an image processing area, and an information feedback area, as shown in Fig. 3.

Fig. 3 Monitoring software interface during welding

The monitoring software is built on multiple threads: a main thread, a welding parameter sub-thread, a bead profile processing sub-thread, a back-pool image processing sub-thread, and a forming feedback sub-thread, as shown in Fig. 4. The functions of each thread are as follows:

(1) Main thread: displays the software interface, performs the initial setting of CMOS/scanner/welding parameters, and displays penetration characteristics.
(2) Thread 1: communicates with the welding power source, collects real-time current/voltage during welding, and displays them in the monitoring software.
(3) Thread 2: communicates with the laser profile scanner, collects and displays the back bead profile in real time, computes profile characteristics with a feature point extraction algorithm, and feeds them back to the interface.
(4) Thread 3: communicates with the CMOS sensor, initializes camera parameters, collects and displays the back-pool image, analyzes the molten pool image to obtain the current pool characteristics, and feeds them back to the interface.
(5) Thread 4: establishes a mapping between welding parameters, forming characteristics, and bead forming quality from previous process tests, and suggests welding parameter adjustments for the current forming characteristics.

Fig. 4 Monitoring software flowchart
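The producer/consumer structure behind these threads can be sketched as below. The paper's software is C#; this Python sketch with a simulated scanner read is illustrative only, showing how a sensing thread hands data to a display thread through a bounded queue without blocking acquisition.

```python
import queue
import threading
import time

# Bounded queue between the scanner thread and the main/display thread.
profile_q = queue.Queue(maxsize=10)

def scanner_thread(stop):
    """Role of Thread 2: poll the profile scanner and hand profiles on.
    The sensor read is simulated here with a constant profile."""
    while not stop.is_set():
        profile = {"t": time.time(), "z": [0.0, 1.2, 0.1]}  # simulated read
        try:
            profile_q.put(profile, timeout=0.1)
        except queue.Full:
            pass          # drop the frame rather than stall the sensing loop

stop = threading.Event()
t = threading.Thread(target=scanner_thread, args=(stop,), daemon=True)
t.start()

latest = profile_q.get(timeout=1.0)   # main thread: take the newest profile
stop.set()
t.join(timeout=1.0)
```

Dropping frames when the queue is full keeps the sensing loop at the sensor's rate even if display or processing lags, which matters when the disk rotates in lockstep with the welding head.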

3.1 Functions of Welding Monitoring System

The monitoring software interface during welding is shown in Fig. 3 and implements the following functions:

(1) Pre-scan before welding: the rotating disk makes one revolution to acquire the groove profile data before welding; a groove profile processing algorithm then computes the groove characteristics and shows how they vary around the circumference.
(2) Visual information sensing: the CMOS camera and profile scanner SDK packages were used for secondary development; the software fetches images from the buffer and displays them at a specific frequency.
(3) Penetration characteristic extraction: for the molten pool image, the pool area is segmented by intensity thresholding, an ellipse is fitted to the back-pool contour to obtain the pool characteristics, and the result is fed back to the interface. For the weld bead profile, a flat-line fitting method extracts the back-bead characteristics: the left and right parent plates are fitted with a straight line, the point farthest from that line is taken as the reinforcement point, the bead width points are located from Z-axis differences, and the computed bead characteristics are fed back to the software.
(4) Feedback evaluation: fuzzy set rules analyze the current bead characteristics, determine the penetration state, and display it on the monitoring software.
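The pool-characteristic step in item (3), threshold segmentation followed by an ellipse fit, can be sketched with an equivalent ellipse computed from region moments. This is an illustrative reconstruction, not the authors' code: the intensity threshold is an assumed value, and a moment-based fit stands in for whatever ellipse-fitting routine the software uses.

```python
import numpy as np

def pool_characteristics(img, thresh=200):
    """Back-pool feature extraction sketch: the bright pool region is
    segmented by a fixed intensity threshold (assumed value), and an
    equivalent ellipse is derived from the region's second-order moments.
    For a filled ellipse, Var(x) along a principal axis equals (semi-axis)^2/4,
    so full axis length = 4 * sqrt(eigenvalue of the covariance)."""
    ys, xs = np.nonzero(img >= thresh)
    cx, cy = xs.mean(), ys.mean()                    # pool center
    cov = np.cov(np.vstack([xs - cx, ys - cy]))      # region covariance
    evals, _ = np.linalg.eigh(cov)                   # ascending eigenvalues
    minor, major = 4.0 * np.sqrt(np.maximum(evals, 0.0))
    return {"area": xs.size, "center": (cx, cy), "major": major, "minor": minor}
```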
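The flat-line fitting of item (3) can likewise be sketched. The flat-region fraction and the departure threshold below are assumed values chosen for illustration; the paper does not give them.

```python
import numpy as np

def bead_features(x, z, flat_frac=0.25):
    """Extract back-bead reinforcement and width from one scanner profile.
    x, z: profile coordinates (mm). The outer `flat_frac` of points on each
    side is assumed to lie on the parent plates; a line is fitted through
    them, the point farthest from that line is the reinforcement peak, and
    the bead width spans where z departs from the fitted baseline."""
    n = len(x)
    k = max(2, int(n * flat_frac))
    idx = np.r_[0:k, n - k:n]                    # left + right flat regions
    a, b = np.polyfit(x[idx], z[idx], 1)         # baseline z = a*x + b
    dev = z - (a * x + b)                        # deviation from baseline
    peak = int(np.argmax(np.abs(dev)))
    on_bead = np.abs(dev) > 0.1                  # departure threshold, mm (assumed)
    xs = x[on_bead]
    width = xs.max() - xs.min() if xs.size else 0.0
    return dev[peak], width, (x[peak], z[peak])
```

A positive reinforcement value corresponds to a convex bead; a near-zero or negative value flags the straight or concave profiles the results section associates with poor penetration.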

4 Result and Discussion

Based on the monitoring software and hardware platform, a series of verification experiments was conducted. All-position narrow-gap welding starts at the 1 o'clock position of the workpiece (near flat welding). The welding head and rotating disk rotate clockwise at the same speed, return to the arcing point after a 360° rotation, and rotate a further 2–3° before arc extinction, as shown in Fig. 5. The back molten pool image and back-weld bead profile obtained from the monitoring system are shown in Fig. 6. As Fig. 6a shows, the back molten pool is brighter than the base plate, and the pool size reflects the penetration state. Figure 6b shows the back-weld bead profile: the left and right lines are parent metal, and the irregular curve near the center is the back-weld bead, which characterizes bead formation. The bead profile is convex when penetration is normal; when the contour becomes straight or even concave, it indicates poor penetration.

Fig. 5 Experimental platform

Fig. 6 Visual information collected from monitoring system: a back molten pool; b back-weld bead profile
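The convex/straight/concave reading of the bead profile is what the feedback module formalizes. The paper uses fuzzy set rules; the sketch below is an illustrative triangular-membership version in which every breakpoint and the state names are invented for demonstration, not taken from the paper.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    return max(0.0, min((x - a) / (b - a), (c - x) / (c - b)))

def penetration_state(reinforcement, width):
    """Fuzzy evaluation sketch: combine reinforcement (mm) and bead width (mm)
    memberships per state and return the state with the highest firing
    strength. All membership breakpoints are assumed values."""
    states = {
        "insufficient": min(tri(reinforcement, -1.0, -0.5, 0.2),
                            tri(width, 0.0, 2.0, 5.0)),
        "normal":       min(tri(reinforcement, 0.0, 1.0, 2.0),
                            tri(width, 4.0, 6.5, 9.0)),
        "excessive":    min(tri(reinforcement, 1.5, 2.5, 4.0),
                            tri(width, 8.0, 10.0, 14.0)),
    }
    return max(states, key=states.get)
```

Min for the AND of rule antecedents and max over rule outputs is the standard Mamdani-style composition; a production system would tune the breakpoints from the prior process tests mentioned for Thread 4.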

5 Conclusions

In this paper, a real-time machine-vision monitoring system for back penetration in all-position pipe welding is proposed. From the experimental results, the following conclusions can be drawn:

(1) A thick-walled pipe weld penetration monitoring system based on a pipe inner inspection robot was designed. By rotating the disk carrying the CMOS sensor and laser scanner synchronously with the welding head, real-time collection of the back molten pool and back bead profile is realized.
(2) A multi-thread monitoring and feedback software for thick-walled pipe weld penetration was developed, realizing real-time display and processing of the back molten pool and back bead profile.
(3) The system stably obtains sound back molten pool images and back bead profiles in actual orbital pipe welding.

References

1. Bae KY, Lee TH, Ahn KC (2002) An optical sensing system for seam tracking and weld pool control in gas metal arc welding of steel pipe. J Mater Process Technol 120(1–3):458–465
2. Baskoro AS, Erwanto, Winarto (2011) Monitoring of molten pool image during pipe welding in gas metal arc welding (GMAW) using machine vision. In: 2011 International conference on advanced computer science and information systems
3. Kamo K et al (2004) Development of automatic GTAW technology using visual sensor in narrow gap all position. Weld World 48(9):20–27
4. Fujita Y et al (2011) Development of a welding monitoring system for in-process quality control of thick-walled pipe. Weld World 56(11):15–25
5. Zhang WJ, Wang X, Zhang Y (2013) Analytical real-time measurement of a three-dimensional weld pool surface. Measur Sci Technol 24(11):115011
6. Wang X, Li R (2014) Intelligent modelling of back-side weld bead geometry using weld pool surface characteristic parameters. J Intell Manuf 25(6):1301–1313
7. Neill AM, Steele JPH (2016) Modeling and simulation of three dimensional weld pool reconstruction by stereo vision. In: 2016 IEEE International conference on advanced intelligent mechatronics (AIM)
8. Baskoro AS, Kabutomori M, Suga Y (2008) Monitoring of backside image of molten pool during aluminum pipe welding using vision sensor. 580–582:379–382
9. Baskoro AS et al (2009) An application of genetic algorithm for edge detection of molten pool in fixed pipe welding. Int J Adv Manuf Technol 45(11):1104

Information for Authors

Aims and Scopes

Transactions on Intelligent Welding Manufacturing (TIWM) is authorized by Springer for periodical publication of research papers and monographs on intelligentized welding manufacturing (IWM). TIWM is a multidisciplinary and interdisciplinary publication series focusing on intelligent modeling, control, monitoring, evaluation, and optimization of welding manufacturing processes within the following scopes:

• Scientific theory of intelligentized welding manufacturing
• Planning and optimizing of welding techniques
• Virtual and digital welding/additive manufacturing
• Sensing technologies for welding process
• Intelligent control of welding processes and quality
• Knowledge modeling of welding process
• Intelligentized robotic welding technologies
• Intelligentized, digitalized welding equipment
• Telecontrol and network welding technologies
• Intelligentized welding technology applications
• Intelligentized welding workshop implementation
• Other related intelligent manufacturing topics

Submission

Manuscripts must be submitted electronically as a WORD file via the online submission system: https://ocs.springer.com/ocs/en/home/TIWM2017. Further assistance can be obtained by emailing the Editorial Office of TIWM, Dr. Yan ZHANG: [email protected], or any of the Editors-in-Chief of TIWM.

© Springer Nature Singapore Pte Ltd. 2021
S. Chen et al. (eds.), Transactions on Intelligent Welding Manufacturing, Transactions on Intelligent Welding Manufacturing, https://doi.org/10.1007/978-981-33-6502-5


Style of Manuscripts

TIWM includes two types of contributions within the aforementioned scopes: periodical proceedings of research papers, and research monographs. Research papers comprise four types of contributions: Invited Feature Articles, Regular Research Papers, Short Papers, and Technical Notes. Invited Feature Articles should preferably be limited to 20 pages, Regular Research Papers to 12 pages, and Short Papers and Technical Notes to 6 pages each. The cover page should contain: paper title, authors' names, affiliations, addresses, telephone number and email address of the corresponding author, abstract (100–200 words), keywords (3–6 words), and the suggested technical area.

Format of Manuscripts

Manuscripts must be well written in English and should preferably be prepared from the template “splnproc1110.dotm”, which can be downloaded from the website: https://rwlab.sjtu.edu.cn/tiwm/index.html. The manuscript, including text, figures, tables, references, and appendixes (if any), must be submitted as a single WORD file.

Originality and Copyright

Manuscripts must be original and must not be under simultaneous consideration by any other journal. Authors are responsible for obtaining permission to use drawings, photographs, tables, and other previously published materials. It is the policy of Springer and TIWM to own the copyright of all contributions it publishes and to permit and facilitate appropriate reuse of such published materials by others. To comply with the relevant copyright law, authors are required to sign a Copyright Transfer Form before publication. This form is supplied to the authors by the editor after a paper has been accepted for publication, and it grants authors and their employers the right to reuse their own work for noncommercial purposes such as classroom teaching.

Author Index

C
Chen, Chao, 115, 131
Chen, Huabin, 193
Chen, Shanben, 3, 87, 115, 131, 163
Chen, Shaojie, 35
Chen, Yuefeng, 193

F
Feng, Zhiqiang, 75, 99, 149
Fu, Jun, 183

G
Gao, Libin, 183

H
Han, Junfeng, 75, 99, 149
Han, Xiangxi, 75
Hou, Zhen, 3, 87
Huang, Weiming, 99, 149

J
Jiang, Zisheng, 115, 131
Jiao, Ziquan, 75, 99, 149

L
Lan, Hu, 183
Liu, Weihua, 193
Luo, YiLei, 131
Lv, Na, 35, 115, 163

Q
Qin, Rui, 49

R
Ren, Wenjing, 49

T
Tao, Chengyu, 163
Tao, Wei, 35

W
Wang, Liangrui, 193
Wang, Shu’ang, 193
Wei, Liang, 183
Wen, Guangrui, 49

X
Xiao, Runquan, 3, 87
Xu, Fengjing, 3
Xu, Yanling, 3, 87

Y
Yang, Zhe, 49
Yuan, Yujiao, 49

Z
Zhang, Huajun, 3, 183
Zhang, Zhifen, 49
Zhao, Hui, 35
