Conference Proceedings of the Society for Experimental Mechanics Series
Javad Baqersad Dario Di Maio Editors
Computer Vision & Laser Vibrometry, Volume 6
Proceedings of the 41st IMAC, A Conference and Exposition on Structural Dynamics 2023
Conference Proceedings of the Society for Experimental Mechanics Series Series Editor Kristin B. Zimmerman Society for Experimental Mechanics, Inc., Bethel, CT, USA
The Conference Proceedings of the Society for Experimental Mechanics Series presents early findings and case studies from a wide range of fundamental and applied work across the broad range of fields that comprise Experimental Mechanics. Series volumes follow the principal tracks or focus topics featured in each of the Society's two annual conferences: IMAC, A Conference and Exposition on Structural Dynamics, and the Society's Annual Conference & Exposition, and will address critical areas of interest to researchers and design engineers working in all areas of Structural Dynamics, Solid Mechanics and Materials Research.
Javad Baqersad • Dario Di Maio Editors
Computer Vision & Laser Vibrometry, Volume 6 Proceedings of the 41st IMAC, A Conference and Exposition on Structural Dynamics 2023
Editors Javad Baqersad Kettering University Flint, MI, USA
Dario Di Maio MS3 University of Twente ENSCHEDE, Overijssel, The Netherlands
ISSN 2191-5644 ISSN 2191-5652 (electronic) Conference Proceedings of the Society for Experimental Mechanics Series ISBN 978-3-031-34909-6 ISBN 978-3-031-34910-2 (eBook) https://doi.org/10.1007/978-3-031-34910-2 © The Society for Experimental Mechanics, Inc. 2024 This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Switzerland AG The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland Paper in this product is recyclable.
Preface
Computer Vision and Laser Vibrometry represents one of ten volumes of technical papers presented at the 41st IMAC, A Conference and Exposition on Structural Dynamics, organized by the Society for Experimental Mechanics, held February 13–16, 2023. The full proceedings also include volumes on Nonlinear Structures and Systems; Dynamics of Civil Structures; Model Validation and Uncertainty Quantification; Dynamic Substructures; Special Topics in Structural Dynamics and Experimental Techniques; Dynamic Environments Testing; Sensors and Instrumentation and Aircraft/Aerospace Testing Techniques; Topics in Modal Analysis and Parameter Identification; and Data Science in Engineering. This volume of proceedings shares advances in the areas of computer vision, laser vibrometry, digital image correlation, photogrammetry, and optical techniques, and the application of these techniques to dynamic measurements, structural dynamics, and structural health monitoring. The organizers would like to thank the authors, presenters, session organizers, and session chairs for their participation in this track.

Flint, MI, USA
Overijssel, The Netherlands
Javad Baqersad Dario Di Maio
Contents
1 Optical Motion Magnification: A Comparative Study and Application for Vibration Analysis . . . . 1
  Tymon Nieduzak, Nicholas A. Valente, Christopher Niezrecki, and Alessandro Sabato
2 To the Moon! Space Launch System Modal Testing with Video and Motion Magnification . . . . 9
  Justin G. Chen, Raul Rios, Kevin E. Franks, and Eric C. Stewart
3 Effects of Image-Pair Processing Styles on Phase-Based Motion Extraction . . . . 17
  Sean Collier and Tyler Dare
4 Experiment-based optical full-field receptances in the approximation of sound radiation from a vibrating plate . . . . 27
  Alessandro Zanarini
5 Measurements of Panel Vibration with DIC and LDV Imaged Through a Mach 5 Flow . . . . 39
  Marc A. Eitner, Yoo-Jin Ahn, Noel T. Clemens, Jayant Sirohi, and Vikrant Palan
6 Experimental Quantification of Sensor-Based Stereocameras' Extrinsic Parameters Calibration . . . . 49
  Fabio Bottalico, Christopher Niezrecki, Kshitij Jerath, Yan Luo, and Alessandro Sabato
7 Toward Camera-Based Monitoring of Abdominal Aortic Aneurysms (AAAs) . . . . 57
  Max Gille and Daniel J. Rixen
8 Measuring 3D Vibrations Amplitude with a Single Camera and a Model of the Vibrating Structure . . . . 63
  Franck Renaud, Stefania Lo Feudo, and Jean-Luc Dion
9 Risk tolerance mapping in dynamically loaded structures as excitation dependency by means of full-field receptances . . . . 69
  Alessandro Zanarini
10 Lightweight Internal Damage Segmentation Using Thermography with and Without Attention-Based Generative Adversarial Network . . . . 83
  Rahmat Ali and Young-Jin Cha
11 A Novel Framework for the Dynamic Characterization of Civil Structures Using 3D Terrestrial Laser Scanners . . . . 91
  Khalid Alkady, Christine E. Wittich, and Richard L. Wood
12 Orthorectification for Dense Pixel-Level Spatial Calibration for Video-Based Structural Dynamics . . . . 97
  David Mascareñas and Andre Green
13 Digital Twins for Photorealistic Event-Based Structural Dynamics . . . . 107
  Allison Davis, Edward Walker, Marcus Chen, Moises Felipe, David Mascareñas, Fernando Moreu, and Alessandro Cattaneo
14 Multi-path Vibrometer-Based Strain Measurement Technique for Very High Cycle Fatigue (VHCF) Testing . . . . 115
  Kilian Shambaugh, Arend von der Lieth, Joerg Sauer, and Vikrant Palan
15 Measurement of Airborne Ultrasound Using Laser Doppler Vibrometry . . . . 121
  Zihuan Liu, Xiaoyu Niu, Yuqi Meng, Ehsan Vatankhah, Donghwan Kim, and Neal A. Hall
16 Modal Identification of a Turbine Blade with a Curved Surface Under Random Excitation by a 3D CSLDV System and the Extended Demodulation Method . . . . 127
  Ke Yuan and Weidong Zhu
17 Operational Modal Analysis of a Rotating Structure Using Image-Based Tracking Continuously Scanning Laser Doppler Vibrometry via a Novel Edge Detection Method . . . . 141
  L. F. Lyu, G. D. Higgins, and W. D. Zhu
18 Detection of Missing Rail Fasteners Using Train-Induced Ultrasonic Guided Waves: A Numerical Study . . . . 151
  Chi Yang, Korkut Kaynardag, and Salvatore Salamone
19 Dynamic Mode Decomposition for Resonant Frequency Identification of Oscillating Structures . . . . 155
  Nicholas A. Valente, Celso T. do Cabo, Zhu Mao, and Christopher Niezrecki
20 Time-Inferred Autoencoder for Construction and Prediction of Spatiotemporal Characteristics from Dynamic Systems Using Optical Data . . . . 163
  Nitin Nagesh Kulkarni, Nicholas A. Valente, and Alessandro Sabato
21 Rotational Operating Deflection Shapes Analysis with High-Speed Camera . . . . 171
  Peter Bogatikov, Daniel Herfert, and Maik Gollnick
Chapter 1
Optical Motion Magnification: A Comparative Study and Application for Vibration Analysis

Tymon Nieduzak, Nicholas A. Valente, Christopher Niezrecki, and Alessandro Sabato
Abstract Optical motion magnification (OMM) is a non-contact monitoring technique that is gaining popularity in the vibration analysis and structural health monitoring (SHM) communities. Using ordinary video of a targeted structure, OMM algorithms can extract imperceptible motion and identify issues that may lead to structural failure. This technique can produce not only qualitative magnified videos revealing exaggerated displacements but also quantitative assessments of displacements from which frequency content can be obtained. The objective of this work is to investigate the performance of OMM as a vibration monitoring technique that has application for analysis of large-scale structures. Therefore, a comparative study between two commercially available OMM systems and one open-source algorithm is performed to determine the accuracy of those techniques compared to traditional contact-based measurements. Extensive laboratory tests performed on a structure oscillating at a known amplitude and frequency showed that OMM can measure displacements as low as three microns with 95% correlation in the time domain when compared to traditional contact-based sensing. An example of the potential of OMM to be a cost-effective, non-invasive, and quick condition monitoring technique for large structures (e.g., wind turbines and bridges) is discussed. By using OMM, the issues inherent in standard testing/monitoring procedures can be eliminated, allowing the time and cost of SHM to be significantly reduced. Keywords Motion Magnification · Computer Vision · Structural Health Monitoring · Vibration and Displacement Monitoring
1.1 Introduction

The ability to accurately measure a structure's motion with relative ease and without physically accessing the targeted system could revolutionize the field of vibration analysis and structural health monitoring (SHM). This goal may potentially be achieved by using optical techniques such as phase-based motion magnification (PMM), which provide full-line-of-sight measurements. Even large structures with low-frequency oscillations at very small magnitudes (e.g., bridges, wind turbines, machinery, and large-scale structures) can be precisely monitored by utilizing PMM, which can successfully amplify subtle motion [1]. Non-contact sensing has been expanded upon significantly in the past decades, moving away from the costly and invasive traditional condition monitoring techniques. The conventional inspection and excavation methods of wind turbine foundations are costly, destructive, and often inconclusive [2, 3]. Usually, contact-based transducers such as accelerometers, linear variable differential transformers (LVDT), and strain gauges are used to measure the structure's response. These standard sensor methods produce accurate and dependable results, but are limited as they require considerable setup time and only produce results at the points of contact [4, 5]. Other non-destructive testing techniques like ultrasonic testing or ground-penetrating radar (GPR) have substantial shortcomings due to the complexity of the equipment and setup to gather the data correctly [6]. Optical measurements have the potential to provide solutions to several of these issues, utilizing a relatively simple apparatus and producing evident results that can be successfully visualized. This work focuses on assessing the measurement accuracy of optical motion magnification (OMM) in quantifying subtle displacement that cannot be seen with the naked eye. In particular, two commercially available OMM systems (i.e., Motion Amplification by RDI Technologies and MEscope by Vibrant Technologies) are compared with an open-source PMM algorithm developed by Wadhwa et al. [1] combined with a phase motion estimation (PME) algorithm.
T. Nieduzak () · N. A. Valente · C. Niezrecki · A. Sabato Department of Mechanical Engineering, University of Massachusetts Lowell, Lowell, MA, USA e-mail: [email protected] © The Society for Experimental Mechanics, Inc. 2024 J. Baqersad, D. Di Maio (eds.), Computer Vision & Laser Vibrometry, Volume 6, Conference Proceedings of the Society for Experimental Mechanics Series, https://doi.org/10.1007/978-3-031-34910-2_1
The efficacy of each procedure is empirically determined and the potential for applications is discussed.
1.2 Background

OMM can be utilized in a variety of different ways to synthetically amplify subtle motion seen in video. PMM has been effectively employed for SHM applications and is therefore the method focused on in this chapter [7, 8]. This algorithm stems from linear motion magnification, which was first introduced by Liu et al. [9]. The technique was expanded on by Wadhwa et al. [1] to include the manipulation of phase, allowing noise to be translated instead of being amplified alongside the motion of interest. These methods have been successively modified and improved (in terms of speed and noise) [10–12] and used in a broad range of structural dynamics applications. Structures experiencing subtle oscillations are of particular interest due to PMM having the ability to identify resonant frequencies and obtain full-line-of-sight operating deflection shapes (ODSs) [8, 13–18]. The robust nature of this non-invasive technology has been demonstrated in several studies [7, 19–21], some of which already investigate wind turbine components [22]. Work has been done to enable quantitative measurements to be extracted from the magnified videos [23], and a comparison between the quantification methods of three different systems (open-source PMM-PME algorithm, Motion Amplification, and MEscope) is detailed in this study. Further information on the fundamentals and theoretical background of PMM can be found in the previously cited references.
1.2.1 Open Source OMM Algorithm

The first OMM system tested is an open-source motion magnification algorithm from Wadhwa et al. [1] quantified with a PME algorithm [24–26]. This code accomplishes PMM by first employing a complex steerable pyramid and then magnifying subtle motion within a particular sub-band of frequencies. The algorithm produces a qualitative motion-magnified video based on the original video file given the specified sampling rate (i.e., frame rate) of the camera. Also, for the algorithm to work properly, the user must specify the low and high cutoff frequencies and the magnification factor. This magnification factor can be adjusted for each experiment's spatial wavelength based on the motion magnitude in relation to the frame size, as discussed in the work by Valente et al. [23]. In order to generate quantitative data, a PME algorithm using an orientation-specific kernel is employed. These steps are visualized in Fig. 1.1. Multiple commercial OMM software packages have been developed, namely the Motion Amplification and MEscope systems. These packages generate quantitative non-magnified time waveform and frequency datasets, although they operate using different calculation methods.
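To make the quantification (PME) step of this pipeline concrete, a minimal single-scale, single-orientation sketch is given below. It stands in for the multi-scale complex steerable pyramid of the actual open-source code: the Gabor kernel parameters, ROI handling, and amplitude weighting are our illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel_2d(wavelength_px=8.0, sigma_px=4.0, size=33):
    """Complex quadrature (Gabor) kernel whose phase varies along the vertical image axis."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    carrier = np.exp(2j * np.pi * y / wavelength_px)          # complex sinusoid along rows
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma_px**2))   # Gaussian window
    return carrier * envelope

def phase_displacement(frames, roi, wavelength_px=8.0):
    """Estimate sub-pixel vertical displacement (px) of an ROI from the temporal phase change."""
    kernel = gabor_kernel_2d(wavelength_px=wavelength_px)
    r0, r1, c0, c1 = roi
    disp, ref_resp = [], None
    for frame in frames:                                       # frames: 2-D grayscale arrays
        resp = fftconvolve(frame.astype(float), kernel, mode="same")[r0:r1, c0:c1]
        if ref_resp is None:
            ref_resp = resp
        dphi = np.angle(resp * np.conj(ref_resp))              # wrapped frame-to-reference phase
        w = np.abs(resp)**2                                    # weight by local texture strength
        dphi_mean = np.sum(w * dphi) / np.sum(w)
        disp.append(dphi_mean * wavelength_px / (2.0 * np.pi)) # phase (rad) -> displacement (px)
    return np.asarray(disp)
```

The amplitude weighting is a common design choice in phase-based estimation: pixels with little local texture carry unreliable phase and are therefore down-weighted.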
1.2.2 RDI Motion Amplification

The RDI Motion Amplification software requires the selection of a region of interest (ROI) as seen in the next section's Fig. 1.4a, generating results based on the averaged motion of pixels in the ROI. This is done by calculating the difference in pixel intensity from one reference frame to all other frames in sequence [27]. These differences are stored in a matrix summed in quadrature, yielding the total difference in pixel intensities between the frames. This process is performed on every frame of the video until the frames that generate the highest difference values are identified. The pixels with the greatest differences are selected and their intensities are tracked throughout the video to identify the periodicity of the motion.
Fig. 1.1 Open-source motion magnification algorithm flowchart
Fig. 1.2 RDI Motion Amplification algorithm flowchart
The locations of peaks/maxima in the motion's waveform can now be detected, terminating the first pass of the algorithm. A second pass is performed to align the initial frame with a peak and the comparison frames with a frame that is ½ of a waveform apart. This second pass makes the process adaptive. Displacements are then calculated using the working distance as a conversion factor. The Motion Amplification system also integrates a frequency-based filtering system during post-processing to isolate and magnify the frequencies of interest. The algorithm's procedure is detailed in Fig. 1.2.
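The quadrature-summed frame-differencing idea described above can be sketched roughly as follows. This is a hypothetical reconstruction for illustration only, not RDI's proprietary implementation; the ROI handling and the FFT-based periodicity estimate are our own simplifications.

```python
import numpy as np

def roi_difference_signal(frames, roi):
    """Sum-in-quadrature intensity difference between each frame and a reference frame,
    restricted to a region of interest. frames: sequence of 2-D grayscale arrays."""
    r0, r1, c0, c1 = roi
    ref = frames[0][r0:r1, c0:c1].astype(float)
    signal = []
    for frame in frames[1:]:
        diff = frame[r0:r1, c0:c1].astype(float) - ref
        signal.append(np.sqrt(np.sum(diff**2)))    # quadrature sum over the ROI
    return np.asarray(signal)

def dominant_period_frames(signal):
    """Rough estimate of the motion's periodicity (in frames) from the difference signal."""
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
    k = np.argmax(spectrum[1:]) + 1                # skip the DC bin
    return len(signal) / k
```

Scaling the resulting pixel-level motion to engineering units would then rely on the working distance or a calibration target, as noted above.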
1.2.3 Vibrant Technologies MEscope

The Vibrant Technologies MEscope OMM method currently uses the calcOpticalFlowFarneback method from OpenCV 4.3 [28]. First, a single polynomial expansion is employed to approximate the neighborhoods of each pixel. An ideal global translation is applied to these calculated polynomials to estimate the displacement. The single polynomial is then replaced with local polynomial approximations, and expansion coefficients are calculated. Deformations are then calculated at each pixel using multi-scale displacement estimation [29]. This is accomplished by starting with a coarse scale, generating a rough displacement estimate, and then propagating through finer scales to obtain estimates that are progressively more accurate. Since 1920 × 1080 HD video generates over 4 million measurement points, the video frame is mapped onto a rectangular surface and only a portion of the pixels are saved. Using Nvidia GPUs that are Compute Unified Device Architecture (CUDA) enabled, the computational process can be run 5–10× faster than on a CPU [28]. Operating Deflection Shape Frequency Response Functions (ODSFRFs) can be computed using tri-spectrum averaging to reduce measurement noise. The transmissibilities are multiplied by the magnitude of the reference auto power spectra, which delivers a measurement of the true response with phases relative to the reference measurement. However, in this research, the measurements are output only and therefore we rely on the fast Fourier transform (FFT). MEscope uses a point-based tracking system instead of an ROI. The software generates a separate time waveform for every point identified in the video. The datasets of each point are based on a group of pixels, the size of the group being a function of the number of points applied on the video frames. In order to generate the motion of a region, the exported point-specific time waveform datasets must be manually averaged. The flowchart of this procedure is presented in Fig. 1.3.
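Because MEscope's internals are proprietary, only the OpenCV building block it is reported to use can be illustrated directly. The sketch below applies cv2.calcOpticalFlowFarneback to consecutive frames and manually averages a set of user-selected points into a region-level waveform; the parameter values and point handling are illustrative assumptions, not MEscope's settings.

```python
import cv2
import numpy as np

def point_displacements(gray_frames, points):
    """Track vertical displacement (px) of (row, col) points with Farneback dense optical flow.

    gray_frames: list of 2-D uint8 grayscale frames; points: list of (row, col) tuples.
    Returns the average displacement time history over the selected points.
    """
    prev = gray_frames[0]
    histories = {p: [0.0] for p in points}
    for frame in gray_frames[1:]:
        # pyr_scale=0.5, levels=3, winsize=15, iterations=3, poly_n=5, poly_sigma=1.2, flags=0
        flow = cv2.calcOpticalFlowFarneback(prev, frame, None, 0.5, 3, 15, 3, 5, 1.2, 0)
        for (r, c) in points:
            # flow[..., 1] is the vertical (row) component; accumulate frame-to-frame motion
            histories[(r, c)].append(histories[(r, c)][-1] + flow[r, c, 1])
        prev = frame
    # averaging across points in post-processing, as described for MEscope above
    return np.mean([histories[p] for p in points], axis=0)
```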
1.3 Experiment Design

The performance of the three approaches described in Sect. 1.2 (i.e., open-source, Motion Amplification, and MEscope) is evaluated by assessing the minute motion of a structure oscillating at a known frequency and amplitude. A piezoelectric accelerometer with a sensitivity of 1000 mV·g−1 is attached to the structure to collect reference data. All three OMM algorithms analyze the same video file recorded using the Iris MX high-speed camera produced by RDI. The displacement time histories gathered from each OMM system are plotted alongside each other, and the Time Response Assurance Criterion (TRAC) values are used to determine the accuracy of the respective OMM algorithms with respect to the reference measurement.
Fig. 1.3 Vibrant Technologies MEscope algorithm flowchart
Fig. 1.4 Experimental setup used to evaluate the accuracy of the three OMM algorithms: (a) camera view of the setup detailing the operational principles of the algorithms using either ROI or points to extract the sub-pixel motion of the targeted structure; (b) oblique view of setup detailing the experimental components and working distance
These experiments include observation and analysis of the vertical motion of a small 3D-printed cube attached to a shaker, as shown in Fig. 1.4. The Motion Amplification system can only utilize video recorded by RDI proprietary cameras, the Iris MX model being used in this case. This is a 5.3-megapixel, high-speed camera set to shoot at the maximum sampling frequency (without cropping from default) of 467 fps. For the data to be usable by MEscope and the open-source code, the video must be exported at 1 fps and zero magnification. As seen in Fig. 1.4a, both the Motion Amplification and the open-source quantifications are based on an ROI, while the MEscope quantification is based on a series of points. As shown in Fig. 1.4b, a field of view (FOV) having a diagonal size of ~1 m is generated by using a camera working distance of 0.67 m. A 25 mm focal length lens was utilized as it generated a suitably sized FOV with respect to the size of the structure of interest. The accelerometer is glued to the top surface of the cube and used both as a reference measurement and as a target for the OMM videos. In plane with the front face of the cube, a ruler and a calibration target are positioned to provide the scale that is needed to generate displacement time histories for the MEscope and open-source algorithms. At a distance of 0.67 m, the optical target yields a calibration factor of 0.269 mm/pixel; here, the factor is a function of the specific target found in the field of view. This is utilized as a conversion factor to generate displacement data in real units, permitting the quantification of the qualitative video data and enabling direct comparisons with the results of the other OMM systems. An equally spaced grid is placed in the background of the FOV to exhibit the effect of amplified motion on static structures.
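For completeness, the scale conversion described here is a one-line operation. The calibration factor below is the value reported in the text; the final line simply illustrates how small the tested displacements are in pixel units.

```python
MM_PER_PIXEL = 0.269   # calibration factor reported for the 0.67 m working distance

def to_millimeters(displacement_px):
    """Convert an optical displacement time history from pixels to millimeters."""
    return [d * MM_PER_PIXEL for d in displacement_px]

# a 3 μm (0.003 mm) physical displacement corresponds to only ~0.011 px on the sensor
print(0.003 / MM_PER_PIXEL)
```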
During the tests, the targeted object was displaced using a 20 Hz sine wave at different amplitudes (i.e., 15.6, 11.6, 8.0, 6.2, 4.6, and 3.0 μm, these values being defined by whole number voltage inputs to the shaker), all of which were undetectable by the naked eye. The high-speed camera records the vibration for 5 seconds and, afterward, the video files are processed using the three selected algorithms.
1.4 Analysis

Because the OMM algorithms provide results in the form of displacement, the data measured with the accelerometer were converted into displacement using Eq. (1.1):

d(t) = −a(t) / ω²    (1.1)
where d(t) is the displacement of the cube at time t corresponding to the acceleration a(t), ω is the angular excitation frequency (corresponding to the 20 Hz excitation in the performed tests), and the negative sign compensates for the shift in phase. Due to the Iris MX camera having a maximum sampling rate of 467 fps without cropping the field of view, the displacement time histories obtained from the OMM analyses had to be upsampled to 500 fps to correlate with the accelerometer's data. The reference displacement waveform and the optical displacement waveforms are then aligned and plotted; TRAC values were recorded at the six different displacement magnitudes. Figure 1.5 shows the four aligned signals (three obtained from the optical data and one recorded with the accelerometer) for the six tested displacements. Figure 1.5 shows that the correlation between optical and accelerometer datasets generally decreases as the displacement magnitude decreases. This is likely due to the change in pixel intensities getting progressively smaller while the sensitivity of the optical sensor remains the same, thus producing more inaccuracies at reduced displacements. At 15.6 μm, the time waveforms overlap very well and all OMM algorithms have TRAC values above 98%. As the displacement magnitude reduces, the discrepancies between the waveforms increase, with the MEscope waveform showing the most substantial differences as the displacement is lowered to 3.0 μm. Upon resampling and aligning each optical dataset with the accelerometer's reference data, the TRAC values were computed and plotted against the displacement magnitudes. As displayed in Fig. 1.6, the Motion Amplification system performed the best overall, with TRAC values consistently above 95% even at a displacement of 3 μm. The open-source method produced very similar results to Motion Amplification but had slightly less correlation to the reference data at the lowest amplitude (i.e., TRAC dropping to 89.9% at 3.0 μm). MEscope performs with similar accuracy as the other two algorithms at the larger displacements, but the correlation started to decrease for displacements below 6.2 μm and reached a low of 77.2% at 3.0 μm. The reason why MEscope did not operate as well as the other systems is likely due to how the pixel values at the location of interest are averaged in this comparative study.
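A minimal sketch of the processing chain just described (Eq. (1.1), resampling of the optical data to the accelerometer rate, and the TRAC metric) is given below. The resampling routine and function names are our assumptions, not the authors' exact implementation.

```python
import numpy as np
from scipy.signal import resample

def accel_to_displacement(a, f_hz=20.0):
    """Eq. (1.1): displacement of a sinusoidal response at frequency f from measured acceleration."""
    omega = 2.0 * np.pi * f_hz            # angular frequency in rad/s
    return -np.asarray(a) / omega**2

def upsample(x, fs_in=467.0, fs_out=500.0):
    """Resample an optical time history (467 fps) to the accelerometer processing rate (500 fps)."""
    n_out = int(round(len(x) * fs_out / fs_in))
    return resample(x, n_out)

def trac(x, y):
    """Time Response Assurance Criterion between two aligned, equal-length time histories."""
    x, y = np.asarray(x), np.asarray(y)
    return (x @ y)**2 / ((x @ x) * (y @ y))
```

A TRAC of 1.0 indicates perfectly correlated waveforms, which is why the values in Fig. 1.6 are reported as percentages.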
Fig. 1.5 Displacement time histories obtained with the three OMM algorithms and integrating the accelerometer data for the nominal displacements: (a) 15.6 μm, (b) 11.6 μm, (c) 8.0 μm, (d) 6.2 μm, (e) 4.6 μm, and (f) 3.0 μm
Fig. 1.6 TRAC analysis of the three selected OMM algorithms as a function of displacement magnitude
In MEscope, averaging the motion of a region is not performed automatically by the software (as it is in Motion Amplification); the results are based on the waveforms measured at each point. In this study, the user must manually select all the points in the ROI and average the results in post-processing. This method left room for error, with some of the points included in the ROI possibly being background points that do not contain the displacement of interest. This lab-scale experiment assesses the capabilities of three OMM systems. The study shows that optical measurements can be accurately quantified for displacements as low as 3 μm, with the Motion Amplification system being the most reliable for this application. Even though many vibrating components exhibit very small deformations at low frequencies, these techniques are sensitive enough to detect extremely minute displacements. The vibrational movement of most machinery is imperceptible to the naked eye, but through the use of a 5.3-megapixel high-speed camera, the slightest displacement can be perceived, quantified, and magnified. This would allow for inexpensive, non-intrusive, and fast condition monitoring of active machinery in operation (e.g., wind turbines) as well as of individual components.
1.5 Conclusion

The measurement accuracy of three different OMM algorithms is validated against a traditional contact-based technique. The OMM algorithms, while utilizing different calculation methods, all produce relatively accurate data for displacements as low as 3 μm. The Motion Amplification software has the best overall performance, maintaining a TRAC value over 95% for every displacement magnitude tested. The PMM-PME open-source algorithm performs similarly to Motion Amplification, with slightly less correlation to the reference dataset overall. Within this experiment, MEscope does not operate as consistently as the other systems, likely due to the lack of an optimized displacement waveform averaging executable. This work demonstrates that optical measurements are a very promising solution to some of the problems faced by the SHM community. Large-scale dynamic structures, like bridges and wind turbines, require consistent monitoring and maintenance to ensure the health and safety of the operation. This upkeep is typically quite expensive, but non-contact techniques like OMM have the potential to significantly reduce the time, cost, and effort of SHM while producing more meaningful data that can be more effectively visualized.

Acknowledgments This paper is based upon work partially supported by the National Science Foundation under grant numbers 1362022, 1362033, 1916715, and 1916776 (I/UCRC for Wind Energy, Science, Technology, and Research) and from the members of WindSTAR I/UCRC. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation or the sponsors.
References

1. Wadhwa, N., et al.: Eulerian video magnification and analysis. Commun. ACM. 60(1), 87–95 (2016)
2. Drewry, M.A., Georgiou, G.: A review of NDT techniques for wind turbines. Insight-Non-Destruct. Test. Cond. Monitor. 49(3), 137–141 (2007)
3. Perry, M., McAlorum, J., Fusiek, G., Niewczas, P., McKeeman, I., Rubert, T.: Crack monitoring of operational wind turbine foundations. Sensors. 17(8), 1925 (2017)
4. McAlorum, J., Perry, M., Fusiek, G., Niewczas, P., McKeeman, I., Rubert, T.: Deterioration of cracks in onshore wind turbine foundations. Eng. Struct. 167, 121–131 (2018)
5. He, M., Bai, X., Ma, R., Huang, D.: Structural monitoring of an onshore wind turbine foundation using strain sensors. Struct. Infrastruct. Eng. 15(3), 314–333 (2019)
6. Non-destructive testing of turbine foundations. https://www.windsystemsmag.com/non-destructive-testing-of-turbine-foundations/ (accessed February 2, 2022)
7. Shang, Z., Shen, Z.: Multi-point vibration measurement and mode magnification of civil structures using video-based motion processing. Autom. Constr. 93, 231–240 (2018)
8. Sarrafi, A., Mao, Z., Niezrecki, C., Poozesh, P.: Vibration-based damage detection in wind turbine blades using Phase-based Motion Estimation and motion magnification. J. Sound Vib. 421, 300–318 (2018)
9. Liu, C., Torralba, A., Freeman, W.T., Durand, F., Adelson, E.H.: Motion magnification. ACM Trans. Graph. (TOG). 24(3), 519–526 (2005)
10. Al-Naji, A., Lee, S.-H., Chahl, J.: An efficient motion magnification system for real-time applications. Mach. Vis. Appl. 29(4), 585–600 (2018)
11. Sushma, M., Gupta, A., Sivaswamy, J.: Semi-automated magnification of small motions in videos. In: International Conference on Pattern Recognition and Machine Intelligence, pp. 417–422. Springer (2013)
12. Verma, M., Ghosh, R., Raman, S.: Saliency driven video motion magnification. In: National Conference on Computer Vision, Pattern Recognition, Image Processing, and Graphics, pp. 89–100. Springer (2017)
13. Bao, Y., Seshadri, P., Mahadevan, S.: Motion magnification for mode shape determination. In: 58th AIAA/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference (2017)
14. Dorn, C.J., et al.: Automated extraction of mode shapes using motion magnified video and blind source separation. In: Topics in Modal Analysis & Testing, Volume 10, pp. 355–360. Springer (2016)
15. Eitner, M.A., Miller, B.G., Sirohi, J., Tinney, C.E.: Operational modal analysis of a thin-walled rocket nozzle using phase-based image processing and complexity pursuit. In: Rotating Machinery, Optical Methods & Scanning LDV Methods, Volume 6, pp. 19–29. Springer (2019)
16. Hassoun, H., Hallal, J., Duhamel, D., Hammoud, M., Diab, A.H.: Modal analysis of a Cantilever beam using an inexpensive smartphone camera: motion magnification technique. Int. J. Mech. Mechatron. Eng. 15(1), 52–56
17. Molina-Viedma, A.J., López-Alba, E., Felipe-Sesé, L., Díaz, F.A.: Operational deflection shape extraction from broadband events of an aircraft component using 3D-DIC in magnified images. Shock. Vib. 2019, 1 (2019)
18. Valente, N.A., Mao, Z., Southwick, M., Niezrecki, C.: Implementation of total variation applied to motion magnification for structural dynamic identification. In: Rotating Machinery, Optical Methods & Scanning LDV Methods, Volume 6, pp. 139–144. Springer (2020)
19. Rohe, D.P., Reu, P.L.: Experimental modal analysis using phase quantities from phase-based motion processing and motion magnification. Exp. Tech. 45(3), 297–312 (2021)
20. Sarrafi, A., Mao, Z.: Mapping motion-magnified videos to operating deflection shape vectors using particle filters. In: Rotating Machinery, Optical Methods & Scanning LDV Methods, Volume 6, pp. 75–83. Springer (2019)
21. Yang, Y., et al.: Blind identification of full-field vibration modes from video measurements with phase-based video motion magnification. Mech. Syst. Signal Process. 85, 567–590 (2017)
22. Poozesh, P., Sabato, A., Sarrafi, A., Niezrecki, C., Avitabile, P., Yarala, R.: Multicamera measurement system to evaluate the dynamic response of utility-scale wind turbine blades. Wind Energy. 23(7), 1619–1639 (2020)
23. Valente, N.A., do Cabo, C.T., Mao, Z., Niezrecki, C.: Quantification of phase-based magnified motion using image enhancement and optical flow techniques. Measurement. 189, 110508 (2021)
24. Chen, J.G., Wadhwa, N., Cha, Y.-J., Durand, F., Freeman, W.T., Buyukozturk, O.: Modal identification of simple structures with high-speed video using motion magnification. J. Sound Vib. 345, 58–71 (2015)
25. Chen, J.G., Davis, A., Wadhwa, N., Durand, F., Freeman, W.T., Büyüköztürk, O.: Video camera–based vibration measurement for civil infrastructure applications. J. Infrastruct. Syst. 23(3), B4016013 (2017)
26. Chen, J.G., Wadhwa, N., Cha, Y.-J., Durand, F., Freeman, W.T., Buyukozturk, O.: Structural modal identification through high speed camera video: motion magnification. In: Topics in Modal Analysis I, Volume 7, pp. 191–197. Springer (2014)
27. Hay, J.R., Slemp, M.W.: Apparatus and method for visualizing periodic motions in mechanical components, United States (2016)
28. Schwarz, B., Richardson, S., Tyler, J., Spears, R., Richardson, M.: Post-Processing ODS Data from a Vibration Video
29. Farnebäck, G.: Two-frame motion estimation based on polynomial expansion. In: Scandinavian Conference on Image Analysis, pp. 363–370. Springer (2003)
Chapter 2
To the Moon! Space Launch System Modal Testing with Video and Motion Magnification

Justin G. Chen, Raul Rios, Kevin E. Franks, and Eric C. Stewart
Abstract MIT Lincoln Laboratory and NASA Marshall Space Flight Center have been collaborating on using video camera measurements and motion magnification for modal testing of large aerospace components for several years. This presentation will discuss results from the Space Launch System Integrated Modal Test (IMT) and Dynamic Rollout/Rollback Test (DRRT) in support of the Artemis I mission. During the IMT, the data collection focused on operational mode shapes. In addition, the cameras were repositioned mid-test to better understand the physics of a low-frequency torsion mode. The non-contact nature of video data capture allowed for the quick redeployment of the cameras without causing any delay in the test schedule, whereas traditional instrumentation would have required a pause in testing to attach the sensors to the test article. The motion magnification analysis was able to find the low-frequency operational mode shapes and help the test team better understand the physics of the torsion mode. Building upon the success of the IMT motion magnification work, a camera system was used during the DRRT to find operational mode shapes, to determine whether the physics of the low-frequency torsion mode persisted under different boundary conditions, and to measure the relative deflection of the vehicle and the mobile launcher (ML) tower during the roll. In this chapter, we will present operational mode shape results, discuss the physics of the torsion mode, and review experimental setup idiosyncrasies to help the community in designing video camera measurement systems. Keywords Optical · Motion magnification · Video · Aerospace · Artemis
2.1 Introduction

The Artemis program is the NASA effort to land the first woman and next man on the moon. As part of that program, the Artemis I flight vehicle consists of the Space Launch System (SLS) and Orion Multi-Purpose Crew Vehicle (MPCV), and that system is launched from the mobile launcher (ML). To understand the dynamics of the flight system and the integrated ground configuration, a “building block” approach to modal testing was used [1]. In this building block approach, modal tests were performed on the major substructures of the system [2, 3], and the IMT [4] was performed on the ground configuration to validate the system-level response. The purpose of the modal testing is to validate the finite element models (FEMs) that are used to determine the vehicle loads and control stability of the vehicle. While the substructure modal tests validate the FEMs for those components, the interface stiffnesses between the substructures have high uncertainties [5] until the integrated structure is tested. Flight model uncertainties, including the interfaces, are critical to understand due to the tight uncertainty tolerances that are required by the loads and control stability analysis. While the flight model uncertainties are tight, the ground model uncertainties are even more restricted due to the relationship between the flight modes and the ground modes [6]. With these tight uncertainties, it is critical to understand and characterize the fundamental bending and torsion modes in the ground configuration. This modal testing of the ground configuration was undertaken in a set of tests with a full complement of wired accelerometers all around the vehicle to measure the vibration response to a set of shakers located all around the vehicle during the IMT and as a “test of opportunity” during DRRT to and from the launch pad.
J. G. Chen () · R. Rios MIT Lincoln Laboratory, Lexington, MA, USA e-mail: [email protected] K. E. Franks · E. C. Stewart NASA Marshall Space Flight Center, Huntsville, AL, USA © The Society for Experimental Mechanics, Inc. 2024 J. Baqersad, D. Di Maio (eds.), Computer Vision & Laser Vibrometry, Volume 6, Conference Proceedings of the Society for Experimental Mechanics Series, https://doi.org/10.1007/978-3-031-34910-2_2
Fig. 2.1 Space Launch System at Pad 39B atop the mobile launcher (ML)
Since the testing occurs directly on the flight hardware that will be going to the moon, placing or removing these sensors requires a great deal of careful labor and time. This is a common difficulty with wired instrumentation for testing, and thus more researchers are turning to video cameras to provide quickly reconfigurable “virtual” sensor placements and more general situational awareness instead of point measurements [7]. Computer vision algorithms are used to extract useful information (e.g., displacements) from video cameras and recorded video; optical flow is typically used [8], as well as a related class of algorithms termed “motion magnification” [9, 10]. The key ability of motion magnification is not only to measure small sub- or single-pixel displacements from video but also to visualize and amplify small motions in specific frequency bands, typically corresponding to operational modes of a structure. These techniques have been used for a variety of applications [11–13]. This extended abstract will describe motion magnification–assisted analysis of video collected during the SLS IMT and DRRT in support of the Artemis I mission. We will discuss the background of the tests and the methods involved in the instrumentation, and give a brief preview of the results. We also hope to describe specific experimental considerations for participating in large-scale modal and vibration tests with video cameras for the purpose of motion magnification (Fig. 2.1).
2.2 Background

The IMT and DRRT provided opportunities to collect video to demonstrate and validate motion magnification alongside traditional instrumentation methods, refine the experimental methodology by participating in a real large-scale test, and potentially provide some benefit to the SLS engineering team. We briefly describe the tests themselves and the video camera experimental setup we used during the test campaign.
2.2.1 IMT – Integrated Modal Test

The IMT is the modal test that measures the dynamic response of the Artemis I vehicle in the ground configuration. It consists of the ML, SLS vehicle – consisting of the core, boosters, Launch Vehicle Stage Adapter (LVSA), Interim Cryogenic
Propulsion Stage (ICPS), Orion Stage Adapter (OSA), and the Mass Simulator of Orion (MSO). The SLS subsystem components are all flight hardware, whereas the MSO is a metallic cylinder that closely represents the total mass and center of gravity (CG) of the Orion spacecraft. The MSO is designed to be nearly rigid with respect to the frequency range of interest in the IMT. However, the Orion spacecraft is used during the DRRT since it is critical hardware for the Wet Dress Rehearsal (WDR) test. There are two test configurations during the IMT. The 6-point modal test configuration is when the ML-SLS-MSO integrated stack is configured to sit atop six mount mechanisms that are positioned in six locations along the bottom of the ML. The other test configuration is the 10-point modal test, where the crawler-transporter (CT) is used to pick up the integrated stack to the point that only half of the test weight was resting on the mount mechanisms and the other half was on the CT. Two test configurations were used in an attempt to isolate and understand the dynamics of the Artemis I vehicle from the dynamics of the ML. The 10-point configuration, with the CT used to prop up the vehicle, is shown in Fig. 2.2a, and the 6-point configuration is shown in Fig. 2.2b. Figure 2.2c shows a picture of the ML-SLS-MSO integrated stack inside the Vehicle Assembly Building (VAB). The purpose of the IMT was to capture and understand the low-frequency dynamics of the vehicle that govern the flight control stability and loads analyses. In each configuration, five primary shakers were used to excite the vehicle and nearly 300 accelerometer locations were used to measure the dynamic response. Additional accelerometers were used on the CT to measure its motion during the 10-point modal test. There were 16 target modes in the 10-point configuration and 31 target modes in the 6-point configuration that were adequately captured by the wired test instrumentation. While all target modes were adequately captured during the IMT, the first low-frequency torsion mode was a drastically lower frequency than predicted in the pre-test analysis. During testing, examination of the load cells in the booster-core aft struts, seen in Fig. 2.3c, showed that they were not actively engaging at all levels of excitation. This aft strut free play decreased the torsional stiffness of the vehicle, which led to the low torsion frequency. While there were load cells in the aft struts, there were not accelerometers present on both sides of the aft struts to measure relative motion on either side of each strut. It was logistically infeasible to add accelerometers to the region of interest mid-test, so the cameras being used for motion magnification analysis of the integrated stack as in Fig. 2.3a, b were redeployed to focus on the local motion of the aft struts as in Fig. 2.3c.
Fig. 2.2 (a) IMT 10-point configuration, (b) IMT 6-point configuration, and (c) picture of the Artemis I IMT in the Vehicle Assembly Building
Fig. 2.3 (a) Picture of the camera setup atop a tripod, (b) camera view of the SLS with MSO at the top of the stack, (c) camera view of aft struts connecting the booster to the core stage
Fig. 2.4 (a) DRRT ML 0-deck camera to record the aft SRB struts and generate vehicle motion, (b) DRRT ML tower cameras to record relative motion between the ML tower and vehicle, (c) field of view of the ML 0-deck camera, and (d) field of view of the ML tower camera
2.2.2 DRRT – Dynamic Rollout/Rollback Test

The goal of the DRRT is to provide substantiating dynamic test data of the flight vehicle when it has different boundary conditions and excitation inputs from the IMT. The DRRT configuration consists of the ML sitting on top of the CT, the Artemis I flight vehicle sitting on the ML, and the umbilicals connecting the ML tower with the flight vehicle. The excitation of the system comes from the operation and rolling of the CT. The excitation levels of the DRRT were expected to be larger than the excitation experienced during the IMT. One focus of the DRRT is therefore to determine if the first torsion mode has a much lower frequency than predicted in the pre-test analysis. Similar to the IMT, no accelerometers were placed at the ends of the aft struts to determine the relative motion of the booster and core during the torsion mode. Cameras were placed on the ML deck to measure the low-frequency bending modes during roll, as seen in Fig. 2.4a, c. Another focus of the DRRT is the relative deflection of the flight vehicle with respect to the ML tower during the roll. This is an important measurement because the vehicle stabilization system (VSS) induces loads on the SLS based on how the vehicle and tower deflect during the DRRT and while the vehicle is at the launch pad. This relative deflection is difficult to measure accurately during the roll, so cameras were set up to use motion magnification, as seen in Fig. 2.4b, d.
2.3 Analysis

There are potentially many difficulties when running a video camera as secondary and non-essential instrumentation to the main wired system in a modal test, as well as a new set of challenges with large-scale structural testing (vs. laboratory testing). We outline some experimental guidelines, some of which may be obvious, but others that are hopefully useful, for a successful experimental campaign. We also provide a preview of some of the results determined from the analyzed video collected during testing. The presentation at IMAC will go into more detail regarding the findings and show some motion magnified videos.
We will briefly discuss a standard list of issues and differences from laboratory testing that one might encounter during real-world testing, including power for instrumentation, lighting in outdoor or uncontrollable conditions, and triggering of cameras. Power is often a challenge during outdoor or large-scale tests as outlets may be scarce or far from the location that you want your cameras. Bringing a long extension cord or your own batteries is advised. We have found that GoPros are prone to overheat when being used with external battery packs or plugged in to wall power, so it is recommended to remove the internal battery while running with external power. Lighting can be a potential source of interference due to either strobing or AC driven lighting, or it can be slowly changing over the time period of the collect (e.g., sunset); in either case it may present issues. Dealing with changing lighting has always been a challenge for motion magnification; however, a new paper may offer a potential solution [14]. In drastically changing lighting conditions, such as at dusk, cameras should be set to a manual or fixed focus such that the autofocus system does not end up focusing on something erroneous due to the change in lighting. The best situation would be to bring your own lighting to ensure consistency, but this is often not possible. Camera triggering can be a complicated task, especially when combined with runtime limitations due to power or storage capacity, if it is not possible for an experimenter to simply hit the “record” button at test time. Consumer cameras or DSLR photography cameras with video capabilities often have a 30-minute video length restriction due to European tax laws, so take care if you are using one or planning on purchasing a new video camera for especially long tests; consider a dedicated camcorder/video camera instead. Pay extra attention when using any delayed triggering/record functionality provided by an external timed remote shutter with any power saving or sleep features of your camera. A final recommendation is to fully test the exact experimental configuration you plan on using regarding power or triggering, in the exact way you are going to use it (e.g., delayed triggering, runtime) to make sure everything works as intended before the big day. If the test collection is a one-shot deal, and it is vital to get video, multiple backup systems are ideal.
2.3.1 Results Preview

Here is a quick preview and example of some of the operational frequencies measured during the IMT and DRRT. Figure 2.5 shows spectrograms representative of the bulk operational modes of the vehicle due to sine sweep excitation from the shakers. Figure 2.6 shows spectrograms that are mostly representative of the excitation from the CT during Rollout as well as some operational modes of the vehicle.
Fig. 2.5 Spectrograms showing response of the vehicle due to sine sweep excitation from the IMT 10-point testing
Fig. 2.6 Spectrograms showing excitation and vehicle response from DRRT (Rollout) from the ML 0-deck camera
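For readers generating comparable plots from their own camera data, a displacement (or pixel-intensity) time history extracted from video can be turned into a spectrogram as sketched below. The sampling rate, window length, and frequency range are placeholders, not the settings used for Figs. 2.5 and 2.6.

```python
import numpy as np
from scipy.signal import spectrogram
import matplotlib.pyplot as plt

def plot_spectrogram(x, fs, fmax=10.0):
    """Spectrogram of a displacement time history extracted from video.

    x: 1-D displacement signal; fs: frame rate (Hz); fmax: highest frequency of interest.
    """
    f, t, Sxx = spectrogram(x, fs=fs, nperseg=1024, noverlap=768)
    keep = f <= fmax                                  # only low-frequency modes are of interest
    plt.pcolormesh(t, f[keep], 10 * np.log10(Sxx[keep] + 1e-12), shading="auto")
    plt.xlabel("Time (s)")
    plt.ylabel("Frequency (Hz)")
    plt.colorbar(label="PSD (dB)")
    plt.show()
```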
2.4 Conclusion

We discussed an instrumentation and test campaign to collect video and use motion magnification analysis alongside a traditional accelerometer system during the IMT and DRRT in support of the Artemis I program. The cameras were able to accurately measure the operational mode frequencies of the vehicle in both tests, and motion magnification was able to pull out visualizations of operational mode shapes. We also provided recommendations for participating in large-scale real-world tests.

Acknowledgments The authors would like to thank Dan Lazor and Tom Erdman for their invaluable assistance before and during the IMT and DRRT tests. The authors would also like to thank Katrina Magno for her contribution to the development of the camera system.

Distribution Statement A. Approved for public release. Distribution is unlimited. This material is based upon work supported by the Natl Aeronautics & Space Admin under Air Force Contract No. FA8702-15-D-0001. Any opinions, findings, conclusions, or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the Natl Aeronautics & Space Admin.
References

1. Sills, J.W., Allen, M.S.: Historical review of “Building Block Approach” in validation for human space flight. In: Sensors and Instrumentation, Aircraft/Aerospace, Energy Harvesting & Dynamic Environments Testing, Volume 7, pp. 1–20 (2020)
2. Stasiunas, E.C., Parks, R.A., Sontag, B.D., Chandler, D.E.: Modal test of the NASA mobile launcher at Kennedy Space Center. In: Sensors and Instrumentation, Aircraft/Aerospace, Energy Harvesting & Dynamic Environments Testing, Volume 7, pp. 213–228. Springer, Cham (2021)
3. Stasiunas, E.C., Parks, R.A., Sontag, B.D., Chandler, D.E.: Green Run Modal Test of the NASA Space Launch System Core Stage. In: IMAC XXXIX (2021)
4. McMahan, T.: Vibration Tests for Moon Rocket Help Ensure Safe Travels on Road to Space. https://www.nasa.gov/exploration/systems/sls/vibration-tests-for-moon-rocket-help-ensure-safe-travels-on-road-to-space.html, Aug 19, 2021
5. Alley, V.L., Leadbetter, S.A.: Prediction and measurement of natural vibrations of multistage launch vehicles. AIAA J. 1(2), 374–379 (1963)
6. Stewart, E.C., Bertaska, I.R., Zuo, Q.H., Swatzel, S., Hahn, S.R., Howsman, T.G.: The Flight Dynamics Risk Assessment of Artemis I. In: Proceedings of the 2022 AIAA SciTech Forum and Exposition, San Diego (2022)
7. Schumacher, T., Shariati, A.: Monitoring of structures and mechanical systems using virtual visual sensors for video analysis: fundamental concept and proof of feasibility. Sensors. 13(12), 16551–16564 (2013)
8. Horn, B.K., Schunck, B.G.: Determining optical flow. Artif. Intell. 17(1–3), 185–203 (1981)
9. Wadhwa, N., Rubinstein, M., Durand, F., Freeman, W.T.: Phase-based video motion processing. ACM Trans. Graph. (TOG). 32(4), 1–10 (2013)
10. Wadhwa, N., Chen, J.G., Sellon, J.B., Wei, D., Rubinstein, M., Ghaffari, R., Freeman, D.M., Büyüköztürk, O., Wang, P., Sun, S., Kang, S.H.: Motion microscopy for visualizing and quantifying small motions. Proc. Natl. Acad. Sci. 114(44), 11639–11644 (2017)
11. Chen, J.G., Adams, T.M., Sun, H., Bell, E.S., Büyüköztürk, O.: Camera-based vibration measurement of the World War I memorial bridge in Portsmouth, New Hampshire. J. Struct. Eng. 144(11), 04018207 (2018)
12. Yang, Y., Dorn, C., Mancini, T., Talken, Z., Kenyon, G., Farrar, C., Mascareñas, D.: Blind identification of full-field vibration modes from video measurements with phase-based video motion magnification. Mech. Syst. Signal Process. 85, 567–590 (2017)
13. Sarrafi, A., Mao, Z., Niezrecki, C., Poozesh, P.: Vibration-based damage detection in wind turbine blades using Phase-based Motion Estimation and motion magnification. J. Sound Vib. 421, 300–318 (2018)
14. Wang, Y., Hu, W., Teng, J., Xia, Y.: Phase-based motion estimation in complex environments using the illumination-invariant log-Gabor filter. Mech. Syst. Signal Process. 186, 109847 (2023)
Chapter 3
Effects of Image-Pair Processing Styles on Phase-Based Motion Extraction

Sean Collier and Tyler Dare
Abstract Phase-based optical flow (PBOF) is a growing discipline in vibration extraction methods. Based on camera data, PBOF uses image gradients and image-pair differences to extract motion from video. Commonly, this technique is combined with the more ubiquitous digital image correlation, due in part to the restriction of PBOF to small, sub-pixel vibrations. In this chapter, work was done to delve deeper into the nominal upper bound and determine its relation to the chosen image-pair processing style. In particular, the upper bound was evaluated with respect to static (current frame-to-reference), dynamic (current frame-to-previous frame), and hybrid dynamic-static processing styles. Results include advantages of standard static and dynamic processing styles, leading to the presentation of the so-called region of goodness associated with the hybrid processing style. Discussion follows for how the hybrid processing style can be used for large-motion handling in simple structures, as well as modifications required for those that are heavily textured. The hybrid processing style is shown to extend the upper bound to the number of pixels between parallel edges within the structure, increasing the efficacy of PBOF to larger stationary vibrations. Keywords Static processing · Dynamic processing · Phase-based · Motion extraction · Vibration
3.1 Introduction

Gradient-based optical flow operates under a linear, small motion assumption between two frames, such that the higher-order terms may be ignored. When moving to a phase-based approach, wrapping must also be taken into account. These two constraints form the small motion assumption for phase-based optical flow (PBOF). While the upper bound may be fixed in a purely mathematical sense, as will be shown, there are options to be considered with respect to the image pair in question. This work documents—within one collective reference—a study into the behavior of some well-known image processing styles and the effects of their use in PBOF, particularly in cases of super-pixel motion. Moreover, a processing scheme used in practice [1] but not explicitly discussed in the literature is presented with its associated effects and usability.
3.2 Object of Interest

Here, comparisons between different processing styles will be done on a bumpy (textured) cube created in Blender [2]. The cube is exhibiting sub-pixel, purely vertical vibration at 0.75 px magnitude, while also undergoing super-pixel rotation with a radius of 50 px. The frequency of vibration is 8 Hz, and the sample rate for the rendered video is 500 frames per second. The rendered cube and described motion path are shown in Fig. 3.1.
S. Collier () · T. Dare Graduate Program in Acoustics, The Pennsylvania State University, State College, PA, USA e-mail: [email protected]; [email protected] © The Society for Experimental Mechanics, Inc. 2024 J. Baqersad, D. Di Maio (eds.), Computer Vision & Laser Vibrometry, Volume 6, Conference Proceedings of the Society for Experimental Mechanics Series, https://doi.org/10.1007/978-3-031-34910-2_3
Fig. 3.1 Illustration of the motion path (LEFT) prescribed for the bumpy cube (RIGHT)
3.3 Static, Frame-to-Reference, Image Processing

Vibrations are inherently some deviation from a reference, with the elastic and inertial properties causing the object to oscillate about that fixed point. In this sense, it follows that a frame-to-reference—also called static—processing style provides a measurement that matches this behavior. Static processing schemes are typically what is used in PBOF when done for motion magnification [1]. Recall that the static processing can be represented as

Δφ = φ(x, t) − φ(x, 0),  (3.1)
with φ(x, t) being the phase at some time t, and φ(x, 0) being the phase at some reference—typically at time zero, though any frame with the object at rest will do. Recall also that the phase difference needs to be less than or equal to π so as to avoid phase wrapping. With a filter of approximately 4 px, the small motion assumption for statically processed PBOF can be written as a bound on the extracted motion,

|δ| ≤ 2–3 px.  (3.2)
When using this style of processing, it is then required that motions remain at or below this upper bound. Otherwise, super-pixel motions will become wrapped and can come across as sub-pixel. This effect is evident in Fig. 3.2, where motion extraction is done using a static processing scheme for the object of interest. Of course, this case is well beyond the intended use of such a processing scheme, and its failure is well anticipated. Here, the static case serves as a basis of motivation for the other processing schemes.
3.4 Extensions of Static Processing and the Ubiquitous Image Pyramid

Clearly, we do not live in a world where all motions are sub-pixel always. Especially in structural acoustic applications, objects are often capable of exhibiting sub-pixel and super-pixel motions simultaneously. It is for this reason that the small motion assumption needed to be subverted for any sort of practical use. The easiest way to make the physical motions captured by the camera sub-pixel is to make the pixels themselves larger. Obviously, it is impractical to swap out image sensors for larger or smaller pixels. Instead, the images are downsampled by some factor, typically by half, in an iterative fashion until the motion becomes sub-pixel. This works because the camera is capturing the same physical scene with progressively fewer (and effectively larger) pixels, meaning the motion is effectively smaller. As long as the amount of downsampling is known, a simple scaling by that factor recovers the original scale of motion. This approach was suggested by Anandan [3] and has become a widely accepted rule of thumb in computer vision for handling videos with super-pixel motion. When all motions of interest are initially super-pixel (i.e., super-pixel at full resolution), then this approach works very well.
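A minimal sketch (not from the chapter) of the bookkeeping behind this rule of thumb; only the blur-and-decimate step and the factor-of-two rescaling follow from the description above, and the motion extraction itself is assumed to be supplied elsewhere.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def downsample(frame, levels):
        """Blur and decimate by a factor of 2 per level so the apparent motion shrinks."""
        for _ in range(levels):
            frame = gaussian_filter(frame, sigma=1.0)[::2, ::2]
        return frame

    def levels_needed(expected_motion_px, target_px=1.0):
        """Number of half-resolution steps needed to bring the motion below ~target_px."""
        levels = 0
        while expected_motion_px > target_px:
            expected_motion_px /= 2.0
            levels += 1
        return levels

    # If the full-resolution motion is ~6 px, two levels reduce it to ~1.5 px, and any
    # displacement extracted at that level is scaled back up by 2**levels = 4.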
Fig. 3.2 Static motion extraction results for the object of interest, showing clear wrapping issues for super-pixel motions
In addition to making the motions smaller, the downsampling procedure typically involves a smoothing step to avoid spatial aliasing. This in turn raises the SNR and leads to a cleaner motion extraction. However, the current trend in applying this work to vibration measurements is to capture initially sub-pixel motions. Caution should be taken here. Blindly applying the pyramid approach can prove detrimental in such cases, as well as in cases of simultaneous motions in both regimes [4]. The image pyramid (alone) is not the only form of large-motion handling suggested by the computer vision community. As has already been surmised, a pyramid approach can drastically reduce the spatial information provided the motions are extremely super-pixel. As fine structures or large spatial resolution may be of interest, this reduction is non-ideal. Likewise, it is not that optical flow cannot be applied in such cases of super-pixel motion, only that the error contained within the higher-order terms will increase. Thus, two main iterative techniques have been suggested to handle super-pixel motions of interest. The first technique uses the image pyramid in an informed way through a coarse-to-fine estimate [3]. In this approach, the images are downsampled until all motions are assured sub-pixel. Using the resulting flow field, motions are then predicted at the next highest level of detail, and the image pair is shifted by the predicted motion. Optical flow is then performed, the initial guess is adjusted by the result (assumed to be the error captured by the higher-order terms and/or the error caused by the downsampling), and the step is repeated until the error converges. This process is repeated until the image is back to full resolution. In theory, this allows all super-pixel and sub-pixel motions of interest to be captured, although it also implies that optical flow is done at least once per layer in the image pyramid, which may be quite large depending on the original scale of motion and image size; this technique can therefore be very time-consuming. The second technique removes the image pyramid entirely and instead makes use of the assumption that any motions not captured by the linear optical flow are contained within the higher-order, super-pixel terms [5]. In this method, optical flow is performed on the full-resolution image pair, knowing the result will contain some but not all of the full motion signal. One of the images in the pair is then warped by the resulting motion estimate. This removes a portion of the overall motion, meaning the resulting disparity between the pair is effectively smaller. The process is iterated on the warped pair until the error converges, with each output being superimposed to form the entire motion signal. Again, while clearly effective, the iterative nature implies the technique can be very time-consuming. These techniques have been applied to intensity-based optical flow, but applications of them to PBOF have yet to appear in the literature. In contrast, most work in PBOF uses an image pyramid or dynamic processing procedure.
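A minimal sketch of the second, warp-and-iterate idea, with the small-motion flow step and the warping routine left as hypothetical callables, since the chapter only describes the scheme conceptually:

    def refine_motion(ref, cur, flow_step, warp, n_iter=10, tol=1e-3):
        """Iteratively remove the estimated motion from 'cur' and re-run a small-motion
        flow step on the residual pair, superimposing the outputs (scalar 1-D case)."""
        total = 0.0
        for _ in range(n_iter):
            residual = flow_step(ref, warp(cur, -total))   # disparity left after un-warping
            total += residual
            if abs(residual) < tol:
                break
        return total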
3.5 Dynamic, Frame-to-Frame, Image Processing

A simple switch to a frame-to-frame—also called dynamic—processing style allows for more processing schemes when working with super-pixel motions [6, 7]. Two will be discussed here: purely dynamic processing and a hybrid style.
Fig. 3.3 Dynamic motion extraction results for the object of interest at a pixel on edge, showing scaling issues
In purely dynamic processing, the difference in phase is calculated between adjacent images in time, rather than between a current frame and a reference. That is, the phase difference goes as

Δφ = φ(x, t) − φ(x, t − 1),  (3.3)
with φ(x, t) being the spatial phase at some time t. It is still strictly the case that

|δ| ≤ 2–3 px;  (3.4)
however, the disparity or motion between frames is normally smaller than between a frame and the reference. In this way, larger motions may be captured than in the static scheme, so long as the motion frame-to-frame remains within 3 px. Of interest is that this difference can be further reduced by increasing the sample rate of the camera. With high-speed cameras becoming more commonplace in vibration measurement, motions incapable of being captured in a static scheme are now easily resolved through PBOF. For clarity, the OLS problem is now done on adjacent image pairs, rather than from frame to reference. This dynamic procedure is what is typically used in computer vision techniques, as the interest is generally more in trajectory and flow than in displacement or vibration from some reference [3, 5, 8]. To follow through regardless, consider the dynamic motion extractions for the object of interest given by the frame-to-frame OLS. These are differences in displacement values between adjacent frames and so can be thought of as pseudo-velocity terms. Figure 3.3 shows the results of numerically integrating these dynamic extractions from a pixel along the edge of the cube: while the vertical is nearly correct, the horizontal shows a clear issue. Figure 3.4 shows the results from a pixel within the center of the cube: the agreement here is visually quite good. While the centrally located extraction looks much closer to true, these two extractions would ideally be identical and equal to the true motion. Hence, while the overall trend of motion may be captured through such a scheme (i.e., that the object is moving sinusoidally), this is not necessarily the signal of interest for many vibration applications, and the dependence on location is problematic. Rather, we are interested in absolute displacements at all pixels, and so a purely dynamic processing scheme is not quite what is needed. The cause of this issue and its dependence on location is discussed in the next section.
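A minimal sketch of the numerical integration used above, with the per-pair PBOF extraction left as a hypothetical callable:

    import numpy as np

    def dynamic_history(frames, frame_pair_motion):
        """Accumulate frame-to-frame motion estimates into a displacement-like history
        relative to the first frame (a discrete integration of pseudo-velocity terms)."""
        steps = [frame_pair_motion(frames[t - 1], frames[t]) for t in range(1, len(frames))]
        return np.concatenate(([0.0], np.cumsum(steps)))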
3.6 The Region of Goodness: A Hybrid Approach

The so-called region of goodness, as it will be termed here, is a direct result of what appears to be a measure taken in practice to handle large motions within PBOF. If PBOF is done statically, then the small motion assumption (Eq. (3.2)) limits all motions in video to 2–3 px frame-to-reference. Again, not only is this to preserve the linear assumption from the phase-
Fig. 3.4 Dynamic motion extraction results for the object of interest at a central pixel, showing quite good extraction
constancy equation, but more pressingly to avoid phase wrapping issues. If instead the motions are processed dynamically and then accumulated, the phase wrapping can often be avoided, thereby removing one of the small motion constraints. This combines the two styles in a hybrid sense: getting motions related to a reference but with the flexibility of a dynamic processing scheme. This appears to be a known procedure, but documentation is sparse beyond a small side comment [9] and a more robust implementation of the PBOF code [1]. It extends the applicability of the method but does possess some drawbacks, which will be discussed. Rather than processing statically, motions are first processed dynamically, or frame-to-frame. That is, motions are extracted as

Δφ = φ(x, t) − φ(x, t − 1),  (3.5)
which allows the small motion assumption to be satisfied frame-to-frame:

|δ| ≤ 2–3 px.  (3.6)
An implementation such as this largely removes the phase wrapping concerns and makes use of the nice properties of dynamic processing. Now, as most use cases in vibration testing involve measuring a particular point’s motion over time, these dynamic measurements need to be related back to a reference image. In more robust versions of the MIT CSAIL code [1], this is accomplished by accumulating a series of dynamic (phase) motion extractions prior to being solved through the OLS. Mathematically, this accumulation can be represented as n + 1 dynamic motion extractions,

[φθ(x, t) − φθ(x, t − 1)] + [φθ(x, t − 1) − φθ(x, t − 2)] + · · · + [φθ(x, t − n + 1) − φθ(x, t − n)] + [φθ(x, t − n) − φθ(x, 0)] = ωx δ(t),  (3.7)

for some position x. It is clear that all of the cross terms cancel, leaving what is in effect a static processing result,

φθ(x, t) − φθ(x, 0) = ωx δ(t).

This becomes the temporal component, φt, within the phase-constancy equation, and so too in the OLS solution. This procedure is repeated for each time step, appending the telescoping series with the next pair of adjacent images in time:
[φθ(x, 1) − φθ(x, 0)] = ωx δ1(t)
[φθ(x, 2) − φθ(x, 1)] + [φθ(x, 1) − φθ(x, 0)] = ωx δ2(t)
[φθ(x, 3) − φθ(x, 2)] + [φθ(x, 2) − φθ(x, 1)] + [φθ(x, 1) − φθ(x, 0)] = ωx δ3(t)
...

At first glance, Eq. (3.7) seems to solve the issue of large motion entirely; unfortunately, many small inconsistencies arise from such a procedure. First, while it is true that the images telescope out accordingly in a temporal sense, individual pixel motions will not when super-pixel motion is present. This is because the pixel of interest will eventually pass a pixel boundary, and the telescoping series will no longer be accurate for that particular pixel location. More important is the implementation of this procedure into the general least squares solution. Recall that, in a simplified case, the motion can be extracted as

x_i = (∂φ(x, 0)/∂x_i)⁻¹ (∂φ(x, t)/∂t),  (3.8)
which means even in a hybrid approach, the gradient still depends on the reference frame. Part of the basis of gradient-based optical flow is a preservation of spatial gradient constancy, and when motions become large enough within the time difference, this assumption will be broken. Clearly, the location of the edge within φ(x, 0) will eventually be inconsistent with the edge after some large amount of super-pixel motion contained within the time difference. Thus, this hybrid type of processing is limited to within a region of constant spatial gradient. That said, the result may still be close if surrounding pixels possess similar gradients and exhibit similar temporal behavior. This explains the location dependence of the dynamic approach, noting that it will also affect the hybrid scheme. The object of interest provides a nice example. In Fig. 3.5, we see decent motion extraction from a pixel near the edge until spatial gradient constancy is sufficiently broken; once this occurs, the motion extraction suffers. Indeed, unless every time step is compared to the initial reference frame for constancy, the correctness of the result cannot be quantified for either the hybrid or dynamic schemes. As such, a “new” constraint is gained within this style of processing. The motion should remain within the region of spatial gradient constancy, such that the reference image remains a good candidate for representing the spatial gradients during motion. This is in actuality a constraint always present in PBOF, but with the normal assumption of sub-pixel motions it does not need to be considered explicitly. Therefore, so long as motions remain within a region of constant spatial gradient, PBOF should prove capable of handling super-pixel motions, given a sufficient sample rate to render motions 2–3 px frame-to-frame. A simplified, 1-D version of the hybrid approach is mentioned briefly in a work by Javh et al. [9] and is equivalent when generalized to the OLS.
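A minimal 1-D sketch of the hybrid scheme at a single pixel, combining the wrapped frame-to-frame accumulation of Eq. (3.7) with the simplified extraction of Eq. (3.8); the phase samples and the reference spatial gradient are assumed to come from a steerable-filter response computed elsewhere:

    import numpy as np

    def hybrid_extraction(phases, ref_gradient):
        """Accumulate wrapped frame-to-frame phase differences (telescoping to
        phi(t) - phi(0)) and divide by the reference spatial gradient (1-D case)."""
        steps = np.angle(np.exp(1j * np.diff(phases)))            # each step wrapped to (-pi, pi]
        accumulated = np.concatenate(([0.0], np.cumsum(steps)))   # phase relative to frame 0
        return accumulated / ref_gradient                         # displacement history in px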
Fig. 3.5 Hybrid motion extraction results for the object of interest, showing error once gradient constancy is broken
Fig. 3.6 Synthetic, texture-less cantilever beam created in Blender: an ideal candidate for using the region of goodness
3.7 Rule of Thumb for Spatial Gradient Constancy

This brings about another question: When is spatial gradient constancy preserved? Since this is highly dependent on the structure being observed, there is little theory to yield a concrete answer in general. However, insights gained during evaluations of PBOF [10] give a useful rule of thumb. Primarily, objects that present strong edges perpendicular to the direction of motion but are otherwise texture-less provide good candidates for this scheme. Consider such a structure as in Fig. 3.6 and a pixel of interest along its edge. As the object moves with super-pixel motion, assume the texture-less region moves into the pixel of interest. No “new” spatial gradient information is being observed by the pixel of interest. In actuality, the spatial gradient is zero, but the temporal component is still capturing the change in phase. Thus, so long as the region remains texture-less, the initial spatial gradient can represent the apparent edge that is being tracked, while the temporal component carries the bulk of the motion extraction. The region formed between the initial and current locations of the edge of interest, in truth covered by a texture-less region moving in time, is deemed the “Region of Goodness.” Empirically, a good rule of thumb for this region is related to the motion amplitude, the width of the object in the direction of motion, and the filter response of the steerable pyramid/width of the spatial phase contours. This combination of factors yields a quantitatively defined, yet situation-dependent, upper bound on how large the motion can be. The filter response of the complex steerable pyramid has a width/phase contour of 10 px. When dealing with super-pixel motion, these phase contours shift with the physical object, and the amplitude |δ(t)| of oscillation can be seen to be the distance from the edge at rest to the midline of the phase contour (5 px). This then defines the region of goodness as the area covered between |δ| − 5 px and the edge at rest. This is illustrated and visualized in Fig. 3.7. Thus, so long as the pixel of interest corresponding to the intended measurement location resides in the region continuously—a property that can be checked visually by plotting the surface plot of the extracted hybrid displacements as in Fig. 3.7—super-pixel motion extraction well beyond the small motion assumption is possible with PBOF.
3.8 Deficiencies of a Hybrid Scheme

On the surface, the region of goodness appears to extend PBOF nicely into the super-pixel regime; however, the method quickly breaks down once spatial gradient constancy is no longer preserved. The most obvious issue is when the motion is large enough to introduce a new edge into the pixel of interest, altering the spatial gradient. This can occur when measuring thin beams, for example, where the motion amplitude is larger than the span or width of the object. When this is the case, the individual regions of goodness overlap, leading to nonsensical results. The same scenario holds for lightly textured objects, such as a grid, where only slightly super-pixel motions allow for admissible regions of goodness with minimal overlap but become problematic once larger motions are introduced. Therefore, for very simple structures, the maximum pixel displacement able to be accurately recorded is roughly |δ| ≤ w − 5, with w the width of the object in the direction of motion, in pixels.
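As a purely hypothetical illustration of this bound: for a texture-less beam spanning w = 20 px in the direction of motion, the hybrid scheme would be expected to track amplitudes only up to roughly |δ| ≤ 20 − 5 = 15 px, beyond which the leading- and trailing-edge regions of goodness begin to overlap.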
Fig. 3.7 Illustration for the formation of the region of goodness (LEFT). Example of super-pixel displacement results for a synthetic cantilever beam experiencing positive deflection (RIGHT); here, color references displacement magnitude
In practice, it may be rare to have a truly texture-less object beyond the edges of interest. So long as the edges present with good contrast, it is possible to still use the region of goodness, but the approach is more nuanced. The internal texture of the object or region of interest is still sufficient to affect the motion extraction. In such a case, the individual regions of goodness for the leading and trailing edges of the object (in the case of the beam, the top and bottom edge) need to be considered together. The positive motion will come from the trailing edge, and the negative motion will come from the leading edge. In each case, what would be the second region of goodness is disrupted by the interior texture.
3.9 Conclusion

This chapter collectively discussed the primary image processing styles associated with phase-based optical flow, both in the literature and in practice. Primarily, observations on their effects for super-pixel motion extraction were given on a generic and non-ideal object of interest. The study also explicitly introduced the hybrid processing style used within more complex iterations of the PBOF code by MIT CSAIL [1]. Indeed, it was seen that this hybrid style and the so-called region of goodness provide added flexibility in the handling of large motions but may require further attention and processing for objects that are heavily textured. It is the hope of the authors that this chapter helps clarify and illuminate broader motion extraction potentials of PBOF. That is, while the technique may be strictly limited to 2–3 px motions between any two frames, tools do exist to handle some super-pixel motions of interest.

Acknowledgments We would like to thank the ARL Walker Assistantship program for supporting this work.
References

1. Wadhwa, N., Rubinstein, M., Durand, F., Freeman, W.T.: Phase-based video motion processing. ACM Trans. Graph. 32(4), 1–10 (2013)
2. Blender Online Community: Blender—a 3D modelling and rendering package. Blender Foundation, Stichting Blender Foundation, Amsterdam (2021)
3. Anandan, P.: A computational framework and an algorithm for the measurement of visual motion. Int. J. Comput. Vis. 2, 283–310 (1989)
4. Collier, S., Dare, T.: Regime sorting for multiscale vibration and phase-based motion extraction. In: Di Maio, D., Baqersad, J. (eds.) Rotating Machinery, Optical Methods & Scanning LDV Methods, vol. 6. Springer, Cham (2023)
5. Lucas, B.D., Kanade, T.: An iterative image registration technique with an application to stereo vision. In: Proceedings of the Seventh International Joint Conference on Artificial Intelligence, Volume 2, IJCAI’81, pp. 674–679. Morgan Kaufmann, San Francisco (1981)
6. Wadhwa, N., Rubinstein, M., Durand, F., Freeman, W.T.: Riesz pyramids for fast phase-based video magnification. In: IEEE International Conference on Computational Photography (ICCP), pp. 1–10. Santa Clara (2014)
7. Oh, T.-H., Jaroensri, R., Kim, C., Elgharib, M., Durand, F., Freeman, W.T., Matusik, W.: Learning-based video motion magnification (2018). arXiv preprint arXiv:1804.02684
8. Fleet, D.J., Jepson, A.D.: Computation of component image velocity from local phase information. Int. J. Comput. Vis. 5, 77–104 (1990)
9. Javh, J., Slavič, J., Boltežar, M.: The subpixel resolution of optical-flow-based modal analysis. Mech. Syst. Signal Process. 88, 89–99 (2017)
10. Collier, S., Dare, T.: Accuracy of phase-based optical flow for vibration extraction. J. Sound Vib. 535, 117112 (2022)
Chapter 4
Experiment-based optical full-field receptances in the approximation of sound radiation from a vibrating plate

Alessandro Zanarini
Abstract The exploitation of experiment-based optical full-field technologies in a broad frequency band is here proposed for the numerical simulation of the sound radiation of a vibrating plate, instead of the linear structural finite-element or analytical models—overly simplified on the boundary conditions, frictions, mistunings, and non-linearities—commonly used in limited investigations about a single effective eigenmode of the structure. Spatially detailed operative deflection shapes coming from real testing, indeed, can be a viable dataset for the best achievable representation in the spatial and frequency domains of the real behaviour of manufactured and mounted components around their working dynamic load levels. Rayleigh’s integral formulation is here adopted for the numerical approximation of the sound radiation field from a lightweight rectangular plate retaining a complex and tightly populated structural dynamics, with effective constraints and damping characteristics. Rayleigh’s approach is here reformulated to take advantage of the full-field receptances, to obtain maps of acoustic pressure frequency response functions, or of acoustic pressure, once the structural excitation signature is defined. Examples are given in the space and frequency domains, with special attention to the contribution of the experiment-based full-field receptance maps to the accuracy of the radiated acoustic fields.

Keywords Sound radiation · Acoustic pressure fields · Acoustic transfer functions · Optical full-field techniques · Experiment-based receptance · Full-field FRFs · Structural dynamics · NVH
Nomenclature

DIC: Digital Image Correlation
dof: Degree of Freedom
EFFMA: Experimental Full-Field Modal Analysis
EMA: Experimental Modal Analysis
ESPI: Electronic Speckle Pattern Interferometry
FRF: Frequency Response Function
NDT: Non-destructive Testing
NVH: Noise and Vibration Harshness
ODS: Operative Deflection Shape
SLDV: Scanning Laser Doppler Vibrometer
(ω): Frequency Dependency
X(ω): Displacement Map
F(ω): Excitation Forces
Hd(ω): Receptance Map
4.1 Introduction

Fast sound radiation simulations from vibrating surfaces can usually be achieved by means of linear structural finite-element or analytical models, generally overly simplified on the boundary conditions, frictions, mistuning, and non-linearities of actual parts. Most of the time these investigations may be limited to a single effective eigenmode of the vibrating surface in the structure, as a distributed vibrating source in a single tonal or resonance propagation. Spatially detailed operative deflection shapes coming from broad frequency-band real testing, instead, may be a viable dataset for the best achievable representation in the spatial and frequency domains of the real behaviour of manufactured and mounted components around their working load levels, without any simplifications, usable also with a modally dense and damped structural dynamics and complex patterns in the dynamic signature of the excitations.
A. Zanarini () Department of Industrial Engineering, University of Bologna, Italy e-mail: [email protected] © The Society for Experimental Mechanics, Inc. 2024 J. Baqersad, D. Di Maio (eds.), Computer Vision & Laser Vibrometry, Volume 6, Conference Proceedings of the Society for Experimental Mechanics Series, https://doi.org/10.1007/978-3-031-34910-2_4
The exploitation of experiment-based optical full-field technologies in the sound radiation numerical simulation is therefore at the core of this chapter. Optical full-field receptance maps are becoming an increasingly consistent non-contacting experiment-based means for the estimation of high-spatial-resolution dynamic deflection shapes with low noise on the fields, across a broad frequency range, especially in the case of lightweight structures or panels, with all the blending, and potential delays, of a modally dense dynamics, without the need of any analytical model nor of a truncated modal identification [1, 2]. When full-field measurements started, they were mostly qualitative and not competitive in terms of time-to-result [3], but they were proficiently used to show the whole response shape of a structure at a particular frequency of interest, thanks to the larger vision offered and the high consistency of the viewed field [4]. They already offered a highly detailed picture of the spatial domain against the lumped representation obtained by traditional sensors, with surface ODSs rapidly changing with frequency, functions of the complex blending of the structural dynamics in non-conventional patterns, highlighting where the best location for the lumped vibration sensors or strain gauges (at that time, the only quantitative instruments) could have been. As a non-native full-field instrumentation, the SLDV [2] has introduced the concept of contact-less point measurement—adding no mass to the specimen—in time/frequency domains since the 1980s, expanding it to finer grids of scanned locations, thus extending the concept of the velocity sensors to a spatially detailed acquisition. Thus SLDV is considered the reference in NVH when spatially detailed FRF measurements are needed: SLDV keeps the same peculiarities of lumped sensor technologies (and of established procedures to exploit few dofs), but adds dofs in the spatial domain at a reasonable cost, with a trade-off at high frequency in the sensitivity between displacements and accelerations. Instead, the optical full-field technologies that acquire the motion-related information from photons synchronously recorded at every sensible site of an imaging sensor, normally in a much denser grid, can be called “native”. Therefore, the native full-field techniques approach the measurement from the point of view of the quality obtainable in the spatial domain, acknowledged in terms of consistency of the motion field among the neighbouring dofs, as proved earlier by [5, 6]. Among the native technologies, since the early 2000s [7, 8], ESPI gives, in stroboscopic light acquisitions, extremely accurate displacement fields up to several kHz of frequency; the drawback remains, due to a lack of complete automation till now, the time-consuming stepped-sine excitation/acquisition required to have data at all the lines of a broad frequency band. Another native technology is high-speed DIC, with its first commercial prototypes appearing around 2005. DIC can have good detail in the time-resolved displacement maps, but the evaluation of the correlated data can be very time-demanding. DIC is also generally more limited in the frequency domain, due to both a sensor/electronics bandwidth trade-off between resolution and sampling rate and the difficulty of properly exciting the higher-frequency displacement components, though rapid electronics and processing improvements are foreseen.
Indeed, nowadays, scanning and native full-field optical technologies allow displacement and velocity measurements in a non-contact way with dense spatial mapping, without inertia-related distortions of the dynamics due to added sensors, and without any structural finite-element (FE) or analytical models to be accurately tuned for the lumped sensor expansion [9–11]. Furthermore, ESPI technology actually permits the most precise estimation from non-contacting testing of superficial receptance FRF maps, as previously shown in [12, 13], also for derived quantities (rotational dofs and strains). The accurate maps of experiment-based optical full-field receptances [1, 2] can therefore motivate the trial of the here proposed experiment-based approximation of sound radiation from a vibrating plate, as high-quality estimations of the real structural dynamics with accurate spatial description for complex pattern sensing, especially for lightweight structures in broad frequency bands. Therefore, this work [14] is focused on the effects on the acoustic field prediction of the excitation dynamic signature and of the energy injection location. Rayleigh’s integral formulation can be adopted for the numerical approximation of the Helmholtz equation for the sound pressure field. In the re-formulation here proposed, Rayleigh’s approach takes advantage of the spatial oversampling of full-field receptances from a lightweight rectangular plate in a broad frequency band, to obtain detailed maps of the radiated vibro-acoustic frequency response functions (pressure over force), or of the acoustic pressure once the structural excitation is defined in its dynamic signature and energy injection location, in order to show how the most advanced experiment-based knowledge available can augment the fidelity of the acoustic field mapping. A thin and lightweight rectangular plate was the sample used here; its dynamics was measured using the dynamic ESPI technology, in a broad frequency band, during the fundamental research project TEFFMA¹, funded by the European Commission and carried out by the author at the Vienna University of Technology. The TEFFMA project made a comparison between the state-of-the-art in image-based (native, DIC, and ESPI) full-field optical technologies and the scanning LDV
¹ A. Zanarini, scientific proposer and experienced researcher in the project TEFFMA—Towards Experimental Full-Field Modal Analysis, financed by the Marie Curie FP7-PEOPLE-IEF-2011 PIEF-GA-2011-298543 grant, 1/02/2013–31/07/2015.
as reference, to understand whether these techniques, not yet completely established, can provide NVH applications with enhanced peculiarities. Since in the TEFFMA project 2 distinct shakers were used with broad frequency-band excitation for a complete EFFMA as in [15], the experiment-based full-field FRF approach can bring the complete and real structural dynamics, as subjected to 2 different loading locations, into sound radiation approximations, therefore assessing the effect of the proposed experiment-based full-field receptances as the excitation/loading is varied in its dynamic signature or energy injection point. In such a broad perspective of using only experiment-based quantities, for the retained dynamics, for the energy injection area, and for the high-resolution mapping achievable, the quality of the raw dataset is of utmost relevance in the acoustic field reconstruction. Although the recent works [14, 16–20] are a spin-off of the activities held during the TEFFMA project, a trace of their background needs to be given. Their roots can be found in the HPMI-CT-1999-00029 Speckle Interferometry for Industrial Needs Post-doctoral Marie Curie Industry Host Fellowship project held by the author in 2004–5 at Dantec Ettemeyer GmbH, Germany. Since that testing (see [5, 6]), it became clear how ESPI measurements allow a dense mapping of the local dynamic behaviour for enhanced structural dynamics assessments (see [8]) and fatigue spectral methods (see [21, 22]). The results in [8] were the starting point of the TEFFMA project, whose works saw earlier presentations in [23, 24], followed by [25–28]. A first gathering of the works of TEFFMA was attempted in [12], whereas [29] made an extensive description of the whole receptance testing, and [15] detailed the EFFMA and the full-field-based model updating attempts. In [30], the full-field techniques were scrutinised to find the error distribution on the shapes at each frequency line: this analysis underlined the quality of ESPI datasets in full-field dynamic testing. Furthermore, the precise comparison in [13], about new achievements for rotational and strain FRF high-resolution experiment-based maps, again put in evidence the quality of the ESPI datasets, as numerical derivatives can easily spread errors. In [17], the risk index was introduced as a metric to distinguish failure-exposed areas in a dynamically loaded component, with a focus also on the evidence from a damaged fiberglass-reinforced composite panel [6, 16]. In [18], there appeared a comparison among risk index maps when the same white noise spectrum was changed in injection location, whereas in the recent journal paper [19] the variability of risk maps was analysed through changes in the excitation dynamic signature from only one input location. In [20], two excitation signatures and two energy injection locations were combined in risk grading. Different works [12, 13, 15, 19, 21–29] were therefore already published by the author on the full-field optical measurements and on their perspectives. Especially, the latest works [12, 13, 15, 19, 23, 26, 29] from the TEFFMA project highlighted the tangible advances for the consistency and continuity of the data fields, which have become available through the growing native full-field techniques, with clear effects in model updating and derivative calculations (rotational dofs, strains, stresses, and risk index maps).
The interested reader can appreciate in [12, 13, 26] the effect of the measurement noise on derivative quantities; unfortunately, due to the complexity or burden of their measurement, the rotational dofs are usually disregarded, although they are relevant for the successful building of a reliable dynamic model for complex structures [1, 2, 11, 31–35]. This chapter is organised as follows. The experiment-based FRF modelling is sketched in Section 4.2. The testing is briefly outlined in Section 4.3, with attention on the setup. The Rayleigh integral re-formulation, first investigated in [36] for the numerical approximation of the sound radiation field, is recalled in Section 4.4 and is fed with the full-field experiment-based receptances obtained in the TEFFMA project. Section 4.5 deals, after notes on the meshing of the acoustic domain, with examples in the space and frequency domains of vibro-acoustic transfer functions and of pressure distributions, with special attention to the excitation signature from coloured noises and to the energy injection locations, to underline the contribution of the experiment-based full-field receptance maps to the accuracy of the radiated acoustic pressure FRFs and fields, before drawing the final conclusions.
4.2 Direct experimental modelling for full-field FRFs

The impedance-based studies by means of full-field optical measurements are the pivot of all the activities in the TEFFMA project. These were thoroughly investigated to achieve highly reliable full-field FRFs with unprecedented spatial resolutions, for any further speculation that may come either from experimental tests or from numerical models, also in hybrid frameworks, such as those of transfer path analysis [35, 37, 38]. The well-known formulation of the receptance matrix Hd(ω), as the spectral relation between displacements and forces under the assumption of noise only in the output (H1 estimator [1, 2, 39]), will be used for the full-field FRF estimation, describing the dynamic behaviour of a testing system with potentially multi-input excitation (here 2 distinct shakers) and many-output responses (here several thousand, covering the whole sensed surface), as formulated in the following complex-valued equation:
Hd_ij(ω) = [Σ_{k=1}^{N} S^k_{Xi Fj}(ω)] / [Σ_{k=1}^{N} S^k_{Fj Fj}(ω)] ∈ C,  (4.1)
where Xi is the output displacement at the i-th dof induced by the input force Fj at the j-th dof, S^k_{Xi Fj}(ω) is the k-th cross-power spectral density between input and output, S^k_{Fj Fj}(ω) is the k-th auto-power spectral density of the input, and ω is the frequency, evaluated over N repetitions. Once the specific excitation signature Fj(ω) is known in the frequency domain, the FRF formulation in Eq. 4.1 can be used to obtain the full-field displacements over the entire surface, as follows at a specific output dof i:

d_ij(ω) = Hd_ij(ω) Fj(ω) ∈ C,  (4.2)

or for the whole displacement vector d_j due to the excitation in dof j as

d_j(ω) = Hd_j(ω) Fj(ω) ∈ C.  (4.3)

The surface velocity vector v_j due to the excitation in dof j easily follows as

v_j(ω) = −iω d_j(ω) = −iω Hd_j(ω) Fj(ω) ∈ C.  (4.4)
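A minimal sketch of the H1 estimate of Eq. 4.1 for a single input/output pair, using segment-averaged spectral densities as a stand-in for the N repetitions of the actual tests; the sampling rate and segment length are placeholders:

    import numpy as np
    from scipy.signal import csd, welch

    def h1_receptance(force, disp, fs, nperseg=1024):
        """H1 receptance Hd(w) = S_XF(w) / S_FF(w), averaged over segments.

        force: input force time history at dof j; disp: output displacement at dof i."""
        f, s_xf = csd(force, disp, fs=fs, nperseg=nperseg)   # cross-PSD between input and output
        _, s_ff = welch(force, fs=fs, nperseg=nperseg)       # auto-PSD of the input
        return f, s_xf / s_ff                                # complex-valued FRF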
Increasing the spatial resolution of dynamic models by means of many-dof full-field FRFs will therefore be a valuable addition to the state-of-the-art of the design of complex systems and will surely lead to the exploration of new FRF-based quantities derived from excellent-quality displacement fields, such as surface rotations, velocities, strains and stresses, and failure point distributions [12, 13, 15, 26–28]. Furthermore, distributed loading patterns, such as those resulting from vibro-acoustics [14, 36, 40], might be considered.
4.3 The lab activities for the TEFFMA project in brief

In the well-equipped laboratory of the TU-Wien, it was possible to organise a complex setup for the comparison of three different technologies available for acquiring full-field FRFs: SLDV, Hi-Speed DIC, and Dynamic ESPI. The interested reader can find more details for TEFFMA in [29], while for the physical principles of each instrument, see instead [3, 4, 41–43]. As can be seen in Fig. 4.1a, the lab had a dedicated room with a seismic floor and an air-cushion anti-vibration optical table, to filter out the environmental vibrations. A Polytec PSV 300 was at disposal as the scanning LDV reference system, with the 1D (out-of-plane) scanning head OFV-056. The image-based full-field gear consisted of the Dantec Ettemeyer Q-500 Hi-Res, an ESPI system for 3D dynamic measurements in stroboscopic coherent laser light, and the Dantecdynamics Q-450, for 3D dynamic DIC acquisitions by means of hi-speed cameras in high-frequency and high-power white light. There were clear differences in both the spatial and frequency domains. In particular, ESPI, with the greatest spatial resolution possible (even 1 million dofs), showed very clean datasets [13, 15, 29] but required a long and cumbersome data acquisition with stepped-sine excitation. Great attention was paid to understanding the limits and requirements of all these technologies for a common
Fig. 4.1 Full-field optical measurement instruments setup in front of the specimen on the anti-vibration table with shakers on the backside: the instruments in (a), the restrained thin plate in (b), and the 2 shakers in (c).
intersection of measurement conditions within the linearity of the structural dynamics. A promising compromise was achieved in the range [20–1024] Hz, with the 3 techniques having their own spatial and frequency resolutions. The experimental setup was designed in order to let all three measurement technologies focus on the same dynamic behaviour, chosen to have a high modal density inside the frequency range of interest. A thin aluminum rectangular plate (external dimensions: 250 × 236 × 1.5 mm) was fixed by wires to a solid frame on the air-spring optical table (see Fig. 4.1b) to restrain excessive rigid body movements. The front side of the plate was sprayed with a DIC-friendly random noise pattern paint layer. The excitation was given by 2 electrodynamic shakers positioned on the back side of the plate (see Fig. 4.1c), to fulfill the stepped-sine phase-shifting acquisition procedure for ESPI with an external sine waveform generator, while LMS Test.Lab drove the shaker excitation in the SLDV and DIC measurements. The force signals were sampled at the shaker-plate interfaces by means of the force cells in the impedance heads, to calculate the receptance FRFs as in Eq. 4.1. ESPI permitted the assessment, fixing, and thus prevention of unwanted vibrations from any part of the rig up to much higher frequencies than those in the common overlapping range: this means that the output motion of the specimen was highly coherent with the input force [12, 23, 29], for the best achievable measurements. Great attention was also paid to the optical alignment of all the instruments; the depth of field of each optical setup was accurately selected to match the structural dynamics’ requirements, in terms of collected photons and dynamic range for the best image quality, thanks to the extended practice of photography of the author². The experience gained with full-field optical measurements revealed itself, together with that of photography, as pivotal in arranging the multiple camera acquisition for the photogrammetry [43] tests in [44, 45]. The interested reader can widen the understanding in [12, 13, 29], to appreciate the spatial consistency and continuity of the data, with clean shapes, sharp nodal lines, and excellent coherence, especially from ESPI. The latter was used here as the cleanest available source of experimental data for the specific purpose of evaluating vibro-acoustic FRFs and acoustic pressure maps.
4.4 Revisiting the sound pressure radiation formulation

The vibro-acoustic topic discussed here is that of the sound radiated to an infinite acoustic domain by a vibrating surface, the plate that was measured. The case is that of propagating waves [46] in an infinite medium: according to [47–51], at the a-th point of global coordinates a_a of the acoustic domain A, here just air, the sound pressure p(a_a, ω) can be defined from the Helmholtz equation as

p(a_a, ω) = (iωρ0 / 2π) ∫_S vn(q_q, ω) (e^(−ik r_aq) / r_aq) dS  (4.5)
or

p(a_a, ω) = 2iωρ0 ∫_S vn(q_q, ω) G(r_aq, ω) dS,   G(r_aq, ω) = e^(−ik r_aq) / (4π r_aq) = e^(−iω r_aq / c0) / (4π r_aq),  (4.6)
where i is the imaginary unit, ω is the angular frequency (ω = 2πh, with h being the time frequency in Hertz), ρ0 is the medium (air) density, vn(q_q, ω) is the normal (out-of-plane) velocity of the infinitesimal vibrating surface dS located at the global coordinate q_q, q representing the whole vector of coordinates of the vibrating surface S, k = ω/c0 = 2π/λ is the wavenumber in the Helmholtz equation (c0 is the speed of sound at rest in the medium, λ is the acoustic wavelength), r_aq = ‖r_aq‖ is the norm of the distance vector r_aq = a_a − q_q between the points in the two domains, and G(r_aq) is the free-space Green’s function as described in Eq. 4.6. The surface normal velocities in the frequency domain can be taken from the 3D velocities of Eq. 4.4. They are linked to the dynamic out-of-plane displacements over the static configuration q by means of the relation vn(q, ω) = −iω dn(q, ω), and are thus expressions, through dn(q, ω) = Hdnqf(ω) · F_f(ω), of the receptance FRFs Hdnqf(ω) of size Nq × Nf (Nq being the number of outputs and Nf of inputs) and of the excitation signatures F_f(ω). Equation 4.6 can therefore be rewritten taking advantage of the receptance FRFs, be they analytical, numerical, or experiment-based:

p(a_a, ω) = −2ω²ρ0 ∫_S Hdnqf(ω) F_f(ω) G(r_aq, ω) dS ∈ C,  (4.7)

with Hdnqf(ω), F_f(ω), and G(r_aq, ω) as complex-valued quantities, therefore also p(a, ω) ∈ C.

² Fine art, nature, and wildlife photography at https://www.colorazeta.it/index_EN.htm.
Note that the r_aq are normally considered constant, real-valued vectors. It might be noted, instead, that r_aq may become r̂_aq(ω) = a_a − q_q − Hdnqf(ω) · F_f(ω), thus complex-valued. Nonetheless, since the contributions Hdnqf(ω) · F_f(ω) are at least 3 orders of magnitude lower than the static distances r_aq when far enough from the sound pressure source, they can be neglected at first approximation (r̂_aq ≈ r_aq = a_a − q_q), as can the infinitesimal surface deflection that pumps in a direction slightly different from the actual normal, with clear computational advantages. If the vibrating surface domain S that scatters the sound pressure can be discretised as S ≈ Σ_q S_q, with S_q as the discrete rectangular area of the surface with the q-th point as centroid and dimensions given by an average of the distances from the neighbouring points in the grid, Eq. 4.7 can be expressed in terms of a sum of discrete contributions instead of an integral:

p(a_a, ω) ≈ −2ω²ρ0 Σ_{q=1}^{Nq} Hdnqf(ω) F_f(ω) G(r_aq, ω) S_q ∈ C.  (4.8)
Note that the accuracy of this discretisation increases with the spatial definition of the receptance FRFs, as will be proved in the following section and in [40] by means of experiment-based full-field receptances. Since G(r_aq, ω) and S_q are functions of the locations of the N_a points in the acoustic domain and of the N_q points on the radiating structure, they can be grouped into a complex-valued collocation matrix T_aq(ω), sized N_a × N_q, of single element T_aq(ω) = −2ρ0 G(r_aq, ω) S_q, to compact, by matrix algebra, Eq. 4.8 into

p(a_a, ω) ≈ ω² T_aq(ω) Hdnqf(ω) F_f(ω) ∈ C.  (4.9)
If, as with the acoustic transfer vectors in [52, 53], an acoustic transfer matrix V_aq(ω), also callable vibro-acoustic FRFs, is defined as V_aq(ω) = ω² · T_aq(ω) · Hdnqf(ω) ∈ C, the product of the rectangular collocation matrix T_aq(ω) times the receptance matrix Hdnqf(ω) and ω², Eq. 4.9 becomes

p(a_a, ω) ≈ V_aq(ω) F_f(ω) ∈ C.  (4.10)
The collocation matrix T_aq(ω) allows the estimation of the pressure field by means of matrix multiplications, which simply become bigger in the case of finer grids in both the acoustic and structural domains. Equation 4.10 is relevant when the structural response and acoustic domains are kept constant (meaning the acoustic transfer matrix V_aq(ω) is unchanged) while the excitation signature is varied, to map the effect of different structural responses on the acoustic pressure field. Note also that Eqs. 4.9–4.10 allow the calculation of the sound pressure field directly from experiment-based full-field receptances, which should help the accuracy of the results. The latter can be achieved thanks to the high spatial resolution of the full-field receptances and because these receptances, being evaluated directly from the test, retain the true blending of any complex-valued ODS active at the specific frequency, with the proper real boundaries and dissipation/damping effects of the actual vibrating structure.
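A minimal numpy sketch (not the author's code) of Eqs. 4.8–4.10 for a single frequency line; the array names are illustrative, and the medium constants follow the values quoted later in Sect. 4.5.1:

    import numpy as np

    def collocation_matrix(acoustic_pts, surface_pts, patch_areas, omega, c0=300.0, rho0=1.204):
        """T_aq(w) = -2*rho0*G(r_aq, w)*S_q (Eqs. 4.8-4.9) for one frequency line.

        acoustic_pts: (Na, 3) receiver coordinates; surface_pts: (Nq, 3) patch centroids;
        patch_areas: (Nq,) discretised areas S_q."""
        r = np.linalg.norm(acoustic_pts[:, None, :] - surface_pts[None, :, :], axis=-1)  # (Na, Nq)
        green = np.exp(-1j * (omega / c0) * r) / (4.0 * np.pi * r)   # free-space Green's function
        return -2.0 * rho0 * green * patch_areas[None, :]

    def pressure_field(T, H_nf, F, omega):
        """p(a, w) ~ w^2 * T_aq(w) * Hdnqf(w) * F_f(w) (Eq. 4.9), all at one frequency line."""
        return omega**2 * (T @ (H_nf @ F))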
4.5 Sound pressure mapping from full-field receptances

As highlighted in Section 4.4, the motion of the sound radiating surface can be modelled by means of the experiment-based full-field receptances obtained from Sections 4.2 and 4.3. In particular, from Eq. 4.10 it is clear how relevant the evaluation of the defined acoustic transfer matrix V_aq(ω) is, before the adoption of a specific excitation signature, to simulate the acoustic pressure in the acoustic domain.
4.5.1 Meshing the acoustic domain

For the proof-of-concept of what was introduced above, a simple 3D acoustic domain meshing routine was prepared. Different geometries are available for the acoustic grid, as the discrete locations of the virtual pressure receivers: a flat rectangular grid, a part of a cylindrical grid, and a part of a spherical grid. The grids can be freely positioned in the 3D space, relative to the radiating surface. The number of nodes in the grid has no specific limit, except for the computational burden, if the acoustic simulation evaluates serially, for each frequency line, the positioning vectors r_aq for the complex-valued collocation matrix T_aq(ω). As an alternative to a serialised calculation, slower but less demanding in terms of RAM allocation, a parallel computing strategy
Fig. 4.2 Example of acoustic transfer matrix or vibro-acoustic FRF graph in the frequency domain evaluated in acoustic dof 833 (a) and in dof 2064 (b) from shaker 1.
Fig. 4.3 Example of acoustic transfer matrix or vibro-acoustic FRF graph in the frequency domain evaluated in acoustic dof 833 (a) and in dof 2064 (b) from shaker 2.
was optimised³ on the computing environment available⁴ to take advantage of the common positioning vectors r_aq in the Green’s functions and in Eqs. 4.8–4.10 between the structural and acoustic domains, which may be evaluated just once and kept allocated afterwards in memory for each frequency line in the acoustic field approximations. Therefore, in this chapter, a rectangular mesh of size 0.5 m × 0.5 m was prepared, with 51 × 51 dofs (Na = 2601, 10 mm between adjacent dofs), centred on the vibrating plate and positioned 0.1 m above it, to speed up the calculations with the parallel computing routines mentioned above. The parameters of the medium (air) were fixed at c0 = 300.0 m/s and ρ0 = 1.204 kg/m³.
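A minimal sketch of such a flat receiver grid, using the 51 × 51, 0.5 m × 0.5 m, 0.1 m stand-off values quoted above; centring the plate at the origin is an assumption for illustration:

    import numpy as np

    def rectangular_receiver_grid(nx=51, ny=51, size=0.5, height=0.1):
        """Flat acoustic mesh of nx*ny virtual pressure receivers centred above the plate."""
        x = np.linspace(-size / 2, size / 2, nx)
        y = np.linspace(-size / 2, size / 2, ny)
        X, Y = np.meshgrid(x, y, indexing="ij")
        Z = np.full_like(X, height)
        return np.stack([X.ravel(), Y.ravel(), Z.ravel()], axis=-1)   # (Na, 3), Na = 2601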
4.5.2 Evaluation of the acoustic transfer matrix or vibro-acoustic FRFs

This chapter’s aim is to evaluate the acoustic transfer matrix V_aq(ω), or vibro-acoustic FRFs (as acoustic pressure over force), directly from the experiment-based receptances, as proposed in Section 4.4, without the need of any FE structural model, but with great detail and field quality, especially regarding the lack of assumptions about the real structure. As Sections 4.2 and 4.3 highlighted, accurate receptance matrices Hdnqf(ω) can be estimated with high resolution in the spatial and frequency domains from both shakers available in the TEFFMA project. In Fig. 4.2a, an example of vibro-acoustic FRF is reported as a frequency-domain relation between the force from shaker 1 and the acoustic pressure at dof 833, here selected in the rectangular acoustic mesh. Another example of a single-dof acoustic transfer matrix graph is shown in Fig. 4.2b at dof 2064, to highlight the variability of the vibro-acoustic FRF among the dofs of the acoustic mesh. The attention turns to the same dofs in Fig. 4.3, with input from shaker 2 instead. Again it is of relevance how the
³ Limited to use up to 51 × 51 nodes in the acoustic grid and 145 GB of RAM; the experiment-based receptances are obtained in a grid of 57 × 51 dofs (Nq = 2907) and 1285 frequency lines.
⁴ Based on C language OpenMP code, gcc 7.5.0 in an OpenSUSE® Linux environment, on a workstation with 192 GB of RAM and 12 physical cores in dual hexa-core Intel® Xeon® X5690 CPUs running at 3.46–3.73 GHz.
Fig. 4.4 Example of acoustic transfer matrix or vibro-acoustic FRFs’ mesh evaluated at the specific frequency of 127 Hz (in a) and of 728 Hz (in b) from shaker 1.
Fig. 4.5 Example of acoustic transfer matrix or vibro-acoustic FRFs’ mesh evaluated at the specific frequency of 284 Hz (in a) and of 916 Hz (in b) from shaker 2.
acoustic transfer matrix obtained from the experiment-based receptances preserves, with its complex-valued nature, the real-life conditions of the test, without any simplification in the damping and boundaries, and without any modal base truncation or identification, which is not needed. Instead, in Fig. 4.4 examples of the whole acoustic transfer matrix V_aq(ω) are shown over the entire acoustic mesh at specific frequencies, when shaker 1 is used; they again retain the complex-valued relations and phase delays coming from the underlying complex-valued receptance matrix Hdnqf(ω), but blended in the complex-valued summation in V_aq(ω). Note that the vibro-acoustic FRF maps are displayed here in grey tones, whereas the corresponding receptance maps, adjoined here to sketch the corresponding radiating source, use green tones, as used previously in the TEFFMA project to display ESPI-based structural datasets. Brighter tones are used for the range maximum, dark tones for the minimum, in both the acoustic and structural domains, with a linear tone grading based on the complex amplitude, here depicted along the Z-direction. An axes triad and a net of dots, at zero Z-elevation on the plate, complete the visualisation of these complex-valued fields. In Fig. 4.5, the acoustic transfer matrices are depicted when the excitation comes from shaker 2. It appears manifest how the distances between the vibrating plate and the acoustic mesh, as expected, play a relevant role in blending the pumping contributions of the specific areas in motion (with potential delays) on the radiating surface. In particular, the proximity to specific nodal lines of the simpler structural ODSs is well revealed in the vibro-acoustic FRF maps, whereas, with increasing complexity in the ODSs’ patterns, that similarity between the two domains appears somewhat faded in the final blending
of the contributions. These phenomena could be investigated further with a different acoustic mesh, rotated by 90° around the X or Y axis of the triad, so that the acoustic field is also mapped as the normal distance varies; this, however, is of interest for near-field acoustic simulations in future works, where the approximation of r_aq might also be re-discussed. It must be remarked that changing the energy injection point in the structure, here from shaker 1 to shaker 2, has a clear impact on the predicted vibro-acoustic FRF ODSs, as participation factors and driving point FRFs [1, 2] vary across the spatial field of the vibrating structure.
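Equation 4.10 is not reproduced in this excerpt, so the following is only a minimal sketch of a discretised, Rayleigh-integral-type summation of the kind the chapter builds on; all array names, the surface weighting and the fluid constants are assumptions of this sketch, not the chapter's implementation.

    import numpy as np

    def vibro_acoustic_frf(H, omega, r, dS, rho=1.204, c=343.0):
        """Approximate vibro-acoustic FRFs V_aq(omega) at the acoustic dofs.

        H     : (n_struct,) complex receptances [m/N] at one frequency line
        omega : angular frequency [rad/s]
        r     : (n_acoustic, n_struct) distances between acoustic and structural dofs [m], all > 0
        dS    : (n_struct,) radiating surface associated with each structural dof [m^2]
        """
        k = omega / c                        # acoustic wavenumber
        v = 1j * omega * H                   # structural velocity per unit force, from the receptance
        kernel = np.exp(-1j * k * r) / r     # free-field propagation term of the baffled-plate summation
        # complex-valued blending of all radiating contributions at each acoustic dof
        return 1j * omega * rho / (2.0 * np.pi) * kernel @ (v * dS)

Because the kernel and the receptances are both complex valued, the phase delays of the structural field carry through to the acoustic maps, which is the behaviour discussed above for Figs. 4.4 and 4.5.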
4.5.3 Evaluation of the acoustic pressure field with coloured noise excitation

Since this procedure is meant for specific production samples facing real-life loading conditions, the actual test spectrum needs to be used as the excitation, together with the known energy injection location. Complex-valued spectra can introduce phase-related effects, or delays, among the structural dynamics components of the system in Eq. 4.10. In the following examples, instead, an input source shaped as coloured noise was adopted at the known locations of the two shakers. The amplitude spectrum of the force excitation used here can be simply modelled as the coloured noise F_{Cα}(ω), a real-valued function of the frequency ω:

F_{C\alpha}(\omega) = \frac{F_0 \in \mathbb{R}}{\omega^{\alpha}}    (4.11)
with α ∈ [−2, 2] defining the noise colour (α = −2 for violet noise, α = −1 for blue noise, α = 0 for white noise, α = 1 for pink noise, and α = 2 for red noise) and with F_0 as the reference amplitude of the excitation force over the frequency range. No specific relative phase assumptions were made; the phases were kept at a null delay for all frequency lines by using real numbers. The use of complex-valued input spectra would change only the product in Eq. 4.10, since all the quantities are already complex-valued. Therefore, red noise strongly emphasises the lower frequency region of the receptance FRFs, white noise gives the same energy to all the frequency lines, and violet noise puts strong power into the higher frequency structural dynamics, with pink and blue noise as intermediate situations beside the neutral white noise, adopted for an equal energy balance across all frequency lines. In the proof organised here, the pink and blue noise amplitude spectra were used with F_0 = 0.01 N, with excitation from shakers 1 and 2. As can be seen in Fig. 4.6, the complex-valued acoustic pressures in dofs 833 and 2064 are depicted in the same way as their acoustic transfer functions of Fig. 4.2: their phase is unchanged, but the complex amplitude is strongly influenced by the scaling due to the adopted F_0 and by the frequency-domain shaping of the pink noise excitation as in Eq. 4.11. Specifically, as expected, the complex amplitude of the acoustic pressure decreases as the frequency increases. In Fig. 4.6, the acoustic pressure can also be seen to vary among the dofs of the acoustic mesh, showing the variability introduced by the simulated location of the receiver. A different excitation spectrum, such as blue noise, conditions the scaling of the complex amplitude of each spectral contribution differently, again without affecting the phase. Indeed, the comparison between Figs. 4.7 and 4.3 shows the acoustic pressure complex amplitude rising with frequency, as determined by Eq. 4.11, with variability among the dofs depending on their location in the acoustic mesh.
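A short sketch of Eq. 4.11 and of the scaling it implies; the frequency lines are assumed strictly positive (no DC line), and the array V_aq used at the end is a placeholder for the complex transfer functions discussed above, not data from the chapter.

    import numpy as np

    def coloured_noise_spectrum(freqs_hz, F0=0.01, alpha=1.0):
        """Real-valued coloured-noise amplitude spectrum F0 / omega**alpha.

        alpha = -2 violet, -1 blue, 0 white, 1 pink, 2 red; F0 in newtons.
        Being real valued, it scales amplitudes only and introduces no phase.
        """
        omega = 2.0 * np.pi * np.asarray(freqs_hz, dtype=float)
        return F0 / omega**alpha

    # usage: acoustic pressure spectrum at one dof = complex V_aq(omega) times the force spectrum
    # p = V_aq * coloured_noise_spectrum(freqs_hz, F0=0.01, alpha=1.0)   # pink noise example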
Fig. 4.6 Example of acoustic pressure graph in the frequency domain evaluated in acoustic dof 833 (a) and in acoustic dof 2064 (b) with pink noise excitation from shaker 1.
Fig. 4.7 Example of acoustic pressure graph in the frequency domain evaluated in acoustic dof 833 (a) and in acoustic dof 2064 (b) with blue noise excitation from shaker 2.
For the sake of compactness, no acoustic pressure maps are displayed here. The colour of the noise in Eq. 4.11 merely scales the shapes of the acoustic transfer matrix already shown in Figs. 4.4 and 4.5, without changing the phase. Again, it must be remarked that the predicted sound pressure fields vary when the energy injection point in the structure changes, here from shaker 1 to shaker 2. For the success of these experiment-based sound radiation approximations, it is therefore important that the excitations in the setup replicate those of real-life scenarios.
4.6 Conclusions

The possibilities that experiment-based full-field tools, specifically optical full-field measurements, open in vibro-acoustics were highlighted as a clear advancement of the experimental benchmarks for the design procedures of complex structures. The unprecedented mapping ability in both the spatial and frequency domains offered by experiment-based full-field receptances targets novel vibro-acoustic prediction paradigms, where the real-life structural dynamics of the radiating surface is retained in the acoustic simulation with great accuracy, without assumptions on the damping and without errors from identifications or model updating, which are not needed. Experiments to verify these sound radiation predictions are needed to give further assurance of the achieved results and to increase awareness of the capabilities of full-field measurements for the most demanding vibro-acoustic radiation predictions. Experimental full-field NVH is nonetheless on an encouraging path, paved with promising advancements.

Acknowledgments This activity is a spin-off of the Project TEFFMA—Towards Experimental Full-Field Modal Analysis, funded by the European Commission at the Technische Universitaet Wien, through the Marie Curie FP7-PEOPLE-IEF-2011 PIEF-GA-2011-298543 grant, for which the Research Executive Agency is greatly acknowledged. TU-Wien, in the person of Prof. Wassermann and his staff, are kindly acknowledged for having hosted the TEFFMA project of the author at the Schwingungs- und Strukturanalyse / Optical Vibration Measurement Laboratory. The workstation used to extensively process the datasets and code the acoustic radiation was provided by the author on his own savings.
References

1. Heylen, W., Lammens, S., Sas, P.: Modal Analysis Theory and Testing, 2nd edn. Katholieke Universiteit Leuven, Leuven (1998). ISBN 9073802-61-X
2. Ewins, D.J.: Modal Testing—Theory, Practice and Application, 2nd edn. Research Studies Press, Hertfordshire (2000)
3. Rastogi, P.K.: Optical Measurement Techniques and Applications. Artech House, Nordwood (1997)
4. Kreis, T.: Handbook of Holographic Interferometry. Wiley-VCH, Berlin (2004)
5. Zanarini, A.: Dynamic behaviour characterization of a brake disc by means of electronic speckle pattern interferometry measurements. In: Proceedings of the IDETC/CIE ASME International Design Engineering Technical Conferences & Computers and Information in Engineering Conference, Long Beach, California, USA, September 24–28, pp. 273–280. ASME (2005). Paper DETC2005-84630
6. Zanarini, A.: Damage location assessment in a composite panel by means of electronic speckle pattern interferometry measurements. In: Proceedings of the IDETC/CIE ASME International Design Engineering Technical Conferences & Computers and Information in Engineering Conference, Long Beach, California, USA, September 24–28, pp. 1–8. ASME (2005). Paper DETC2005-84631
7. Van der Auweraer, H., Steinbichler, H., Haberstok, C., Freymann, R., Storer, D., Linet, V.: Industrial applications of pulsed-laser ESPI vibration analysis. In: Proc. of the XIX IMAC, Kissimmee, FL, USA, pp. 490–496. SEM (2001)
8. Zanarini, A.: Full field ESPI measurements on a plate: challenging experimental modal analysis. In: Proceedings of the XXV IMAC, Orlando (FL) USA, Feb 19–22, pp. 1–11. SEM (2007). Paper s34p04
9. Craig, R.R.: Structural Dynamics: An Introduction to Computer Methods. John Wiley & Sons, New York (1981)
10. O’Callahan, J., Avitabile, P., Riemer, R.: System equivalent reduction expansion process (SEREP). In: Proceedings of the VII International Modal Analysis Conference, vol. 1, pp. 29–37 (1989)
11. Pogacar, M., Ocepek, D., Trainotti, F., Cepon, G., Boltezar, M.: System equivalent model mixing: a modal domain formulation. Mech. Syst. Signal Process. 177, 109239 (2022)
12. Zanarini, A.: Broad frequency band full field measurements for advanced applications: Point-wise comparisons between optical technologies. Mech. Syst. Signal Process. 98, 968–999 (2018)
13. Zanarini, A.: Chasing the high-resolution mapping of rotational and strain FRFs as receptance processing from different full-field optical measuring technologies. Mech. Syst. Signal Process. 166, 108428 (2022)
14. Zanarini, A.: Experiment-based optical full-field receptances in the approximation of sound radiation from a vibrating plate. In: Proceedings of the IMAC XLI—International Modal Analysis Conference—Keeping IMAC Weird: Traditional and Non-traditional Applications of Structural Dynamics, Austin (Texas), USA, pp. 1–13. Springer & SEM Society for Experimental Mechanics (2023). Paper ID 14650
15. Zanarini, A.: Full field optical measurements in experimental modal analysis and model updating. J. Sound Vib. 442, 817–842 (2019)
16. Zanarini, A.: On the exploitation of multiple 3D full-field pulsed ESPI measurements in damage location assessment. Proc. Struct. Integr. 37, 517–524 (2022). Paper ID 104, ICSI 2021 The 4th International Conference on Structural Integrity
17. Zanarini, A.: On the defect tolerance by fatigue spectral methods based on full-field dynamic testing. Proc. Struct. Integr. 37, 525–532 (2022). Paper ID 105, ICSI 2021 The 4th International Conference on Structural Integrity
18. Zanarini, A.: About the excitation dependency of risk tolerance mapping in dynamically loaded structures. In: Proceedings of the ISMA2022 including USD2022—International Conference on Noise and Vibration Engineering, Leuven, Belgium, September 12–14, pp. 1–15. KU Leuven (2022). Paper ID 208
19. Zanarini, A.: Introducing the concept of defect tolerance by fatigue spectral methods based on full-field frequency response function testing and dynamic excitation signature. Int. J. Fatigue 165, 107184 (2022)
20. Zanarini, A.: Risk tolerance mapping in dynamically loaded structures as excitation dependency by means of full-field receptances. In: Proceedings of the IMAC XLI—International Modal Analysis Conference—Keeping IMAC Weird: Traditional and Non-traditional Applications of Structural Dynamics, Austin (Texas), USA, pp. 1–15. Springer & SEM Society for Experimental Mechanics (2023). Paper ID 14648
21. Zanarini, A.: Fatigue life assessment by means of full field ESPI vibration measurements. In: Sas, P. (ed.) Proceedings of the ISMA2008 Conference, September 15–17, Leuven (Belgium), pp. 817–832. KUL (2008). Condition monitoring, Paper 326
22. Zanarini, A.: Full field ESPI vibration measurements to predict fatigue behaviour. In: Proceedings of the IMECE2008 ASME International Mechanical Engineering Congress and Exposition, October 31–November 6, Boston (MA) USA, pp. 165–174. ASME (2008). Paper IMECE2008-68727
23. Zanarini, A.: On the estimation of frequency response functions, dynamic rotational degrees of freedom and strain maps from different full field optical techniques. In: Proceedings of the ISMA2014 including USD2014—International Conference on Noise and Vibration Engineering, Leuven, Belgium, September 15–17, pp. 1177–1192. KU Leuven (2014). Dynamic testing: methods and instrumentation, Paper ID676
24. Zanarini, A.: On the role of spatial resolution in advanced vibration measurements for operational modal analysis and model updating. In: Proceedings of the ISMA2014 including USD2014—International Conference on Noise and Vibration Engineering, Leuven, Belgium, September 15–17, pp. 3397–3410. KU Leuven (2014). Operational modal analysis, Paper ID678
25. Zanarini, A.: Comparative studies on full field FRFs estimation from competing optical instruments. In: Proceedings of the ICoEV2015 International Conference on Engineering Vibration, Ljubljana, Slovenia, September 7–10, pp. 1559–1568. Univ. Ljubljana & IFToMM (2015). ID191
26. Zanarini, A.: Accurate FRFs estimation of derivative quantities from different full field measuring technologies. In: Proceedings of the ICoEV2015 International Conference on Engineering Vibration, Ljubljana, Slovenia, September 7–10, pp. 1569–1578. Univ. Ljubljana & IFToMM (2015). ID192
27. Zanarini, A.: Full field experimental modelling in spectral approaches to fatigue predictions. In: Proceedings of the ICoEV2015 International Conference on Engineering Vibration, Ljubljana, Slovenia, September 7–10, pp. 1579–1588. Univ. Ljubljana & IFToMM (2015). ID193
28. Zanarini, A.: Model updating from full field optical experimental datasets. In: Proceedings of the ICoEV2015 International Conference on Engineering Vibration, Ljubljana, Slovenia, September 7–10, pp. 773–782. Univ. Ljubljana & IFToMM (2015). ID196
29. Zanarini, A.: Competing optical instruments for the estimation of Full Field FRFs. Measurement 140, 100–119 (2019)
30. Zanarini, A.: On the making of precise comparisons with optical full field technologies in NVH. In: Proceedings of the ISMA2020 including USD2020—International Conference on Noise and Vibration Engineering, Leuven, Belgium, September 7–9, pp. 2293–2308. KU Leuven (2020). Optical methods and computer vision for vibration engineering, Paper ID 695
31. Avitabile, P., O’Callahan, J., Chou, C.M., Kalkunte, V.: Expansion of rotational degrees of freedom for structural dynamic modification. In: Proceedings of the 5th International Modal Analysis Conference, pp. 950–955 (1987)
32. Liu, W., Ewins, D.J.: The importance assessment of RDOF in FRF coupling analysis. In: Proceedings of the IMAC 17th Conference, Kissimmee, Florida, pp. 1481–1487 (1999). Society for Experimental Mechanics (SEM)
33. Research network: QUATTRO Brite-Euram project no: BE 97-4184. Technical report, European Commission Research Framework Programs, 1998
34. Friswell, M., Mottershead, J.E.: Finite Element Model Updating in Structural Dynamics. Solid Mechanics and Its Applications. Kluwer Academic Publishers, Springer Netherlands (1995)
35. Haeussler, M., Klaassen, S.W.B., Rixen, D.J.: Experimental twelve degree of freedom rubber isolator models for use in substructuring assemblies. J. Sound Vib. 474, 115253 (2020)
36. Zanarini, A.: On the approximation of sound radiation by means of experiment-based optical full-field receptances. In: Proceedings of the ISMA2022 including USD2022—International Conference on Noise and Vibration Engineering, Leuven, Belgium, September 12–14, pp. 1–15. KU Leuven (2022). Paper ID 207
37. Wagner, P., Huesmann, A.P., van der Seijs, M.V.: Application of dynamic substructuring in NVH design of electric drivetrains. In: Proceedings of the ISMA2020 including USD2020—International Conference on Noise and Vibration Engineering, Leuven, Belgium, September 7–9, pp. 3365–3382. KU Leuven (2020). Vehicle noise and vibration (NVH), Paper ID 369
38. Mueller, T., Haeussler, M., Sedlmair, S., Rixen, D.J.: Airborne transfer path analysis for an e-compressor. In: Proceedings of the ISMA2020 including USD2020—International Conference on Noise and Vibration Engineering, Leuven, Belgium, September 7–9, pp. 3351–3364. KU Leuven (2020). Vehicle noise and vibration (NVH), Paper ID 287
39. Bendat, J.S., Piersol, A.G.: Random Data: Analysis and Measurement Procedures, 3rd edn. John Wiley & Sons, New York (2000)
40. Zanarini, A.: On the influence of scattered errors over full-field receptances in the Rayleigh integral approximation of sound radiation from a vibrating plate. Acoustics 37pp. (2023). ISSN: 2624–599X. (Invited feature paper, accepted on August 21st 2023)
41. Allen, M.S., Sracic, M.W.: A new method for processing impact excited continuous-scan laser Doppler vibrometer measurements. Mech. Syst. Signal Process. 24(3), 721–735 (2010)
42. Di Maio, D., Ewins, D.J.: Continuous scan, a method for performing modal testing using meaningful measurement parameters; Part I. Mech. Syst. Signal Process. 25(8), 3027–3042 (2011)
43. Baqersad, J., Poozesh, P., Niezrecki, C., Avitabile, P.: Photogrammetry and optical methods in structural dynamics—a review. Mech. Syst. Signal Process. 86, 17–34 (2017)
44. Del Sal, R., Dal Bo, L., Turco, E., Fusiello, A., Zanarini, A., Rinaldo, R., Gardonio, P.: Vibration measurements with multiple cameras. In: Proceedings of the ISMA2020 including USD2020—International Conference on Noise and Vibration Engineering, Leuven, Belgium, September 7–9, pp. 2275–2292. KU Leuven (2020). Optical methods and computer vision for vibration engineering, Paper ID 481
45. Del Sal, R., Dal Bo, L., Turco, E., Fusiello, A., Zanarini, A., Rinaldo, R., Gardonio, P.: Structural vibration measurement with multiple synchronous cameras. Mech. Syst. Signal Process. 157, 107742 (2021)
46. Mas, P., Sas, P.: Acoustic source identification based on microphone array processing. Technical report, Katholieke Universiteit Leuven, Belgium, Mechanical Engineering Department, Noise & Vibration Research Group, 2004. In ISAAC 15—Course on numerical and applied acoustics, Katholieke Universiteit Leuven, Belgium, Mechanical Engineering Department, Noise & Vibration Research Group. https://www.isma-isaac.be
47. Kirkup, S.M.: Computational solution of the acoustic field surrounding a baffled panel by the Rayleigh integral method. Appl. Math. Model. 18(7), 403–407 (1994)
48. Desmet, W.: Boundary element method in acoustics. Technical report, Katholieke Universiteit Leuven, Belgium, Mechanical Engineering Department, Noise & Vibration Research Group, 2004. In ISAAC 15—Course on numerical and applied acoustics, Katholieke Universiteit Leuven, Belgium, Mechanical Engineering Department, Noise & Vibration Research Group. https://www.isma-isaac.be
49. Wind, J., Wijnant, Y., de Boer, A.: Fast evaluation of the Rayleigh integral and applications to inverse acoustics. In: Proceedings of the ICSV13, The Thirteenth International Congress on Sound and Vibration, Vienna, Austria, July 2–6, 2006, pp. 1–8. International Institute of Acoustics and Vibration (IIAV), 2006
50. Kirkup, S., Thompson, A.: Computing the acoustic field of a radiating cavity by the boundary element—Rayleigh integral method (BERIM). In: Ao, S.I., Gelman, L., Hukins, D.W.L., Hunter, A., Korsunsky, A.M. (eds.) Proceedings of the World Congress on Engineering, WCE 2007, London, UK, 2–4 July, 2007. Lecture Notes in Engineering and Computer Science, pp. 1401–1406. Newswood (2007)
51. Kirkup, S.: The boundary element method in acoustics: a survey. Appl. Sci. 9(8), 1642 (2019)
52. Gérard, F., Tournour, M., Masri, N., Cremers, L., Felice, M., Selmane, A.: Acoustic transfer vectors for numerical modeling of engine noise. Sound Vib. 36, 20–25 (2002)
53. Citarella, R., Federico, L., Cicatiello, A.: Modal acoustic transfer vector approach in a FEM–BEM vibro-acoustic analysis. Eng. Anal. Bound. Elements 31(3), 248–258 (2007)
Chapter 5
Measurements of Panel Vibration with DIC and LDV Imaged Through a Mach 5 Flow

Marc A. Eitner, Yoo-Jin Ahn, Noel T. Clemens, Jayant Sirohi, and Vikrant Palan
Abstract Optical measurement techniques such as digital image correlation (DIC) and laser Doppler vibrometry (LDV) are advantageous for measuring structural vibrations due to their non-contact nature. While these techniques are immune to electromagnetic interference, they can suffer from optical distortions due to index-of-refraction gradients associated with boundary layers, shock and expansion waves, and combustion. When the optical distortion is due to unsteady processes, such as those associated with turbulence or other time-dependent flow phenomena, these effects cannot be easily accounted for and may result in erroneous measurements of the vibrations. An exploratory test campaign was performed to characterize these errors in a flow-structure interaction experiment conducted in a Mach 5 low-enthalpy blowdown wind tunnel. The flow-induced vibration of a compliant brass panel (0.25 mm thick) was measured using 3D-DIC and LDV, by imaging the panel through the flow. Additional optical distortions were generated by a 27.5° compression ramp that was installed on the floor of the tunnel and generated a shock-induced, turbulent separated flow. Results showed that, within the uncertainty of the measurement, the LDV data were not affected by flow-induced optical distortions. LDV and DIC results generally agreed well for frequencies between 250 Hz and 2000 Hz, which contained all dominant vibration modes. The shock unsteadiness was expected to generate dynamic distortions at all locations measured through the shock. No evidence of this was found, which indicates that such distortions were below the noise floor of the 3D-DIC setup used for this experiment. In one test, the local discrepancy between DIC and LDV was larger when the panel was imaged through the shock than in a case where both systems measured the vibration upstream of the shock.

Keywords DIC · LDV · Hypersonics · Aeroelasticity · Optical distortion
5.1 Introduction

Modern high-speed wind tunnel facilities utilize a variety of non-contact, optical techniques to measure the behavior of test specimens exposed to compressible flow. Some techniques are used to measure the flow field, such as particle image velocimetry (PIV) or Schlieren imaging. Other techniques have the goal of measuring the vibration of flexible bodies, which is a common task during aeroelastic tests. These techniques include laser Doppler vibrometry (LDV) and digital image correlation (DIC), both of which have been shown to be excellent tools for vibration measurements of flexible structures [1]. In recent years, both of these techniques have been used successfully in wind tunnel experiments that investigate the aeroelastic response of compliant panels [2, 3]. Such experiments have gained increased interest in recent years, and multiple test campaigns are under way to investigate the coupling between supersonic/hypersonic flow and thin panels [4–9]. Optical measurement techniques can be affected by distortions resulting from density gradients that occur in compressible flows. When imaging a specimen through density gradients, the experimentalist needs to know if the measured data are
M. A. Eitner · Y.-J. Ahn · N. T. Clemens · J. Sirohi
Department of Aerospace Engineering and Engineering Mechanics, The University of Texas at Austin, Austin, TX, USA
e-mail: [email protected]; [email protected]; [email protected]; [email protected]
V. Palan
Polytec, Inc., Irvine, CA, USA
e-mail: [email protected]
affected by optical distortion introduced by the index-of-refraction gradients. In many experiments, these effects are assumed to be small and thus negligible, and no quantification is performed. However, several studies have investigated this effect and shown measurable distortion of the DIC data as a result of imaging through flow-induced density gradients. An important investigation of these effects was performed by Beberniss and Ehrhardt [10], who measured panel vibration in a Mach 2 flow under the influence of an impinging shock wave using DIC. They showed that strong shocks can create errors on the order of 0.1 mm, though these are primarily static in nature and thus do not affect the vibration measurements. The present study is motivated by the need to perform wind tunnel experiments on flexible bodies that are completely immersed in the flow. In many test cases, a small aeroelastic model is fixed on a sting balance and can only be imaged through shock waves generated by the model. Thus, an understanding of the level of error that can be expected from optical measurements in these scenarios can greatly benefit the design of, and expectations for, future experimental campaigns [11].
5.1.1 Flow-Induced Optical Distortion

Refraction can adversely affect optical vibration measurements in a wind tunnel. Light rays bend when they encounter density gradients that are not parallel to their path. If these density gradients are stationary and time-invariant, they affect all measurements in the same way and do not lead to dynamic errors. For example, when the light rays traverse the windows in the walls of the wind tunnel, they bend according to Snell's law. Since the tunnel walls do not move and the material properties are constant in time, the refraction will always be the same and thus can easily be accounted for. On the other hand, when the density gradients vary in space and time, they modify the path of the light rays and can lead to erroneous vibration measurements. In supersonic wind tunnels, frictional heating causes large temperature gradients in the wall boundary layers. These temperature gradients (and thus density gradients) are mostly uniform over the streamwise extent of the test section, where the object of interest is located (e.g., a thin panel installed in the floor). This means that if the optical instruments are placed directly above the panel, the light rays from the panel to the sensor are in the same direction as the density gradient and so will not be refracted. If the optical instruments image the panel from a shallow angle, then the refraction will be relatively large. A shock wave induces a large change in density over a very small distance (typically only a few mean free path lengths). This can lead to significant refraction, which is also unsteady. This unsteadiness is a feature of the physical interaction between shock waves and boundary layers and occurs commonly at low frequencies [12] (compared to the high-frequency turbulence of the flow and boundary layer). Based on the freestream velocity U∞ and the boundary layer height δ99, the low-frequency shock unsteadiness is expected to peak around 0.01U∞/δ99 ≈ 400 Hz [13]. A disturbance of the dynamic measurements at such a low frequency is important to quantify since that is a frequency range of high interest when performing structural vibration tests (most panels tested in the literature have resonant frequencies below 1 kHz).
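As a quick sanity check on the quoted ~400 Hz estimate, the scaling 0.01 U∞/δ99 can be evaluated directly; the freestream velocity and boundary-layer height used below are assumed values for a Mach 5 blowdown facility, not numbers reported by the authors.

    # back-of-the-envelope estimate of the low-frequency shock unsteadiness peak
    U_inf = 800.0      # m/s, assumed freestream velocity (not reported in the text)
    delta99 = 0.02     # m, assumed boundary-layer height (not reported in the text)
    f_shock = 0.01 * U_inf / delta99
    print(f"expected shock unsteadiness peak ~ {f_shock:.0f} Hz")   # ~400 Hz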
5.2 Experimental Setup

The experimental setup is shown in Fig. 5.1. A thin brass panel (chord = 122 mm, span = 64 mm, thickness = 0.25 mm) was inserted just upstream of a 27.5° compression ramp in the test section of the wind tunnel. The panel was enclosed on its lower side, creating a sealed cavity. The cavity pressure was controlled using a vacuum pump to set a pressure close to the low freestream pressure in the wind tunnel test section, thus reducing excessive panel bending from static pressure loading. The cavity also had a viewing window on the lower side, to allow camera access. Note that in Fig. 5.1 only the lower side of the panel has a speckle pattern on it, whereas in the actual experimental setup there was also a speckle pattern on the top side of the panel, to allow vibration measurement with DIC from above. The speckle pattern was created by attaching a thin sticker to the panel, onto which a speckle pattern had been printed. The panel was installed in the Mach 5 blowdown wind tunnel at The University of Texas at Austin and operated for up to a minute at a total pressure p0 = 2480 kPa and total temperature T0 = 445 K during this test campaign. A schematic overview of the Mach 5 test section is depicted in Fig. 5.2a. The presence of the compression ramp leads to an oblique shock that separates the boundary layer, forming a separation bubble. The flow then reattaches further downstream, just past the ramp corner. Two DIC cameras and an LDV were positioned above the tunnel. While the DIC cameras image the whole panel, the LDV is aimed at a single point. Two locations were chosen that differ in expected flow-induced distortions. The flow at the center of the panel (marked with x1) is mostly unaffected by the downstream compression ramp. The second
Fig. 5.1 Panel insert in wind tunnel floor
Fig. 5.2 Schematic of shock/boundary layer interaction; (a) Side view setup, (b) oil flow image
location near the ramp (marked with x2) lies in the region of the separation shock, and thus strong density gradients are expected when trying to image this location from above. The low-frequency unsteady motion of the shock foot is expected to result in low-frequency errors (from refraction) that manifest in the DIC vibration measurement at all locations under the shock (near the ramp). To further illustrate the flow field, an oil flow image is shown in Fig. 5.2b. Highlighted are the beginning and end of the separation bubble near the ramp.
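For orientation, the total conditions quoted above imply rather low freestream static values, consistent with the cavity pressure being pulled down close to the freestream pressure. The estimate below assumes an isentropic expansion of a calorically perfect gas (γ = 1.4) to Mach 5; the derived numbers are rough estimates, not values reported by the authors.

    # approximate freestream static conditions from the reported stagnation values
    gamma, M = 1.4, 5.0
    p0, T0 = 2480e3, 445.0                       # Pa, K (reported total conditions)
    fac = 1.0 + 0.5 * (gamma - 1.0) * M**2       # = 6 for Mach 5
    p_inf = p0 / fac**(gamma / (gamma - 1.0))    # ~4.7 kPa
    T_inf = T0 / fac                             # ~74 K
    print(f"p_inf = {p_inf/1e3:.1f} kPa, T_inf = {T_inf:.0f} K")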
5.2.1 Digital Image Correlation

The software Davis 10 by LaVision was used for calibration, image recording, and post-processing. The two cameras were a Phantom Miro M310 and Phantom Lab 310, with 105-mm lenses. The f-stop was set to 5.6, and the frame rate was 5 kHz. Calibration was performed by imaging a target plate at different vertical positions, to obtain the intrinsic and extrinsic parameters of the setup (e.g., camera positions, lens aberrations). The facet size was set to 27 × 27 pixels with a 9-pixel overlap.
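If the quoted 9-pixel overlap is read as the shared band between neighbouring facets, the facet centres are spaced 27 − 9 = 18 px apart, which fixes the spatial density of the DIC displacement grid; the image scale below is an assumed value used only for illustration.

    facet, overlap = 27, 9
    step_px = facet - overlap        # spacing between neighbouring DIC data points, in pixels
    mm_per_px = 0.12                 # assumed image scale, not reported in the text
    print(f"grid spacing ~ {step_px * mm_per_px:.1f} mm on the panel surface")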
5.2.2 Laser Doppler Vibrometer

An LDV was used for very high-precision measurements of the panel vibration. The system used was a Polytec VibroFlex QTec, which is based on multipath Doppler vibrometry [14], providing the cleanest possible, dropout-free raw data. This was important considering that the measurements were acquired through a glass window, on a black surface, without any surface preparation. Both velocity and displacement can be measured directly, based on demodulation of frequency and phase, respectively. The data were originally recorded at 10 kHz and later downsampled to 5 kHz for direct comparison with the DIC results.
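The chapter does not state which resampling scheme was used for the 10 kHz to 5 kHz step, so the sketch below shows one reasonable choice (anti-aliased decimation); the signal name is a placeholder.

    import numpy as np
    from scipy.signal import decimate

    fs_ldv, fs_dic = 10_000, 5_000
    w_ldv = np.random.randn(fs_ldv)                # placeholder 1 s LDV displacement record
    w_ldv_5k = decimate(w_ldv, fs_ldv // fs_dic)   # low-pass filter, then keep every 2nd sample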
5.3 Results and Discussion

This section is structured in the following way: First, the vibrations of the wind tunnel are analyzed, using sensors that are unaffected by optical distortions. Then, a test case is presented where the DIC system was set up below the tunnel, to measure the compliant panel vibration without flow-induced distortion effects. Next, the measured vibration of a rigid panel is investigated where the DIC and LDV image the panel through the flow. Finally, measurements are shown where both the DIC and LDV image the compliant panel through the flow.
5.3.1 Tunnel Vibrations and Other Noise

Tunnel vibrations are a significant noise source, and efforts were undertaken to quantify their effect on the DIC and LDV measurements. A laser displacement sensor (LDSm 80/10 from LMI Technologies Inc.) was mounted below the tunnel and pointed at the outer wall of the test section, to measure its vertical motion. The sensor uses optical triangulation to detect the position of an object. A single-axis accelerometer was mounted on top of the test section, to measure the vertical acceleration of the tunnel. Data were recorded at 10 kHz during multiple test runs. Initially, only the accelerometer was used, and in later tests, only the LDS was used, but no test case has data from both systems. Shown in Fig. 5.3 are the power spectral densities of the two measurements. In addition, for each sensor, a measurement was made before the tunnel run to determine the noise floor of the sensor. Note that the LDS has a much higher noise floor than the accelerometer and furthermore seems to exhibit a smaller bandwidth (only about 1 kHz). The accelerometer data were integrated twice, using the trapezoidal rule, to obtain displacement. This process makes the spectra rather smooth and adds ringing at higher frequencies, though the trend in the low frequencies seems to match between both sensors. The standard deviation of the tunnel vibration during operation was about 0.1 mm (measured with the LDS).
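A minimal sketch of the accelerometer post-processing described above: two trapezoidal integrations to displacement, followed by a PSD estimate. The detrending between the integration steps is an assumption added here to limit drift, and the signal itself is a placeholder, not measured data.

    import numpy as np
    from scipy.integrate import cumulative_trapezoid
    from scipy.signal import welch, detrend

    fs = 10_000                                     # Hz, accelerometer sampling rate
    a = np.random.randn(10 * fs)                    # placeholder acceleration record, m/s^2
    v = detrend(cumulative_trapezoid(a, dx=1/fs, initial=0.0))   # velocity, m/s
    w = detrend(cumulative_trapezoid(v, dx=1/fs, initial=0.0))   # displacement, m
    f, Pww = welch(w * 1e3, fs=fs, nperseg=4096)    # displacement PSD in mm^2/Hz, as in Fig. 5.3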
Fig. 5.3 Vertical motion of wind tunnel test section before (labeled as “noise”) and during runs from laser displacement sensor (LDS) and time integration of an accelerometer
The precision of the LDV and DIC setup was quantified by measuring the panel through the top window before the start of the wind tunnel. The standard deviation of the LDV data was 0.73 μm. The standard deviation of the out-of-plane displacement data measured with the DIC system was about 5.6 μm, with slight variations in neighboring facets.
5.3.2 Undisturbed DIC–LDV Through Flow

Two tests were performed where the DIC cameras imaged the compliant panel from below the tunnel (through the cavity with still air) and were thus not influenced by any flow-induced distortion effects. The LDV remained fixed above the tunnel for all experiments. The deformation data are compared in the frequency domain and in the time domain. Two metrics are defined to compare the LDV and DIC data. These metrics are hereafter referred to as error metrics, under the assumption that the LDV measures the true vibration; this will later be shown to be appropriate. The first error metric, the time response assurance criterion (TRAC), quantifies the similarity between two time histories. It is defined as

TRAC(w_{LDV}, w_{DIC}) = \frac{\left[ w_{LDV}^{T} w_{DIC} \right]^{2}}{\left[ w_{LDV}^{T} w_{LDV} \right]\left[ w_{DIC}^{T} w_{DIC} \right]},    (5.1)

where w_{DIC} and w_{LDV} are the two time histories from the DIC and LDV measurements, respectively. It is equivalent to the squared cross-correlation coefficient at zero time lag. The second metric is the relative error of the signal standard deviation:

\epsilon_{\mathrm{std}} = \frac{\mathrm{std}(w_{LDV}) - \mathrm{std}(w_{DIC})}{\mathrm{std}(w_{LDV})} \times 100\%.    (5.2)
Since even minute differences in the low-frequency range (or even the mean) can lead to large differences in the time histories, any error computation (or plotting) would be meaningless without filtering. This is why all data are high-pass filtered above 250 Hz before computing the errors and plotting the time histories. This also shifts the focus of the analysis to the frequency range relevant to the structural dynamics of the panel (>400 Hz). Low-frequency errors can result from sensor support vibration and are discussed separately.
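A minimal implementation of Eqs. 5.1 and 5.2 together with the 250 Hz high-pass step described above; the filter type and order are assumptions (the chapter does not specify the filter design), and the records are assumed to have equal length and sampling rate.

    import numpy as np
    from scipy.signal import butter, filtfilt

    def highpass(x, fs, fc=250.0, order=4):
        # zero-phase Butterworth high-pass; design choices are assumptions of this sketch
        b, a = butter(order, fc / (fs / 2.0), btype="highpass")
        return filtfilt(b, a, x)

    def trac(w_ldv, w_dic):
        # Eq. 5.1: squared cross-correlation coefficient at zero time lag
        return np.dot(w_ldv, w_dic) ** 2 / (np.dot(w_ldv, w_ldv) * np.dot(w_dic, w_dic))

    def std_error_pct(w_ldv, w_dic):
        # Eq. 5.2: relative error of the signal standard deviation, in percent
        return (np.std(w_ldv) - np.std(w_dic)) / np.std(w_ldv) * 100.0

    # usage with 5 kHz records:
    # wl, wd = highpass(w_ldv, 5000), highpass(w_dic, 5000)
    # print(trac(wl, wd), std_error_pct(wl, wd))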
Fig. 5.4 Comparison of measured panel vibration; DIC system below tunnel; LDV through flow. (a) Near center, high-pass-filtered. (b) Near center. (c) Near ramp, high-pass-filtered. (d) Near ramp
Figure 5.4 shows the time-domain and frequency-domain comparison of the measured data from two test runs, in which the LDV measured the panel vibration near the center and near the ramp (near locations x1 and x2 in Fig. 5.2a); excellent agreement is observed. The standard deviation of the high-pass-filtered deformation is 0.2146 mm in the case of LDV and 0.297 mm in the case of DIC, representing a relative error of 2.3%. The TRAC value is 0.985. Since DIC and LDV agree well with each other, it can be concluded that the LDV is unaffected by any flow-induced optical distortion effects. Thus, the LDV is considered a “true” reference for the purpose of this paper.
Fig. 5.5 Mean and standard deviation of displacement of a rigid panel measured using DIC through the flow. Rectangular region marked with dotted line (in b and d) was imaged through the shock wave. (a) Mean, no flow. (b) Mean, with flow. (c) Standard deviation, no flow. (d) Standard deviation, with flow
5.3.3 DIC and LDV Through Flow

Rigid Panel Tests Two test runs were performed on a rigid (25-mm-thick aluminum) panel, where the DIC and LDV measured the deformation from above through the flow. Shown in Figs. 5.5a and b are the mean deformations from DIC before and during a tunnel run. The mean values are mostly uniform in the no-flow case, which was expected. However, in the flow case, several issues can be seen. The overall panel deformation is slightly negative (away from the camera) except for the upstream edge and the centerline. Furthermore, five large positive spikes are seen along the centerline. These coincide with the locations of the pressure transducer holes in the rigid panel. The DIC sticker likely did not bond very well to the plugs that were inserted there, and the low static pressure (

>> TA = 9), not tolerable, if the excitation comes from shaker 2. Instead, when the signature of the excitation is changed to a blue noise spectrum, in dof 1947 the risk index from shaker 1 is RI1947 = 10.42 > TA = 9, which tells that the location of a potential defect is no longer safe. Only when the blue noise is injected from the other available location, shaker 2, does dof 1947 turn out to be a tolerable position for a defect, as RI1947 = 5.915 < TA = 9. Furthermore, the risk index mapping suggests a grading perspective for this defect tolerance concept, tunable by means of the threshold as the distance of the RI from the TA, even if HtFmean turns out to be absolute and not case dependent. The effectiveness of full-field FRF-based risk index mapping was markedly underlined. Promising outputs resulted from this mixing of excitation signatures and energy injection locations, confirming how the experiment-based full-field structural
Fig. 9.9 Examples of risk index mapping on the whole area, with RI = −13.01 in dof 1454 in (a) and with RI = 8.75 in dof 1947 in (b), with pink noise excitation from shaker 1. If threshold = 9, a defect in dof 1454 is tolerable, as well as a defect in dof 1947, but clearly closer to the selected threshold of acceptance, thus more dangerous in a risk grading perspective.
Fig. 9.10 Examples of risk index mapping on the whole area, with RI = −21.42 in dof 1454 in (a) and with RI = 14.69 in dof 1947 in (b), with pink noise excitation from shaker 2. If threshold = 9, a defect in dof 1454 is clearly tolerable, whereas a defect in dof 1947 is not tolerable, markedly above the selected threshold of acceptance, thus very dangerous.
dynamics is fully retained and can usefully represent the loaded structures in broad frequency band dynamics, without misleading assumptions on damping or boundaries and without modal identification errors or model tuning inconsistencies. It was sufficient to change the dynamic signature and the energy injection location of the excitation to see how the problematic areas on the sample changed, with the complete structural dynamics of the real test and the specific blending of modal shapes that appeared in the test setup, but without the approximations of a modal identification. It should therefore be underlined how strategic it becomes to run experiments in the same operative conditions as the real samples and to correctly estimate the macro-dynamic loading on the structure, as all this knowledge contributes to the final acceptance of potential defects in the components.
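An illustrative sketch of the acceptance rule discussed above: a full-field map of risk indices RI (in dB) is compared against the threshold of acceptance TA = 9 used in the examples. The RI values below are just the cases quoted in the text and captions, used as placeholders rather than a measured map.

    import numpy as np

    TA = 9.0
    RI = np.array([-13.01, 8.75, 14.69, 5.915])   # example values for dofs 1454 / 1947 quoted above
    tolerable = RI < TA                            # True where a potential defect would be acceptable
    margin_db = TA - RI                            # grading: distance from the threshold of acceptance
    for ri, ok, m in zip(RI, tolerable, margin_db):
        print(f"RI = {ri:6.2f} dB -> {'tolerable' if ok else 'NOT tolerable'} (margin {m:+.2f} dB)")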
Fig. 9.11 Examples of risk index mapping on the whole area, with RI = −23.53 in dof 1454 in (a) and with RI = 10.42 in dof 1947 in (b), with blue noise excitation from shaker 1. If threshold = 9, a defect in dof 1454 is clearly tolerable, whereas a defect in dof 1947 is not tolerable, just above the selected threshold of acceptance, thus dangerous.
Fig. 9.12 Examples of risk index mapping on the whole area, with RI = −13.10 in dof 1454 in (a) and with RI = 5.92 in dof 1947 in (b), with blue noise excitation from shaker 2. If threshold = 9, a defect in dof 1454 is clearly tolerable, as well as a defect in dof 1947, but clearly in a more solicited area, closer to the selected threshold of acceptance, in a risk grading perspective.
9.7 Conclusions

This chapter has highlighted the importance of enhanced experiment-based tools in bringing the real and complete structural dynamics of a component, over a broad frequency band and together with the excitation signature and the energy injection location, into fatigue life predictions. Furthermore, the concept of defect tolerance has been introduced, based upon the fatigue life predictions obtained from full-field experiment-based receptances. The definition of a risk index, with a related threshold of acceptance, was used to discriminate the dangerous locations of potential defects, once the real dynamic behaviour is fully retained, and not simplified, directly from real samples and without any FE model to be carefully updated.
The optical full-field measurement techniques are becoming a viable means for an experiment-based full-field risk tolerance assessment in production and working conditions, because of their ability to identify defects, their unprecedented mapping ability and their retention of a refined structural dynamics in both the frequency and spatial domains, to be coupled with proper estimations of real-life loading conditions. Furthermore, the experiment-based full-field risk tolerance concept pushes the refinement of structural modelling to cope with enhanced experiment-based achievements and benchmarks.

Acknowledgments This activity is a spin-off of the Project TEFFMA—Towards Experimental Full-Field Modal Analysis, funded by the European Commission at the Technische Universitaet Wien, through the Marie Curie FP7-PEOPLE-IEF-2011 PIEF-GA-2011-298543 grant, for which the Research Executive Agency is greatly acknowledged. TU-Wien, in the person of Prof. Wassermann and his staff, are kindly acknowledged for having hosted the TEFFMA project of the author at the Schwingungs- und Strukturanalyse/Optical Vibration Measurement Laboratory. The workstation used to extensively code the numerical tools and to process the datasets was provided by the author on his own savings.
Chapter 10
Lightweight Internal Damage Segmentation Using Thermography with and Without Attention-Based Generative Adversarial Network
Rahmat Ali and Young-Jin Cha
Abstract A lightweight internal damage segmentation network (IDSNet) (Ali and Cha, Autom Constr 141:104412, 2022) segments internal damage in concrete using active thermography. This study further investigates the performance of IDSNet for the segmentation of subsurface damage. The IDSNet consists of an intensive module and a superficial module: the intensive module focuses on contextual features and learning complex correlations, and the superficial module learns spatial features in the input. The lightweight IDSNet has only 0.085 million parameters and processes a 640 × 480 × 3 thermal image at 74 frames per second. Deep learning networks require extensive data for training; therefore, an attention-based generative adversarial network, known as AGAN (Ali and Cha, Autom Constr 141:104412, 2022), was used to generate artificial data for training IDSNet. The IDSNet was trained with and without AGAN data, achieving a 0.767 average intersection over union (aIoU) without AGAN data and a 0.891 aIoU with AGAN data.
Keywords Segmentation · Internal Damage · AI · Damage
10.1 Introduction
Surface damage such as cracks, spalling, potholes, and corrosion can be detected easily in digital images. Several previous studies have replaced the traditional manual inspection procedure with deep learning to autonomously detect surface damage [2–4]. Cha et al. [2] developed a deep convolutional neural network for concrete crack detection. The method was further extended, and a faster region-based convolutional neural network was used to detect multiple types of damage [3]. Unmanned aerial vehicles (UAVs) were used with deep learning techniques to detect concrete cracks in GPS-denied areas [5]. Moreover, the method was further extended to detect multiple damage types in a miniature steel bridge model and parkade structures using an autonomous UAV [6]. The modified faster R-CNN detected small cracks with thicknesses varying from 0.0002 m to 0.0009 m in steel sections [6]. Ali et al. (2020) also used multispectral dynamic images for tunnel concrete crack detection [7]. Beckman et al. (2019) used faster R-CNN with a depth camera to evaluate spalling [8]. Recently, pixel-wise segmentation approaches have been developed to detect pixel-level cracks. Choi and Cha (2019) developed SDDNet [9] for concrete crack segmentation in complex environments. Kang and Cha developed STRNet [10] for concrete crack segmentation; the method can detect cracks in complex scenes with a 92% aIoU at 49 frames per second [10]. Hybrid approaches were also adopted for concrete crack segmentation [11]. In addition to surface damage detection, subsurface damage detection is equally important for maintaining structural integrity. Ali and Cha [12, 13] developed a deep inception neural network (DINN) to detect delamination in steel members. Structural systems such as bridges are exposed to heavy traffic loading and harsh environmental conditions, particularly freeze-thaw cycles [14]. These factors accelerate the deterioration of structures. Internal delamination in concrete is caused by the expansion of reinforcement, which induces stresses in the concrete and leads to cracks. Several studies used devices such as ground penetrating radar [15–17] and impact echo [18] for internal damage detection. In a previous study, the internal damage segmentation network (IDSNet) was proposed for internal concrete damage detection [1], and additional training data were produced using AGAN [1]. In this study, we further investigate the performance of IDSNet on new images, with and without the additional AGAN-generated data.
R. Ali · Y.-J. Cha () Department of Civil Engineering, University of Manitoba, Winnipeg, MB, Canada e-mail: [email protected] © The Society for Experimental Mechanics, Inc. 2024 J. Baqersad, D. Di Maio (eds.), Computer Vision & Laser Vibrometry, Volume 6, Conference Proceedings of the Society for Experimental Mechanics Series, https://doi.org/10.1007/978-3-031-34910-2_10
10.2 Methodology
The IDSNet [1] was designed in a previous study for segmenting internal damage in concrete. Figure 10.1 shows the overall flow chart of the method [1]. First, data were collected from concrete slab specimens. These specimens were prepared in the laboratory, and polystyrene sheets were placed at known depths inside the concrete during casting. The data were collected using ZenMuse XT2 [19] and FLIR T650sc [20] thermal cameras. The collected video and image data were used for dataset preparation, and traditional data augmentation approaches were adopted to increase the size of the dataset. The augmented data, together with the original data, were used to train AGAN [1], and the trained AGAN was then used to generate synthetic (fake) data. The original data and the newly generated data were combined to train the IDSNet. The performance of IDSNet was then studied with and without the AGAN data, and additional data were used to test the performance of the developed IDSNet. Figure 10.2 shows the overall flowchart of the IDSNet model [1]. A thermal image of the concrete slab samples is fed into IDSNet. The IDSNet consists of convolution blocks, a superficial module, and an intensive module. The input image passes through convolution block-1 and convolution block-2, average pooling, the superficial module, and the intensive module. The superficial module and the intensive module recover the spatial size using upsampling operations.
Fig. 10.1 Flow chart [1]
Fig. 10.2 IDSNet architecture [1]
Fig. 10.3 Intensive module [1]
The bilinear upsampling operations were adopted in both the intensive module and the superficial module. The outputs of the superficial module and the intensive module are integrated using element-wise summation. Average pooling is used to decrease the spatial dimension of the features from the convolution block before they are input into the intensive module. The convolution blocks are used for low-level feature extraction from the input. The output of convolution block-1 is input to convolution block-2 and to average pooling, and the outputs of convolution block-2 and average pooling are then input to the intensive module. The output of convolution block-1 is also input into the superficial module. The intensive module consists of a convolution, one intensive convolution module, and seven residual intensive convolution modules [1]; Figure 10.3 shows its detailed diagram. The intensive module is provided to extract contextual features and learn complex correlations. The output of the intensive convolution module is input into the residual intensive convolution module. Five intensive modules process the input features, and then the outputs of the residual intensive convolution module, the intensive convolution module, and average pooling are concatenated. Different dilation ratios were adopted in the residual intensive convolution modules. The concatenated features are fed again into another residual intensive convolution module, whose output is re-input into a further residual intensive convolution module and again concatenated with its input. Figure 10.4 illustrates the intensive convolution module [1], comprising six sub-blocks (SBs). These SBs are organized into two sets of parallel SBs: SB-1, SB-2, and SB-3, and another set consisting of SB-4, SB-5, and SB-6. The intensive convolution module first reduces the input feature's spatial dimension and channel depth by half and one-third, respectively; to reduce the dimension, a convolution with a stride of 2 is used. The reduced output is then input into the first three parallel SBs. The intensive convolution module aims to diminish feature loss using depth-wise asymmetric convolution. The output of SB-1 is input into SB-4, SB-2 into SB-5, and SB-3 into SB-6. The original input dimensions are recovered by concatenating SB-4, SB-5, and SB-6 in the intensive convolution module. The dilation ratios for all the sub-blocks are presented in Table 10.1. Figure 10.5 shows the residual intensive convolution module [1] used in IDSNet. The residual intensive convolution module extracts local and global features by implementing convolution, depth-wise asymmetric convolution, and higher dilation ratios. It first reduces the channel depth by one-third and contains three parallel blocks, i.e., a convolution module, subblock-1, and subblock-2. The outputs of the simple convolution, SB-1, and SB-2 are concatenated and added to the input using a residual connection.
Fig. 10.4 Intensive convolution module [1]
Table 10.1 Dilation ratio of the subblocks in intensive convolution module

SBs     Dilation ratio     SBs     Dilation ratio
SB-1    1                  SB-4    1
SB-2    2                  SB-5    4
SB-3    3                  SB-6    7
Fig. 10.5 Residual intensive convolution module [1]
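As a rough illustration of how the dilation ratios listed in Table 10.1 (and Table 10.2 below) enter the depth-wise asymmetric convolutions, one sub-block (SB) might be sketched as follows in PyTorch. The layer ordering, normalization, and activation placement here are assumptions for illustration only and are not taken from the published IDSNet implementation.

```python
import torch.nn as nn

class SubBlock(nn.Module):
    """One dilated, depth-wise asymmetric sub-block (SB): a 1 x 3 followed by a 3 x 1
    depth-wise convolution, both using the same dilation ratio (e.g., from Table 10.1)."""
    def __init__(self, channels: int, dilation: int):
        super().__init__()
        self.layers = nn.Sequential(
            # padding = dilation keeps the spatial size unchanged for a dilated 3-tap kernel
            nn.Conv2d(channels, channels, (1, 3), padding=(0, dilation),
                      dilation=(1, dilation), groups=channels),
            nn.Conv2d(channels, channels, (3, 1), padding=(dilation, 0),
                      dilation=(dilation, 1), groups=channels),
            nn.BatchNorm2d(channels),
            nn.PReLU(channels),
        )

    def forward(self, x):
        return self.layers(x)
```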
Three residual intensive convolution modules with different dilation ratios were used in IDSNet. The dilation ratios of the residual intensive convolution module sub-blocks are presented in Table 10.2. The superficial module first reduces the channel depth by half using a convolution. The output is input into two parallel depth-wise asymmetric convolutions, consisting of 1 × 3 and 3 × 1 convolutions, which reduce the computational cost while achieving the same level of accuracy. Batch normalization and a parametric rectified linear unit are applied after each layer. The outputs of both parallel layers are added, and a point-wise convolution is employed; the point-wise convolution retains the original shape. A residual connection is used to add the input feature to the resized output, as shown in Fig. 10.6.
Table 10.2 Dilation ratio of the subblocks in residual intensive convolution module

Residual intensive convolution module     SBs      Dilation ratio
Module 1                                  SB-1     3
Module 1                                  SB-2     4
Module 2                                  SB-1     3
Module 2                                  SB-2     5
Module 3                                  SB-1     4
Module 3                                  SB-2     8
Fig. 10.6 Superficial module [1]
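A hedged PyTorch sketch of the superficial module just described might look like the following; the channel bookkeeping and exact layer order are assumptions rather than the authors' released code.

```python
import torch.nn as nn

class SuperficialModule(nn.Module):
    """Sketch of the superficial module: halve the channels, run two parallel depth-wise
    asymmetric convolutions (1 x 3 and 3 x 1), sum them element-wise, restore the channel
    depth with a point-wise convolution, and add the input through a residual connection."""
    def __init__(self, channels: int):
        super().__init__()
        half = channels // 2
        self.reduce = nn.Sequential(nn.Conv2d(channels, half, 1),
                                    nn.BatchNorm2d(half), nn.PReLU(half))
        self.branch_1x3 = nn.Sequential(
            nn.Conv2d(half, half, (1, 3), padding=(0, 1), groups=half),
            nn.BatchNorm2d(half), nn.PReLU(half))
        self.branch_3x1 = nn.Sequential(
            nn.Conv2d(half, half, (3, 1), padding=(1, 0), groups=half),
            nn.BatchNorm2d(half), nn.PReLU(half))
        # point-wise (1 x 1) convolution restores the original channel depth
        self.expand = nn.Conv2d(half, channels, 1)

    def forward(self, x):
        y = self.reduce(x)
        y = self.branch_1x3(y) + self.branch_3x1(y)   # element-wise summation of parallel branches
        return x + self.expand(y)                     # residual connection with the input feature
```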
10.3 Analysis
This section presents the performance of IDSNet [1] with and without AGAN. Figure 10.7 shows the input image, the ground truth, the result of IDSNet without AGAN, and the result of IDSNet with AGAN. Four images were input into the model to test the performance with and without AGAN training. In Fig. 10.7a, an image with no internal damage was tested on the trained models; both IDSNet with AGAN and without AGAN successfully output a clean image without any segmentation. Figure 10.7b shows the results for a slab with internal damage in the concrete; there is a slight difference in the performance of IDSNet with and without AGAN, as shown in Fig. 10.7a and b. Figure 10.7c shows the input image of a column with internal damage, for which a difference in the performance of IDSNet with and without AGAN was observed. Table 10.3 presents the scores on the additional testing images. The IDSNet without AGAN attained an average intersection over union (aIoU) of 0.767, whereas with AGAN it attained an aIoU of 0.891. The IDSNet with AGAN also achieved a higher F1-score, precision, and recall.
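For reference, the pixel-wise scores reported in Table 10.3 can be computed from binary prediction and ground-truth masks as in the generic sketch below; this is the standard definition of the metrics, not the authors' evaluation code. Averaging the IoU over the test images yields the aIoU values quoted above.

```python
import numpy as np

def segmentation_scores(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-12) -> dict:
    """pred, gt: boolean masks of the same shape (True = internal-damage pixel)."""
    tp = np.logical_and(pred, gt).sum()      # true positives
    fp = np.logical_and(pred, ~gt).sum()     # false positives
    fn = np.logical_and(~pred, gt).sum()     # false negatives
    iou = tp / (tp + fp + fn + eps)
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    f1 = 2 * precision * recall / (precision + recall + eps)
    return {"IoU": iou, "precision": precision, "recall": recall, "F1": f1}
```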
10.4 Conclusion
The IDSNet-based internal damage segmentation method [1] was further tested to measure its performance on additional data. Thermal images of intact and internally damaged structures were collected using a thermal camera and input into IDSNet. The data were tested on IDSNet trained with and without AGAN, and the results were compared. It was found that IDSNet with and without AGAN showed similar performance when tested on the intact image, whereas a difference in performance was observed when the models were tested on thermal images with internal damage. The IDSNet with AGAN attained an aIoU of 0.891 and a recall of 0.934, while the IDSNet without AGAN achieved an aIoU of 0.767, an F1-score of 0.805, and a recall of 0.811.
Fig. 10.7 IDSNet with and without AGAN results
Table 10.3 IDSNet with and without AGAN

                        F1-score    Precision    Recall    aIoU
IDSNet without AGAN     0.805       0.801        0.811     0.767
IDSNet with AGAN        0.938       0.947        0.934     0.891
Acknowledgments The research presented in this paper was supported by the Canada Foundation for Innovation Grant (CFI JELF Grant No. 37394).
References 1. Ali, R., Cha, Y.J.: Attention-based generative adversarial network with internal damage segmentation using thermography. Autom. Constr. 141, 104412 (2022) 2. Cha, Y.J., Choi, W., Büyüköztürk, O.: Deep learning-based crack damage detection using convolutional neural networks. Comput. Aided Civ. Inf. Eng. 32(5), 361–378 (2017) 3. Cha, Y.J., Choi, W., Suh, G., Mahmoudkhani, S., Büyüköztürk, O.: Autonomous structural visual inspection using region-based deep learning for detecting multiple damage types. Comput. Aided Civ. Inf. Eng. 33(9), 731–747 (2018) 4. Ali, R., Gopal, D.L., Cha, Y.J.: Vision-based concrete crack detection technique using cascade features. In: Sensors and Smart Structures Technologies for Civil, Mechanical, and Aerospace Systems 2018 (Vol. 10598, pp. 147–153). SPIE (2018, March) 5. Kang, D., Cha, Y.J.: Autonomous UAVs for structural health monitoring using deep learning and an ultrasonic beacon system with geo-tagging. Comput. Aided Civ. Inf. Eng. 33(10), 885–902 (2018) 6. Ali, R., Kang, D., Suh, G., Cha, Y.J.: Real-time multiple damage mapping using autonomous UAV and deep faster region-based neural networks for GPS-denied structures. Autom. Constr. 130, 103831 (2021) 7. Ali, R., Zeng, J., Cha, Y.J.: Deep learning-based crack detection in a concrete tunnel structure using multispectral dynamic imaging. In: Smart Structures and NDE for Industry 4.0, Smart Cities, and Energy Systems (Vol. 11382, pp. 12–19). SPIE (2020, April) 8. Beckman, G.H., Polyzois, D., Cha, Y.J.: Deep learning-based automatic volumetric damage quantification using depth camera. Autom. Constr. 99, 114–124 (2019) 9. Choi, W., Cha, Y.J.: SDDNet: Real-time crack segmentation. IEEE Trans. Ind. Electron. 67(9), 8016–8025 (2019) 10. Kang, D.H., Cha, Y.J.: Efficient attention-based deep encoder and decoder for automatic crack segmentation. Structural Health Monitoring, 14759217211053776 (2021) 11. Kang, D., Benipal, S.S., Gopal, D.L., Cha, Y.J.: Hybrid pixel-level concrete crack segmentation and quantification across complex backgrounds using deep learning. Autom. Constr. 118, 103291 (2020) 12. Ali, R., Cha, Y.J.: Subsurface damage detection of a steel bridge using deep learning and uncooled micro-bolometer. Constr. Build. Mater. 226, 376–387 (2019) 13. Ali, R.: Deep learning-and infrared thermography-based subsurface damage detection in a steel bridge (Master’s thesis) (2019) 14. Gucunski, N., National Research Council: Nondestructive testing to identify concrete bridge deck deterioration. Transportation Research Board (2013) 15. Kuchipudi, S.T., Ghosh, D., Gupta, H.: Automated Assessment of Reinforced Concrete Elements using Ground Penetrating Radar. Autom. Constr. 140, 104378 (2022) 16. Zhang, J., Lu, Y., Yang, Z., Zhu, X., Zheng, T., Liu, X., et al.: Recognition of void defects in airport runways using ground-penetrating radar and shallow CNN. Autom. Constr. 138, 104260 (2022) 17. Li, Y., Liu, C., Yue, G., Gao, Q., Du, Y.: Deep learning-based pavement subsurface distress detection via ground penetrating radar data. Autom. Constr. 142, 104516 (2022) 18. Dorafshan, S., Azari, H.: Evaluation of bridge decks with overlays using impact echo, a deep learning approach. Autom. Constr. 113, 103133 (2020) 19. DJI thermal camera. Available at: https://www.dji.com/ca/zenmuse-xt2 20. FLIR T650sc Available at: https://www.tequipment.net/FLIR/T650sc-25-15/Scientific-Thermal-Imagers/
Chapter 11
A Novel Framework for the Dynamic Characterization of Civil Structures Using 3D Terrestrial Laser Scanners
Khalid Alkady, Christine E. Wittich, and Richard L. Wood
Abstract Despite the success of traditional structural health monitoring (SHM) techniques that rely on discrete sensors, several challenges still exist: (1) structures of interest need to be accessed for instrumentation, which is not always feasible due to complex site conditions; (2) the reliability of the results is highly dependent on the locations and the limited number of sensors mounted on the structure. To overcome these challenges, there is a critical need for non-contact monitoring techniques (e.g., laser scanners). While there has been substantial research conducted on the use of terrestrial laser scanners (TLS) (i.e., ground-based LiDAR) in monitoring the static deformation of civil structures, there have been only a few studies on the use of TLS in monitoring the dynamic vibrations of structures, which is critical information for SHM. Therefore, the main objective of this study is to develop a novel end-to-end framework to monitor the dynamic vibrations of structures using a TLS operating in the helical mode, where the TLS is operating at a fixed horizontal angle. To accomplish this goal, an extensive experimental study was conducted to investigate the effect of structure-based parameters on the robustness of TLS-based dynamic monitoring. The key parameter investigated in this chapter is the natural frequency of the specimen. A reconfigurable steel tower and weight plates were used to vary the dynamic structural parameters in this experiment. Accelerometers and infrared-based sensors were used in testing for the validation of TLS measurements. A novel spatiotemporal framework was developed to extract the dynamic vibrations of the structure of interest from the helical point clouds. The framework utilizes the density-based spatial clustering of applications with noise (DBSCAN) and change detection algorithms. The results show that the TLS can detect sub-millimeter structural vibrations. Furthermore, the dynamic response and characteristics extracted by the TLS framework closely match those of the accelerometers and infrared-based sensors. Hence, the results indicate the great potential of using TLS in monitoring the dynamic response of structures remotely. Keywords Terrestrial laser scanners · Helical scans · Remote sensing · Operational modal analysis · Spatial clustering
11.1 Introduction
Civil infrastructure systems in the United States are heavily deteriorated, and nearly $2.6 trillion is needed over the next 10 years to maintain the country's infrastructure in a state of good repair [1]. Hence, to reduce this massive repair and maintenance bill in the long run, there is a critical need to develop robust structural health monitoring (SHM) solutions capable of accurately detecting and quantifying structural deterioration at an early stage. The early diagnosis and repair of structural damage can significantly reduce long maintenance backlogs and prolong the service life of the nation's infrastructure. Vibration-based SHM methods (i.e., operational modal analysis (OMA)) are commonly used to calibrate and update the finite element models of civil structures, which provides valuable information on the structure's condition in the field [2]. Furthermore, structural damage can be detected based on the modal characteristics extracted from the field-measured vibrations. However, the reliability of these vibration-based methods is highly dependent on the number and locations of the sensors installed on the structure (i.e., the spatial resolution of the field monitoring) [3]. Hence, it becomes very challenging to conduct full-field monitoring of structures using traditional wire-based sensors due to the limited number of sensors and the large scale and complex geometry of civil structures. To this end, remote sensing technologies (i.e., laser scanners) can be leveraged to provide full-field monitoring data of civil structures for more robust model characterization and updating, and for damage detection analysis [4, 5].
K. Alkady () · C. E. Wittich · R. L. Wood Department of Civil and Environmental Engineering, College of Engineering, University of Nebraska-Lincoln, Lincoln, NE, USA e-mail: [email protected] © The Society for Experimental Mechanics, Inc. 2024 J. Baqersad, D. Di Maio (eds.), Computer Vision & Laser Vibrometry, Volume 6, Conference Proceedings of the Society for Experimental Mechanics Series, https://doi.org/10.1007/978-3-031-34910-2_11
Terrestrial laser scanners (TLS) have been extensively used in monitoring the long-term deformation of large civil structures due to the high spatial resolution of the point clouds generated by the TLS [6, 7]. For instance, the long-term deformations of dams, bridges, and landslides were accurately quantified by comparing point clouds collected at different epochs to a reference point cloud using change detection algorithms (i.e., iterative closest point (ICP) and Multiscale Model to Model Cloud Comparison (M3C2)) and surface reconstruction methods [6–11]. Despite the great success of TLS in monitoring static deformations, there have been only a few case studies that explored the use of TLS in monitoring the dynamic deformations of civil structures. Jatmiko and Psimoulis [12] monitored the dynamic response of a high-rise steel structure using a TLS operating at a fixed horizontal angle (i.e., helical scan mode). Although the study yielded promising results with respect to quantifying structural vibration, several research gaps remain that need to be addressed to expand the use of TLS in dynamic SHM applications. First, system identification results must be validated with respect to traditional, reliable sensing modalities and evaluated with respect to quality and resolution. Second, the limitations of the TLS approach must be evaluated across a broad range of dynamic properties that represent typical civil infrastructure. Third, an end-to-end framework for spatially clustering and processing the helical point clouds autonomously must be developed for scalability. This project aims to address these gaps through the following objectives: (1) investigate the robustness of TLS-based dynamic monitoring across a range of structures with various dynamic characteristics; (2) validate TLS results using infrared-based sensors and accelerometers; and (3) develop a novel spatio-temporal framework to autonomously extract the dynamic vibrations of the structure of interest from the helical point clouds.
11.2 LiDAR Framework for Dynamic Response Extraction
In this project, a FARO Focus3D X 130 laser scanner mounted on a helical adapter was used to acquire the dynamic response of a reconfigurable experimental specimen, as shown in Fig. 11.1a. The helical adapter allows the scanner's mirror to rotate vertically only, at a fixed horizontal angle, which facilitates the continuous detection of the specimen's profile with a high spatial resolution during the scan without time delay. Typically, a helical scan file consists of a series of sequentially time-stamped 2D point clouds (i.e., scanlines) of its field of view, whose number corresponds to the total number of complete revolutions of the scanner's vertical mirror. The time-stamped scanlines are extracted from the original helical scan file using the open-source FARO Open software. The scanline files store the time stamp (i.e., automation time) of each data point, acquired with a time resolution of 1 μs, as well as the position of each data point in Cartesian coordinates, where Y is the horizontal distance and Z is the height, as shown in Fig. 11.1b.
Fig. 11.1 (a) Experimental setup and (b) helical point cloud
Fig. 11.2 Framework for processing helical scans
A novel spatio-temporal algorithm was developed to process the scanline files to extract the structural vibration information. As Fig. 11.2 shows, the algorithm starts with spatially clustering the 2D point clouds based on the point cloud’s spatial density using the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm. The DBSCAN algorithm was chosen as it is suitable and robust for large spatial databases with noise, which applies to this study [13]. Then, each of the point cloud’s clusters is further grouped into voxels of size 5 cm. The spatial clustering and voxelization steps are necessary to create a higher level of data representation for the 2D point clouds, which facilitates the automation and scalability of the proposed algorithm. To extract the dynamic response of each voxel (i.e., change detection), the median of the points in a voxel at any time “t” is compared to the corresponding point in the reference frame (i.e., 2D point cloud at time t = 0) to quantify the dynamic motion that occurred. Only the medians of the voxels were tracked in the change detection step, rather than all of the voxel points, to make the dynamic motion extracted more robust to the scanner’s noise.
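A minimal sketch of this clustering, voxelization, and change-detection chain is given below, assuming each scanline is an (N, 2) array of (Y, Z) coordinates. The eps and min_samples values, the selection of cluster 0 as the structure of interest, and the 5 cm voxel size are illustrative placeholders, not the calibrated settings of the study.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def voxel_displacement_histories(scanlines, eps=0.05, min_samples=10, voxel=0.05):
    """scanlines: list of (N, 2) arrays of (Y, Z) points, one per mirror revolution.
    Returns horizontal (Y) displacement histories per vertical voxel, relative to frame 0."""
    ref = scanlines[0]
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(ref)
    structure = ref[labels == 0]                      # keep one spatial cluster (the specimen)
    z_keys = np.unique(np.floor(structure[:, 1] / voxel))

    # reference median Y position of every voxel in the first (t = 0) scanline
    ref_median = {k: np.median(structure[np.floor(structure[:, 1] / voxel) == k, 0])
                  for k in z_keys}

    history = {k: [] for k in z_keys}
    for frame in scanlines:
        bins = np.floor(frame[:, 1] / voxel)
        for k in z_keys:
            pts = frame[bins == k, 0]
            # track the voxel median (not every point) to stay robust to scanner noise
            history[k].append(np.median(pts) - ref_median[k] if pts.size else np.nan)
    return {k: np.asarray(v) for k, v in history.items()}
```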
11.3 Experimental Program
To accomplish the project's objectives, an extensive experimental study was executed to investigate the effect of structure-based parameters (i.e., natural frequency) on the robustness of the proposed TLS-based dynamic monitoring framework. A 2.45-m tall reconfigurable HSS 101.6 × 101.6 × 9.53 mm steel tower was used as the specimen in this study, where weight plates were used to vary the dynamic properties of the steel tower during testing, as shown in Fig. 11.1a. Six single-degree-of-freedom (SDOF) configurations of the steel tower with unique natural frequencies were considered in the experimental program; the SDOFs' natural frequencies range from 3.6 Hz to 7.8 Hz. For each SDOF configuration, the scanner was used to monitor the free vibration response of the specimen for approximately 360 seconds. During each test, the specimen was subjected to an impulse load from a mallet at the top level approximately every 60 seconds for excitation.
Fig. 11.3 Sample time history of the vibration at the top of the steel tower under excitation
Fig. 11.4 (a) Sample power spectral density plot. (b) Summary results of the first-mode natural frequencies estimated from the accelerometers, Optotrak, and LiDAR data
In all the tests, the scanner was at a distance of 8 m from the tower, and the scan's resolution and sampling frequency were 4268 points per scanline and 48 Hz, respectively. The scanner's sampling frequency corresponds to the total number of revolutions of the vertical mirror per second, which is inversely related to the resolution. To validate the results of the TLS-based dynamic monitoring framework, PCB 352C34 accelerometers and an Optotrak Certus motion capture system (i.e., infrared-based sensors) were used in the experiments. The PCB 352C34 model has a sensitivity of (±5%) 100 mV/g, a measurement range of ±50 g, and a frequency range of (±5%) 0.5 Hz – 10 kHz. The accelerometers and Optotrak markers were mounted on the specimen at three levels (i.e., 1.25, 1.82, and 2.45 m).
11.4 Analysis and Discussion
The main goal of this section is to compare the LiDAR results to those of the accelerometers and the Optotrak for validation. Figure 11.3 shows an example of the tower displacement detected by the LiDAR and the Optotrak during a typical test. The plot indicates that the LiDAR and Optotrak displacements are in strong agreement: although the LiDAR displacements are noisier than those of the Optotrak, the signals are synchronized in time. In addition, it should be noted that both the LiDAR and the Optotrak measured similar amplitudes at the beginning of each excitation, which demonstrates the accuracy of the LiDAR dynamic measurements at higher levels of excitation and its capability of detecting sub-millimeter displacements. Furthermore, the natural frequencies of each of the six SDOF configurations considered in the experiments were extracted from the LiDAR, Optotrak, and accelerometer data for comparison and further validation of the TLS-based approach. Figure 11.4a shows an example power spectral density (PSD) from one test, where the natural frequency determined via LiDAR is nearly identical to those obtained via the Optotrak and accelerometers.
Also, Fig. 11.4b plots the natural frequencies determined via the LiDAR and Optotrak versus those determined via the accelerometers. This plot includes a dashed 1-to-1 line for reference, which highlights the strong agreement between the sensing modalities.
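As an illustration of how the natural frequencies in Fig. 11.4 can be obtained from such displacement histories, a Welch power spectral density estimate can be used; the segment length below is an assumption, not the setting used in the study.

```python
import numpy as np
from scipy.signal import welch

def dominant_frequency(displacement: np.ndarray, fs: float = 48.0):
    """Estimate the dominant (first-mode) frequency of a voxel displacement history
    sampled at the 48 Hz scan rate reported above."""
    x = np.nan_to_num(displacement - np.nanmean(displacement))  # detrend and drop NaNs
    freqs, psd = welch(x, fs=fs, nperseg=1024)
    return freqs[np.argmax(psd)], freqs, psd
```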
11.5 Conclusion
Robust remote sensing frameworks are needed to address the limitations of traditional SHM techniques that typically rely on wired sensors to monitor the dynamic response of civil structures. In this study, a novel end-to-end framework was developed to monitor the dynamic response of civil structures using a TLS operating in the helical mode. The proposed framework was used to monitor the free vibration of six unique SDOF configurations in a controlled laboratory environment. To validate the TLS results, the Optotrak Certus motion capture system and high-sensitivity accelerometers were incorporated. In addition, a novel end-to-end spatio-temporal framework was developed to autonomously extract the structural vibrations from the helical point clouds of the structures of interest. The analysis shows that the TLS results (i.e., natural frequencies and displacements) strongly agree with those of the accelerometers and the Optotrak across all SDOF configurations considered in the experiments. Furthermore, the proposed TLS-based framework was capable of detecting sub-millimeter structural vibrations, which emphasizes the potential of TLS for conducting accurate full-field dynamic monitoring of civil structures.
Acknowledgments The authors gratefully acknowledge the support of the Mid-America Transportation Center for funding this research project. In addition, the authors thank Mr. Peter Hilsabeck for assistance in the laboratory.
References 1. The ASCE’s 2021 Report Card for America’s Infrastructure, American Society of Civil Engineers, http://infrastructurereportcard.org. Last accessed 14 Oct 2022 2. Doebling, S.W., Farrar, C.R., Prime, M.B.: A summary review of vibration-based damage identification methods. Shock Vibrat. Digest. 30(2), 91–105 (1998) 3. Fan, W., Qiao, P.: Vibration-based damage identification methods: a review and comparative study. Struct. Health Monit. 10(1), 83–111 (2011) 4. Friswell, M., Mottershead, J.E.: Finite Element Model Updating in Structural Dynamics, 1st edn. Springer, Dordrecht (1995) 5. Ewins, D.J.: Modal Testing: Theory, Practice and Application, 2nd edn. John Wiley & Sons (2009) 6. Vezočnik, R., Ambrožič, T., Sterle, O., Bilban, G., Pfeifer, N., Stopar, B.: Use of terrestrial laser scanning technology for long term high precision deformation monitoring. Sensors. 9(12), 9873–9895 (2009) 7. Mukupa, W., Roberts, G.W., Hancock, C.M., Al-Manasir, K.: A review of the use of terrestrial laser scanning application for change detection and deformation monitoring of structures. Surv. Rev. 49(353), 99–116 (2017) 8. Lague, D., Brodu, N., Leroux, J.: Accurate 3D comparison of complex topography with terrestrial laser scanner: application to the Rangitikei canyon (NZ). ISPRS J. Photogramm. Remote Sens. 82, 10–26 (2013) 9. Schäfer, T., Weber, T., Kyrinovič, P., Zámečniková, M.: Deformation measurement using terrestrial laser scanning at the hydropower station of Gabčíkovo. In: Proceedings of the INGEO 2004 and FIG Regional Central and Eastern European Conference on Engineering Surveying, Bratislava, Slovakia (2004) 10. Sarti, P., Vittuari, L., Abbondanza, C.: Laser scanner and terrestrial surveying applied to gravitational deformation monitoring of large VLBI telescopes’ primary reflector. J. Surv. Eng. 135(4), 136–148 (2009) 11. Teza, G., Galgaro, A., Zaltron, N., Genevois, R.: Terrestrial laser scanner to detect landslide displacement fields: a new approach. Int. J. Remote Sens. 28(16), 3425–3446 (2007) 12. Jatmiko, J., Psimoulis, P.: Deformation monitoring of a steel structure using 3D Terrestrial Laser Scanner (TLS). In: Proceedings of the 24th International Workshop on Intelligent Computing in Engineering, pp. 10–12, Nottingham, UK (2017) 13. Ester, M., Kriegel, H.P., Sander, J., Xu, X.: A density-based algorithm for discovering clusters in large spatial databases with noise. In: Proceedings of the 2nd International Conference on Knowledge Discovery and Data Mining, pp. 226–231, AAAI Press, Portland (1996)
Chapter 12
Orthorectification for Dense Pixel-Level Spatial Calibration for Video-Based Structural Dynamics
David Mascareñas and Andre Green
Abstract Video-based structural dynamics techniques have shown great promise for applications such as monitoring the structural health of critical infrastructure such as locks and dams. Full-field approaches that make use of direct methods such as optical flow have the added attractive quality that they have demonstrated an ability to detect small damage on account of the high spatial density of pixels associated with imager measurements. For the case of inspecting critical infrastructure such as locks and dams, deformation measurements have also been shown to have utility. Imagers can be used to measure deformation; however, for the case of a complex 3D scene, every pixel can potentially have a different sensitivity to deformation on account of the perspective transformation associated with pinhole camera photography. Telecentric lenses can be used to avoid perspective projection effects; however, telecentric lenses are large, expensive, and can only observe an area equal in size to their aperture, greatly reducing their suitability for infrastructure inspection applications. In this work, we adapt techniques for orthorectification to attempt to address the issue of calibrating individual pixel deformation measurements across a scene. Orthorectification typically requires height map information in the direction normal to the plane used to generate the orthophoto. The emergence of sensors such as time-of-flight imagers with high spatial resolution has made collecting these measurements more accessible for terrestrial infrastructure inspection applications. We demonstrate the ability to fuse subpixel motion measurements captured using a perspective camera with 3D geometry data such as that captured using a time-of-flight imager. This work focuses on the case of structures exhibiting planar geometry. We then show how these techniques impact the ability to perform video-based structural dynamics analysis. Keywords Video · Point cloud · 3D geometry · Full-field · Computer graphics
12.1 Introduction There are two primary approaches to estimating motion from video: feature-based and direct. The feature-based approach is focused on finding areas of images that exhibit good correspondence and performing subsequent motion estimation based on the dynamics of these areas of the image. An overview of the feature-based approach can be found in [1]. The feature-based approach can deal with both large and subpixel motion; however, the features typically do not have high spatial resolution in comparison to the direct approach. The direct approach consists of methods for motion estimation that take information from image quantities at each pixel and use them to recover unknown parameters associated with the image. Optical flow is an example of a direct method. A review of direct methods can be found in [2]. Direct methods inherently are more appropriate for addressing smaller motion; however, they can be adapted to larger motion using pyramidal multi-resolution approaches [3]. A strength of direct approaches is their high spatial resolution resulting from the use of information captured at every pixel. The full-field video-based structural dynamics technique used in this work makes use of direct methods for motion estimation [4]. Typically, phase-based optical flow has been used for pixel-level motion estimation. One of the attractive properties of this approach is that it enables high-resolution estimates of structural mode shapes which can potentially be used to detect small damage in structures [5]. However, one issue with this approach is that a typical perspective projection camera observing an arbitrarily complex 3D scene results in an image which does not have a consistent definition of direction for vertical and horizontal across the image plane. This inconsistency leads to challenges when using phase-based optical flow
D. Mascareñas () · A. Green Los Alamos, NM, USA e-mail: [email protected] © The Society for Experimental Mechanics, Inc. 2024 J. Baqersad, D. Di Maio (eds.), Computer Vision & Laser Vibrometry, Volume 6, Conference Proceedings of the Society for Experimental Mechanics Series, https://doi.org/10.1007/978-3-031-34910-2_12
for more complicated 3D scenes. In addition, the current techniques for measurement of structural dynamics from video only estimate mode shapes and modal coordinates. The current technique does not estimate the absolute value of deformation. In this work, we begin developing techniques that make use of orthorectification geometric correction [6, 7] to address the challenge of an inconsistent definition of horizontal and vertical across the image plane. The proposed techniques also point toward a path of being able to enable pixel-level calibration for deformation measurement purposes.
12.2 Background
12.2.1 Mode Shape Extraction
For videos of dynamic structures, the change in intensity of a pixel may be expressed as a function of the physical vibration of the structure, I(x + δ(x, t)), where x is the spatial variable and δ(x, t) is the spatially local vibration through time. Motion is captured by the amplitudes and local phases of the image measurements, which can be extracted using spatially multi-scale, localized complex steerable pyramid filters [3]; this method of motion measurement has been demonstrated to provide approximations of motion that are robust to changes in perspective, surface conditions, and scene illumination. Full-field, high-resolution mode shapes may be extracted in an output-only manner from a video of a vibrating structure. An overview of this technique, based on principal component analysis for dimensionality reduction and blind source separation for modal identification, was originally published in [4], and further details may be found in [8].
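The dimensionality-reduction and blind-source-separation steps can be sketched as follows, assuming the per-pixel phase (motion) time series have already been extracted with the steerable pyramid filters; the FastICA call is a stand-in for whichever BSS algorithm is used in [4, 8].

```python
import numpy as np
from sklearn.decomposition import FastICA

def identify_modes(phase_series: np.ndarray, n_modes: int = 3):
    """phase_series: (n_pixels, n_frames) matrix of local phase/motion time series.
    Returns full-field mode shapes (n_pixels, n_modes) and modal coordinates (n_modes, n_frames)."""
    X = phase_series - phase_series.mean(axis=1, keepdims=True)
    # principal component analysis (via SVD) for dimensionality reduction
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    eta = np.diag(S[:n_modes]) @ Vt[:n_modes]          # reduced (principal-component) coordinates
    # blind source separation of the reduced coordinates into modal coordinates
    ica = FastICA(n_components=n_modes, random_state=0)
    q = ica.fit_transform(eta.T).T                     # modal coordinate time histories
    shapes = U[:, :n_modes] @ ica.mixing_              # mode shapes mapped back to pixel space
    return shapes, q
```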
12.2.2 Orthographic Projection Versus Perspective Projection in Complex 3D Scenes
Both orthographic and perspective projections transform 3D scenes into 2D images. However, the 3D geometry of the region of space corresponding to a given pixel depends on the type of projection used. Perspective transforms use nonparallel lines of sight for projection, whereas orthographic projections use parallel lines: consequently, the 3D bounding volume corresponding to the region of space associated with a single pixel is a trapezoidal prism for single vanishing-point perspective projection and a rectangular cuboid for orthographic projection (Fig. 12.1). Descriptions of the mechanisms and mathematics for perspective projection in a conventional imager are given in Appendix X, and the techniques used for generating orthophotos are described in the following section. For perspective projection, the geometry of the bounding volume for a pixel depends on that pixel's distance to the center of the imaging plane. Because of the trapezoidal prism shape of this bounding volume, the 2D area of an intersecting surface that can be contained within the volume depends both on the distance and the angle between the surface and the imaging plane. By contrast, for orthographic projection, the geometry of the bounding volume for a pixel is uniform
Fig. 12.1 The volume boundary for a perspective pixel is a trapezoidal prism (left) and the volume boundary for an orthographic imager’s pixel is a rectangular cuboid (right). In reality, these boundaries extend infinitely far from the pixel origin (at which point atmospheric distortion and diffusion become significant), but for visualization, here the boundaries are shown with finite depths. For the orthographic imager pixel, the plane (regardless of the distance) always occupies the same fraction of the cross-section (here, the entirety) of the bounding volume, whereas for the perspective imager pixel the plane occupies a smaller fraction of the cross-section of the bounding volume the larger the distance from the plane to the pixel’s origin becomes
Fig. 12.2 3D scene featuring a plane
Fig. 12.3 Pixel boundaries from a perspective imager when projected on a plane in a 3D scene
across all pixels; moreover, the area of the intersecting surface contained within the volume depends only on the angle between the surface and the orthophoto plane. The non-uniformity of the spatial regions corresponding to each pixel impacts motion-estimations based on the change in measured brightness. For example, in a perspective projection, the maximum area on an angled 2D plane in 3D space corresponding to a single pixel is larger for regions further from the imager (Figs. 12.2 and 12.3). Consequently, pixels corresponding to edges on the object which lie closer to the imager are more sensitive to sub-pixel motion than those corresponding to far edges. For example, consider two pixels: the first has its right half occupied by a nearby plane and its left half empty, and the second has its right half occupied by a far-away plane and again its left half is empty. Suppose both planes are parallel to the imaging plane and that the area of the plane visible in the first pixel is 2 cm by 4 cm: were the plane to completely cover the first pixel, the area the pixel would correspond to on the plane would be 4 cm by 4 cm. Likewise, suppose the area of the plane visible in the second pixel is 4 cm by 8 cm: were this plane to completely cover the second pixel, the area the pixel would correspond to on the plane would be 8 cm by 8 cm. The area covered by the second pixel on the second plane is larger than the area covered by the first pixel on the first plane because of the perspective effect making the further plane project to a smaller region of the final 2D image. If the planes had a brightness of 1 and the background a brightness of 0, and if half of each pixel is occupied by plane and half by background, the average of all intensity values within either of the pixels would be 0.5.
However, were each of the planes to move the same amount (1 cm) to the right, their change in intensities would not be equal: specifically, the new intensity of the pixel viewing the close plane would be (1 cm × 4 cm) / (4 cm × 4 cm) = 0.25, whereas the new intensity of the pixel viewing the far plane would be (3 cm × 8 cm) / (8 cm × 8 cm) = 0.375. Were the planes each to physically oscillate horizontally at the same amplitude, the amplitude of the wave corresponding to the brightness time-series of the first pixel (imaging the closer plane) would be greater than that of the second pixel (imaging the farther plane).
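The numbers in this example follow directly from the fraction of each pixel's footprint that the plane covers; a two-line check:

```python
# Fraction of each pixel footprint covered by the plane after a 1 cm shift to the right
near = (2.0 - 1.0) / 4.0   # close plane: 2 cm of a 4 cm footprint -> 0.25 after the shift
far = (4.0 - 1.0) / 8.0    # far plane: 4 cm of an 8 cm footprint -> 0.375 after the shift
print(near, far)           # 0.25 0.375
```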
12.3 Analysis
12.3.1 Orthophoto Generation
For imaging large structures, computer-generated orthophotos are feasible where natural orthophotos (captured with an orthographic camera or telecentric lens) are not; orthographic cameras must have sensors equal in size to the object being imaged, and likewise telecentric lenses must be as large as the object being imaged. Producing such large lenses and sensors can be prohibitively expensive, weighty, and unwieldy, making them infeasible for practical mobile deployment. Another advantage of generated orthophotos over natural ones is that the virtual ortho-camera can be placed anywhere; even if in reality there is, for example, a building which would occlude the view or physically block a camera from being placed in that position, the virtual camera can be placed there and use the 3D data and perspective data to reconstruct what the view would look like if the building were not there. 3D data for generating the orthophoto can come from a variety of sources, including emerging time-of-flight depth imagers, lidar, or even manually specified 3D geometry. We now discuss the procedure for generating an orthorectified image, given 3D geometry data, a camera position, the camera matrix, the camera extrinsics, the orthophoto plane normal vector, and the pixel parameters for the orthophoto (e.g., number, location, and size of pixels). The following steps, illustrated in Fig. 12.4, are taken to compute the orthophoto pixel value for every desired orthophoto pixel.
Step 1: Cast a ray from the center of each pixel on the orthophoto imaging plane, with a direction normal to the plane, toward the surface. We want to know where on the surface the ray collides.
Step 2: Cast a ray from the point of collision on the surface toward the center of the camera. This ray will pass through the image plane. We want to know where on the image plane it collides.
Step 3: Use bilinear interpolation at the image-plane collision location to compute the pixel intensity value at the collision point, and set the orthophoto value for the current pixel equal to this value.
Fig. 12.4 Illustration of the process of orthophoto generation
Fig. 12.5 Perspective image of 3-block structure simulation
Fig. 12.6 Orthophoto of 3-block structure simulation
When these steps are completed for every pixel in a desired orthophoto, the result is an image with a consistent distance scale across the image. In addition, the concept of horizontal and vertical edges is now uniformly defined across the image plane. This uniform definition of horizontal and vertical can be illustrated with the example of the parallel bars used earlier. It must be noted that the conceptual steps for generating the orthographic photo are relatively straightforward; however, the first raycasting step can take significant effort to achieve. General calculation of a raycast collision in complicated virtual 3D scenes can require the implementation of acceleration data structures such as a bounding volume hierarchy [9, 10]. However, 3D rendering packages such as Blender [11] and Unity [12] have built-in functionality to perform raycasts on 3D geometry. Also, in the case of 3D geometry captured in the form of point clouds, the point cloud will most likely need to be converted to a form, such as a mesh, that facilitates raycasting and collision detection. Because generated orthophotos use a perspective photo for luminosity information, the density of information from the optical perspective transformation carries over to the orthophoto; specifically, regions which were further away in space have low-resolution representations in the perspective projection, meaning interpolation is needed when creating the orthophoto. Conversely, regions close in space in the perspective imager plane have higher resolution and get averaged down in the orthophoto.
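For the planar case considered in this work, the three steps above reduce to a ray-plane intersection followed by a pinhole reprojection and bilinear sampling. The sketch below assumes a world-to-camera convention x = K(RX + t), a user-supplied orthophoto plane (origin and axis vectors), and a single planar surface; it is not the implementation used by the authors.

```python
import numpy as np

def bilinear(img, u, v):
    """Bilinearly interpolate image intensity at sub-pixel location (u, v)."""
    u0, v0 = int(np.floor(u)), int(np.floor(v))
    if not (0 <= u0 < img.shape[1] - 1 and 0 <= v0 < img.shape[0] - 1):
        return np.nan                      # hit point reprojects outside the perspective image
    a, b = u - u0, v - v0
    return ((1 - a) * (1 - b) * img[v0, u0] + a * (1 - b) * img[v0, u0 + 1] +
            (1 - a) * b * img[v0 + 1, u0] + a * b * img[v0 + 1, u0 + 1])

def orthophoto_from_plane(img, K, R, t, ray_dir, plane_pt, plane_n,
                          origin, u_axis, v_axis, n_cols, n_rows, pix):
    """Step 1: cast a ray from each orthophoto pixel (normal to the orthophoto plane).
    Step 2: intersect it with the planar surface and reproject the hit point into the camera.
    Step 3: bilinearly sample the perspective image at the reprojected location."""
    ortho = np.full((n_rows, n_cols), np.nan)
    for r in range(n_rows):
        for c in range(n_cols):
            o = origin + (c + 0.5) * pix * u_axis + (r + 0.5) * pix * v_axis
            denom = ray_dir @ plane_n
            if abs(denom) < 1e-9:
                continue                   # ray parallel to the surface: no collision
            s = ((plane_pt - o) @ plane_n) / denom
            X = o + s * ray_dir            # collision point on the surface
            x = K @ (R @ X + t)            # pinhole projection into the perspective camera
            ortho[r, c] = bilinear(img, x[0] / x[2], x[1] / x[2])
    return ortho
```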
12.3.2 Orthorectification Results
To determine whether orthorectification through virtual orthophoto generation, as a preprocessing step for phase-based blind-source separation, improved the accuracy of the extracted mode shapes, the technique was tested on a simulation of a 3-block lumped-mass structure. A perspective photo of the 3-block lumped-mass structure is shown in Fig. 12.5, and the orthophoto of the 3-block lumped-mass structure is shown in Fig. 12.6. The mode-shape matrix used to define the dynamics of the simulated 3-block structure was compared against the extracted mode-shape vectors using the modal assurance criterion. The mode-shape matrix used for the simulation was a 3 × 3 matrix wherein the (X, Y)th element dictated the amplitude of the Yth block at the Xth modal frequency; the rows of this matrix are the mode-shape vectors. To extract the mode-shape vectors from the BSS-selected components (Figs. 12.7, 12.8, 12.9, and 12.10), the top (vertically highest) cluster of each pair of clusters associated with each block was averaged.
Fig. 12.7 The BSS-extracted mode shapes for the simulation using the perspective projection video
The three values (one for each block) from each of the three BSS-selected components were treated as 3 × 1 mode-shape vectors. The top cluster was used for each block because, for the third of the phase-based BSS-selected components of the orthorectified video, the leftmost block's bottom cluster was substantially different in amplitude from the top cluster and did not match the value in the mode-shape vector that the other two blocks' cluster values corresponded to; this may be a result of the generated orthophoto's edge behavior. Currently, when an orthophoto's raycast misses, the ortho-pixel at that location is assigned a flat blue color to indicate there is no clear luminosity data for that point: because the depth of the collision is unknown, the corresponding point in the perspective camera (or whether the corresponding point is even in view of the perspective camera) is also unknown. Using this method for extraction, the mode-shape vectors from the orthorectified video had an average MAC score of 0.964, whereas the perspective video's mode-shape vectors had an average of 0.946 (Fig. 12.11). For simple geometries like the 3-block structure tested, omitting the Fourier-domain filter and instead using the raw time series (as opposed to the phase series) can produce BSS components whose mode-shape vectors better match the ground-truth mode-shape vectors than their phase-based companions. However, this is not anticipated to hold for more complex geometries or scenes with a more pronounced perspective effect, insofar as the area covered by each pixel may vary substantially; moreover, single-pixel analysis can alias multi-pixel motions which would otherwise be appropriately captured by a phase-based approach.
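The modal assurance criterion used for these comparisons is the standard normalized inner product between mode-shape vectors; a small utility for reproducing such MAC matrices is sketched below.

```python
import numpy as np

def mac(phi_a: np.ndarray, phi_b: np.ndarray) -> float:
    """Modal assurance criterion between two mode-shape vectors (1.0 = identical shape)."""
    num = np.abs(np.vdot(phi_a, phi_b)) ** 2
    return float(num / (np.vdot(phi_a, phi_a).real * np.vdot(phi_b, phi_b).real))

def mac_matrix(true_shapes: np.ndarray, extracted_shapes: np.ndarray) -> np.ndarray:
    """MAC matrix between the columns of two mode-shape matrices (3 x 3 for this simulation)."""
    return np.array([[mac(true_shapes[:, i], extracted_shapes[:, j])
                      for j in range(extracted_shapes.shape[1])]
                     for i in range(true_shapes.shape[1])])
```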
12.4 Conclusion
The modal assurance criterion (MAC) scores between the true simulated mode shapes and the mode shapes extracted by the phase-based BSS technique from the generated orthographic video were higher (across all tested mode shapes) than those extracted using the virtual perspective video. This result indicates that orthorectification as a pre-processing step may result in more accurate mode-shape estimates. In addition, orthorectification provides a consistent length unit in the XY-plane, meaning that distortions of extracted in-plane mode shapes may be measured in terms of real, physical units (e.g., inches, centimeters). Previously, because non-rectified pixels cover different areas of the XY-plane, distances measured in pixels over the mode shape could not be readily translated into physical distances: the distance corresponding to each pixel could be different, contingent on the 3D geometry of the scene, which was previously uncaptured.
Fig. 12.8 The clusters of the BSS-extracted mode shapes for the simulation using the perspective projection. The legend in the top-left corner lists the mean value of each cluster
Fig. 12.9 The BSS-extracted mode shapes for the simulation using the generated orthographic video
Practical implementation of orthorectification for structures faces several challenges. Modern time-of-flight imagers such as the Microsoft Azure Kinect produced depth measurements that varied with the brightness of the object(s) being measured. For a checkerboard, this resulted in a non-planar, rippled surface; consequently, 3D geometry acquired from the Kinect for materials with value-varying textures, objects of different shades, and non-uniformly lit objects may not match the real geometry.
Fig. 12.10 The clusters of the BSS-extracted mode shapes for the simulation using the generated orthographic projection video. The legend in the top-left corner lists the mean value of each cluster
Fig. 12.11 The modal assurance criteria scores for the extracted mode-shapes from the phase-based BSS using generated orthographic (left) and perspective (right) views. That the right-hand side matrix is non-diagonal is not significant: the order of the BSS components is not assured, and in the diagram above did not match the order of the simulation’s mode-shape vectors
As a result, an object which lies in a 2D plane in 3D space may not be measured as such; even if the true geometry is known a priori, fitting that geometry to the measured point cloud is error-prone. For future work, we envision extending the data fusion techniques for 3D geometry data and 2D perspective imagery to arbitrarily complex 3D geometry and converting from point clouds to piecewise-linear meshes, which will require the most effort.
Likewise, verification and validation of renders of more complex, dynamic scenes and the incorporation of material property behaviors through spectral rendering may be necessary to accurately model real-world phenomena.
Acknowledgments This work was funded by the US Army Corps of Engineers. Los Alamos National Laboratory is operated by Triad National Security, LLC, for the National Nuclear Security Administration of the US Department of Energy (Contract No. 89233218CNA000001).
References 1. Torr, P., Zisserman, A.: Feature based methods for structure and motion estimation. In: International Workshop on Vision Algorithms, IWVA: Vision Algorithms: Theory and Practice (2000) 2. Irani, P.A.M.: All about direct methods. In: International Workshop on Vision Algorithms, IWVA: Vision Algorithms: Theory and Practice (2000) 3. Simoncelli, E., Freeman, W.: The Steerable Pyramid: A flexible architecture for multi-scale derivative computation. In: 2nd IEEE International Conference on Image Processing, Washington, DC (1995) 4. Yang, Y., Dorn, C., Mancini, T., Talken, Z., Kenyon, G., Farrar, C., Mascarenas, D.: Blind identification of full-field vibration modes from video measurements with phase-based video motion magnification. Mech. Syst. Signal Process. 85, 567–590 (2017) 5. Yang, Y., Dorn, C., Mancini, T., Talken, Z., Theiler, J., Kenyon, G., Farrar, C., Mascarenas, D.: Reference-free detection of minute, non-visible, damage using full-field, high-resolution mode shapes output-only identified from digital videos of structures. Struct. Health Monitor. 17(3), 514–531 (2018) 6. Stachniss, C.: Orthophotos. YouTube (6 October 2020). [Online]. Available: https://www.youtube.com/watch?v=xoOJeogQvUA 7. Pontius, J.: Geometric correction concepts. YouTube (27 Feb 2017). [Online]. Available: https://www.youtube.com/ watch?v=QWkNUBO6_14. Accessed 9 Sept 2021 8. Dasari, S., Dorn, C., Yang, Y., Larson, A., Mascareñas, D.: A framework for the identification of full-field structural dynamics using sequences of images in the presence of non-ideal operating conditions. J. Intell. Mater. Syst. Struct. 29(17) (2018) 9. Anderssohn, J., Wetzel, H.-U., Walter, T.R., Motagh, M., Djamour, Y., Kaufmann, H.: Land subsidence pattern controlled by old alpine basement faults in the Kashmar Valley, northeast Iran: results from InSAR and levelling. Geophys. J. Int. 174, 287–294 (2008) 10. Bouali, E.H.Y.: Analyzing the life-cycle of unstable slopes using applied remote sensing within an asset management framework. Michigan Technological University, Houghton (2018) 11. Blender Foundation: Blender [Online]. Available: https://www.blender.org/. Accessed 9 Sept 2021 12. Unity Technologies: Unity [Online]. Available: https://unity.com/. Accessed 9 Sept 2021
Chapter 13
Digital Twins for Photorealistic Event-Based Structural Dynamics Allison Davis, Edward Walker, Marcus Chen, Moises Felipe, David Mascareñas, Fernando Moreu, and Alessandro Cattaneo
Abstract Digital twins are virtual representations of real-world structures that can be used for modeling and simulation. Because of digital twins' ability to simulate complex structural behaviors, they also have potential for structural health monitoring (SHM) applications. Video-based SHM techniques are advantageous due to their lower installation/maintenance costs, high-spatial-resolution analysis, and non-contact monitoring. Both digital twins and video-based techniques hold particular interest in the fields of non-destructive evaluation, damage identification, and modal analysis. An effective use of these techniques for SHM applications still poses several challenges. Neural radiance fields (NeRFs) are an emerging and promising type of neural network that can render photorealistic novel views of a complex scene using a sparse data set of 2D images. Originally, NeRF was designed to capture static scenes, but recent work has extended its capability to capture dynamic scenes, which has implications for medium- and long-term SHM. However, to date, most NeRFs use frame-based images and videos as input data. Frame-based video monitoring approaches result in redundant information, because, for structural dynamics monitoring, only a small number of active pixels record the actual dynamic changes in the structure, resulting in intensive computational loads for data processing and storage. A promising alternative is event-based imaging, which only records pixel-wise changes in the illumination of a scene. Event-based imaging creates a sparse set of data while accurately capturing the dynamics. This work proposes a method to extract the dynamics of a structure using a generated digital twin. Using Unreal Engine 5, digital twins of rigid and non-rigid structures were generated. The digital twin model was then used along with an event-based camera simulator to generate event-based data. A frequency analysis framework was then developed to extract the modal information of the structure. Validation was performed on a structure of known dynamics using event-based cameras. Keywords Digital twins · Event Cameras · NeRFs
13.1 Introduction
Video-based structural health monitoring (SHM) is an appealing method for infrastructure monitoring, as it allows for non-contact monitoring. Current video-based methods rely on frame-based imagers. This class of imagers records redundant information at each pixel, leading to the accumulation of large datasets and computationally intense post-processing costs. Event-based imagers, thanks to their ability to adaptively acquire sparse events at high speed, drastically reduce the dataset's size. However, event-based data are asynchronous in nature. The present work proposes a framework to extract
A. Davis () Department of Electrical Engineering and Computer Science, College of Engineering and Applied Science, University of Wyoming, Laramie, WY, USA e-mail: [email protected] E. Walker Department of Mechanical Engineering, George R. Brown School of Engineering, Rice University, Houston, TX, USA M. Chen Department of Electrical and Computer Engineering, College of Engineering, University of Washington, Seattle, WA, USA M. Felipe · D. Mascareñas · A. Cattaneo Los Alamos National Laboratory, Los Alamos, NM, USA F. Moreu Department of Civil, Construction and Environmental Engineering, University of New Mexico, Albuquerque, NM, USA
modal information of both rigid and non-rigid structures from event-based data and use a digital twin to accurately represent the time-continuous dynamics of a vibrating structure and update it in real time.
13.2 Background
In recent years, there have been developments in using video-based techniques rather than contact methods to characterize the dynamics of a structure [1]. Video-based structural dynamics is defined by the use of video data to perform structural analysis, such as estimating modal frequencies, damping ratios, and full-field mode shapes [1]. Because of the high resolution and high speed of video-based data, video-based structural dynamics is notably advantageous compared to contact sensors, like accelerometers and strain gauges, and other contactless sensors, such as lasers. While convenient to use, frame-based cameras result in data-heavy recordings due to the redundant information captured between pixels. Event-based imagers, on the other hand, are a type of inexpensive and low-power camera that only records pixel-wise changes in the illumination of a scene, making them a low-storage, high-bandwidth, and high-dynamic-range option for recording videos for structural health monitoring (SHM) and process monitoring applications. Frame reconstruction has been one of the main cornerstones for the successful application of conventional frame-based data processing techniques to event-based data; however, it can reduce the effectiveness of several modern image processing methods. In that regard, coded-exposure techniques, such as those proposed by Gothard et al. [2, 3], have been demonstrated as promising to address the challenges in frame reconstruction. In particular, Gothard et al. demonstrated that coded exposure coupled with event-based imagers enables the creation of frames with control of shutter-speed effects, including stroboscopic effects, completely in post-processing and with no need for expensive or complex experimental setups. The development of digital models based on data captured with either contact-based sensors or imagers is at the core of a variety of SHM techniques widely documented in the literature. The recent advent of digital twins, which are a subset of digital models, is particularly suitable for updating the model with real-time data produced by sensors [4]. One of the important dynamic properties of a digital twin is its flexibility in being autonomous, non-autonomous, or partially autonomous, and its synchronization at either continuous or specified times [5, 6]. Neural radiance fields (NeRFs) hold promise for the generation of digital twins of structures. Mildenhall et al. introduced the original NeRF in 2020, which created a new way of rendering novel 3D views of a scene using deep neural networks [7]. The network transforms the positional and directional data in a sparse set of static, 2D images and, after training, outputs a model based on color and volumetric density that can generate novel views or create a mesh of the scene. This model is fully editable, making it very attractive for digital twin purposes, by allowing editing of the lighting of the scene, removing or adding objects, and changing their physical aspects. In the attempt to leverage NeRFs and event-based data for the dynamic characterization of structures, this work uses two freely available tools. The first one is Unreal Engine 5 [8] and the second one is an open event-camera simulator, known as ESIM, by Rebecq et al. [9]. Unreal Engine 5 is a freely available engine for creating computer graphics and has been used in video games and animation. Real-time raytracing in the Unreal Engine system allows for photorealistic imagery with live feedback and interaction.
Features of the current build of Unreal Engine, such as the illumination and reflection system Lumen and the virtualized geometry system Nanite, resolve some of the more complicated factors that come with raytracing. Because of these features, Unreal Engine has uses in the creation and maintenance of structural digital twins. In this work, frame-based videos generated with Unreal Engine are converted into event-based datasets using the event-based camera simulator ESIM.
13.3 Experiments & Simulations
To build the experimental systems in Unreal Engine 5, one of its core features was utilized: Blueprints. Blueprints allow multiple components in a system to be combined and allow pieces of the system to be automated through C++. Variables that are manipulated in the simulation can also be modified during a simulation run by selectively modifying their visibility. To film the digital twins, another feature of Unreal Engine 5 was used: the level sequencer. The level sequencer is a function of Unreal Engine 5 that can be used to create in-simulation cinematics with a multi-track editor and an adjustable frame-based camera simulation. Another major function of the sequencer is to capture the simulation at different frame rates, which results in different temporal resolutions. To focus on the specific system, point lights using Unreal Engine 5's Lumen system were placed in front of the system to illuminate it evenly. In this work, the virtual camera focal length for each
13 Digital Twins for Photorealistic Event-Based Structural Dynamics
109
simulation was set at 35 mm and the aperture at f/22. The sampling period was set at 0.001 s or 0.0002 s, and the resulting footage was later sampled in ESIM at a sampling frequency of 1000 Hz or 5000 Hz, respectively, over a period of 10 seconds for each experiment.
13.3.1 Aluminum Block
As a preliminary test of the simulation/analysis framework, a rigid aluminum block on a shaker was used to create a single-frequency vibration (Fig. 13.1a). The shaker was supplied 10 Vpp at 5 Hz, and a laser displacement sensor measured the oscillation of the block. The output of the sensor was fed into an oscilloscope, which was used to identify the oscillation frequency. Alongside this setup, the monochromatic CS165MU1-Zelux® 1.6 MP Monochrome CMOS camera and the Inivation DVXplorer event imager were attached to tripods and recorded the block. A Gazechimp 4-12 mm F1.6 1/2" Iris lens was used on both imagers. The event stream of the event imager was saved and input directly into the analysis model proposed in this chapter, while the monochromatic video was fed into ESIM first to generate an event stream from the video frames. To make a digital twin of the aluminum block, a model was made based on measurements of the physical block to match its vibrating properties (Fig. 13.1b). The physics module of Unreal Engine 5 was enabled along with a mass based on the measurements and the assumed density of aluminum. To simulate a shaker, an invisible actor was programmed in Blueprints to move sinusoidally based on the amplitude and frequency of the experiment and was attached to the aluminum block with a physics constraint. The simulation was then recorded using the level sequencer, focusing the camera at the center of the system.
13.3.2 Cantilever Beam
To capture a combination of modes, a cantilever beam was set up in the lab. The beam was a strip of aluminum with dimensions 0.3 cm × 3.8 cm × 52.6 cm and was clamped to the table using a C-clamp. An accelerometer was attached to the end of the beam, and an impact test was performed to identify the modal frequencies. Next, the beam was excited by an impact hammer, and the resulting vibration was recorded by both a high-speed camera and an event imager. Unlike in the aluminum block experiment, the high-speed Edgertronic SC2+ camera with a Nikon AF Nikkor 50 mm f/1.8D lens was used instead of the monochromatic camera. This was due to the expected high modal frequencies, which would be aliased by the low frame rate of the monochromatic camera. The high-speed footage was processed using ESIM to generate an event stream for frequency analysis. The setup for this experiment is depicted in Fig. 13.2a. One of the main limitations of Unreal Engine 5 is its limited capability for building flexible objects in-house; models of the structures analyzed in this paper were instead created in Blender [10] and then imported into Unreal Engine 5 as a Filmbox (FBX) file. To set up the cantilever beam in Unreal Engine 5, a flexible beam with multiple bones, or nodes, was first
Fig. 13.1 (a) The setup used to record the aluminum block oscillation. (b) The console view of the level sequencer, as seen in Unreal Engine 5
Fig. 13.2 The setup used to record the cantilever beam vibration
made in Blender, based on the measurements of the physical cantilever beam. The model was then split into multiple polygons to smooth the bending of the beam and imported into Unreal Engine 5 as a skeletal mesh (Fig. 13.2b). Physical measurements of the beam were taken into Python and used to determine the natural frequencies of the mode shapes and the amplitude displacement of each node at each mode shape according to Euler-Bernoulli theory for a cantilevered beam; see Eq. (13.1). Equation (13.1) gives the natural frequencies, where E is the modulus of elasticity, I is the moment of inertia about the horizontal axis, μ is the mass per unit length, and βn is obtained from the characteristic equation, Eq. (13.2), for a beam of length L. For the purposes of the simulation, only the natural frequencies of the first four mode shapes of the cantilever beam were calculated.

$\omega_n = \beta_n^2 \sqrt{\frac{EI}{\mu}}$  (13.1)

$\cos(\beta_n L)\cosh(\beta_n L) + 1 = 0$  (13.2)
Equation (13.3) was used to find the positions at each node, where x is the position of the node along the beam and A is a constant value to scale the amplitude levels for all position oscillations. In Unreal Engine 5, a C++ Blueprint was made to connect the bones of the flexible beam with oscillating invisible actors. The Blueprint iterated through the amplitude displacements at each position and the natural frequencies to assign all actors with their intended displacement based on the real value of Eq. (13.4).
$\hat{\omega}_n(x) = A\left[\left(\cosh(\beta_n x) - \cos(\beta_n x)\right) + \frac{\cosh(\beta_n L) + \cos(\beta_n L)}{\sinh(\beta_n L) + \sin(\beta_n L)}\left(\sin(\beta_n x) - \sinh(\beta_n x)\right)\right]$  (13.3)

$w(x, t) = \hat{\omega}(x)\, e^{i\omega t}$  (13.4)
The constant A was set as a value that could be adjusted in the main simulation to determine the amplitude of the mode-shape oscillations. As in the aluminum block and shaker experiment, a level sequencer was used to record the oscillation of the simulated cantilever beam (Fig. 13.3). The digital twin events are comparable to the events generated from the monochromatic camera footage and to those recorded directly by the event-based imager.
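The chapter's Python calculation of Eqs. (13.1)–(13.3) is not reproduced here; the sketch below is a minimal reimplementation under assumed beam properties (the values of E, I, μ, and L are placeholders chosen to be roughly consistent with the aluminum strip described above, not the authors' measured values).

import numpy as np
from scipy.optimize import brentq

E, I, mu, L = 69e9, 8.55e-11, 0.308, 0.526   # assumed aluminum-strip properties (placeholders)

def char_eq(beta):
    # Eq. (13.2): cos(beta*L)*cosh(beta*L) + 1 = 0
    return np.cos(beta * L) * np.cosh(beta * L) + 1.0

# Bracket the first four roots near the textbook values beta_n*L ~ 1.875, 4.694, 7.855, 10.996
betas = [brentq(char_eq, (g - 0.5) / L, (g + 0.5) / L) for g in (1.875, 4.694, 7.855, 10.996)]

omegas = [b**2 * np.sqrt(E * I / mu) for b in betas]   # Eq. (13.1), rad/s
freqs = [w / (2 * np.pi) for w in omegas]              # natural frequencies in Hz

def mode_shape(beta, x, A=1.0):
    # Eq. (13.3): amplitude displacement of a node at position x along the beam
    sigma = (np.cosh(beta * L) + np.cos(beta * L)) / (np.sinh(beta * L) + np.sin(beta * L))
    return A * ((np.cosh(beta * x) - np.cos(beta * x)) + sigma * (np.sin(beta * x) - np.sinh(beta * x)))

print([round(f, 1) for f in freqs])   # first four modal frequencies of the assumed beam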
Fig. 13.3 The simulated cantilever beam at the first four modes and their respective frequencies
13.4 Analysis
After videos of the vibrating structures were collected by the event-based and frame-based cameras, the following steps were used to extract modal information:
1. Video was rendered from the event stream of the event-based camera, and the frame-based camera video was converted into events using ESIM.
2. Frames were extracted using MATLAB and used to manually identify pixels of observational interest.
3. A range of frequencies was chosen to scan over, and a dictionary of atoms was generated according to those frequencies.
4. Each atom from the dictionary was cross-correlated with the event stream of the pixel.
5. The cross-correlation values at zero shift for each frequency were collected and plotted against the frequency range to find the frequencies of oscillation in the system.
These steps are explained further in the following sections.
13.4.1 Data Visualization
Event-Based Images
The output of the event-based camera is an Aedat4 file with the timestamps, pixel location in x and y, and polarity of the events provided as a stream of asynchronous data. To visualize these events, the data needed to be structured into frames. By setting the frame rate, a loop was used to iterate through the event data and answer the following key questions:
1. What frame did the event belong to based on the timestamp?
2. What pixel was the event located at?
3. What event was previously at that pixel?
4. Had the polarity changed for the pixel?
By identifying these points, the frames were generated with the respective events that occurred.
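As a concrete illustration of this frame-generation loop, the Python sketch below accumulates events into frames (the chapter's implementation is in MATLAB; the array layout of timestamps, coordinates, and polarities assumed here is illustrative).

import numpy as np

def events_to_frames(t, x, y, p, sensor_shape, frame_rate):
    # t: timestamps in seconds, x/y: pixel coordinates, p: polarities in {-1, +1}
    n_frames = int(np.ceil(t[-1] * frame_rate)) + 1
    frames = np.zeros((n_frames, *sensor_shape), dtype=np.int8)
    last = np.zeros(sensor_shape, dtype=np.int8)     # previous polarity seen at each pixel
    frame_idx = (t * frame_rate).astype(int)         # which frame each event belongs to
    for f, xi, yi, pi in zip(frame_idx, x, y, p):
        if pi != last[yi, xi]:                       # draw only if the polarity changed
            frames[f, yi, xi] = pi
        last[yi, xi] = pi
    return frames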
Frame-Based Imagers with ESIM
The frames of a frame-based imager were given to ESIM, where several parameters can be set. The frame rate, exposure time, and sensitivity were important in generating an accurate representation of the event-based images. ESIM returned the event-based information in a rosbag file, which was then imported into MATLAB to implement the same framework as for the event-based imagers for generating the frames and videos. Additionally, during this process, the rosbag file was used to restructure the event data to follow the same formatting and orientation as the data of the event-based camera. In doing this, the proposed frequency analysis framework could be applied to data collected from either the event-based camera or the frame-based camera.
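One possible way to pull ESIM's events out of the rosbag into timestamp/x/y/polarity arrays is sketched below. This is not the chapter's MATLAB workflow; the topic name and the dvs_msgs/EventArray message layout are assumptions based on typical ESIM/DVS setups.

import numpy as np
import rosbag  # ROS1 Python API (requires a ROS installation)

def load_esim_events(bag_path, topic='/cam0/events'):   # topic name is an assumption
    t, x, y, p = [], [], [], []
    with rosbag.Bag(bag_path) as bag:
        for _, msg, _ in bag.read_messages(topics=[topic]):   # assumed dvs_msgs/EventArray messages
            for ev in msg.events:
                t.append(ev.ts.to_sec())
                x.append(ev.x)
                y.append(ev.y)
                p.append(1 if ev.polarity else -1)
    return np.asarray(t), np.asarray(x), np.asarray(y), np.asarray(p)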
Selecting Pixels for Observation
Once the frames of the events had been generated, they were inspected in MATLAB, and individual pixels were selected for observation. This was advantageous as it ensured that the pixels being used fell along the portion of the structure that was most responsive to excitation. Additionally, it ensured that the algorithm used for frequency analysis was observing a pixel related to the structure, rather than a pixel responding to noise.
13.4.2 User-Input
After selecting pixels for observation, the events of the selected pixels were separated out from the total events. The timestamps and polarities of the events were placed into a new matrix, which was then used in the frequency analysis. If desired, this can be done for a rectangular selection of pixels, where pixels with little to no events are discarded. From here, a window of frequencies to observe was selected by setting the lower frequency, upper frequency, and increment. These frequencies were used to generate a dictionary of atoms. The atoms used here were based on the oversampled discrete Fourier transform (DFT) atom. Equation (13.5) was used for generating the waveforms, where n is a vector of indices pointing to a reference time vector, related to the timestamps of the events, k is a vector of indices pointing to a reference frequency vector, which is related to the frequencies of interest, N is the number of events for the pixel, T is an oversampling factor, and i is the imaginary unit.

$\psi_k(t) = \frac{1}{\sqrt{N}}\, e^{-\frac{2\pi i k n}{NT}}$  (13.5)
This atom was chosen due to the phase being implicitly included, and the ability to oversample the data using the oversampling factor. In all cases, the oversampling factor was set to 1. To take into account the fact that event-based cameras generate asynchronous streams of events, a time vector was generated, where the vector is from 0 to the last timestamp of the pixel, with the timestep set to 1 microsecond, which is the temporal resolution of the event-based camera. From here, n was generated by creating a vector of integer values, from 0 to the number of elements in the time vector, N. The frequencies to test were generated by determining the frequency resolution, $\Delta f = 1/t_{\mathrm{end}}$, with $t_{\mathrm{end}}$ being the last timestamp in the event vector for the pixel.
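A compact sketch of the dictionary generation just described, following Eq. (13.5), is given below (illustrative only; variable names are not from the chapter, and for long records the atoms would in practice be generated one frequency at a time to limit memory use).

import numpy as np

def build_atom_dictionary(t_end, freqs, dt=1e-6, T=1):
    # Oversampled DFT atoms, Eq. (13.5): one complex waveform per frequency of interest.
    # dt is the 1-microsecond temporal resolution of the event camera; T is the oversampling factor.
    t_ref = np.arange(0.0, t_end, dt)          # reference time vector
    n = np.arange(t_ref.size)                  # indices into the reference time vector
    N = t_ref.size
    f_res = 1.0 / t_end                        # frequency resolution
    k = np.round(np.asarray(freqs) / f_res)    # indices into the reference frequency vector
    atoms = np.exp(-2j * np.pi * np.outer(k, n) / (N * T)) / np.sqrt(N)
    return t_ref, atoms                        # atoms has shape (n_freqs, N)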
13.4.3 Frequency Analysis
Once the atoms at each of the frequencies of interest were generated, they were each cross-correlated with the polarities of the events. The values of the cross-correlation at zero shift were then plotted on the same frequency axis, and peak detection was used to identify the frequencies of the object. By performing cross-correlation, the waveform with a frequency closest to the frequency of the moving object was detected. When multiple frequencies were present in the object, multiple peaks were observed. A peak-detection algorithm was implemented to determine which frequencies corresponded to dominant peaks.
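A sketch of the zero-shift correlation scan and peak picking described above follows; it is an illustration under the same assumed data layout as the earlier sketches, with scipy's find_peaks standing in for the chapter's unspecified peak-detection step.

import numpy as np
from scipy.signal import find_peaks

def frequency_scan(event_t, event_p, t_ref, atoms, freqs):
    # Correlate the event polarities of one pixel against each atom at zero shift
    signal = np.zeros(t_ref.size)
    idx = np.clip(np.searchsorted(t_ref, event_t), 0, t_ref.size - 1)
    signal[idx] = event_p                      # sparse polarities placed on the dense time grid
    scores = np.abs(atoms @ signal)            # zero-shift cross-correlation = inner product per atom
    peaks, _ = find_peaks(scores)
    dominant = np.asarray(freqs)[peaks[np.argsort(scores[peaks])[::-1]]]
    return scores, dominant                    # correlation curve and peak frequencies (strongest first)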
Fig. 13.4 Frequency analysis results for aluminum block oscillating at 5 Hz. (a) Event-based camera's pixel (338, 126). (b) Monochromatic camera's pixel (719, 448). For results shown in (a) and (b), the aluminum block was set on a shaker. (c) Pixel (320, 160) for data generated with the Unreal Engine 5's digital twin converted to event data with ESIM

Table 13.1 Actual frequencies applied to digital twin of cantilever beam vs. extracted frequencies from ESIM data of digital twin

                             Mode 1 (Hz)   Mode 2 (Hz)   Mode 3 (Hz)   Mode 4 (Hz)
Unreal Engine frequencies    9.43          59.08         165.43        324.18
Extracted ESIM frequencies   11.27         59.1          166.1         323.8
13.4.4 Results of Experiments
Aluminum Block
The frequency analysis framework was applied to the aluminum block experiment described above. When the framework was applied to the data collected from the event-based imager, the detected oscillation frequency was 5 Hz, as shown in Fig. 13.4a. The frequency detected in the data collected by the monochromatic camera, then converted to event data by ESIM, was 5 Hz as well (Fig. 13.4b). The event data from the simulated aluminum block also resulted in 5 Hz when processed with the frequency analysis algorithm (Fig. 13.4c). These results confirmed that the frequency analysis framework developed could detect the frequency of oscillation of both a physical and a simulated rigid structure. This also demonstrated that the digital twin of the rigid body was an accurate representation of the actual rigid structure.
Cantilever Beam
Following the experimentation on the aluminum block, the frequency analysis framework was applied to the event data of an excited cantilever beam digital twin. The results of the algorithm are shown in Table 13.1, where the first row shows the calculated values of the cantilever beam, which were applied to the digital twin model, and the second row shows the frequencies extracted by the algorithm.
13.5 Conclusion
Overall, the simulation and analysis framework was successful in extracting frequency information from event-based data. Unreal Engine 5 enabled the creation of digital twins. The proposed framework correctly identified the frequency of a rigid oscillation and the first four modal frequencies of a vibrating cantilever beam. There was minimal difference between the results for the actual event imager and the videos processed with ESIM, and there was minimal difference between the videos of actual structures and those of Unreal Engine 5 digital twins. This work shows promise for developing event-based structural dynamics techniques. Attempts to use Dynamic NeRFs to create digital twins were unsuccessful, with a multitude of issues arising. The network had issues decreasing the error and raising the peak signal-to-noise ratio (PSNR), with some training attempts experiencing infinite error after a period. The training sessions where the error did not blow up were unable
to obtain PSNRs greater than ~19 (with ~30 being sought). As the different variations of NeRF continue to evolve and Dynamic NeRFs are improved upon, more work should be done to explore their applications in creating digital twins. Acknowledgments This research was funded by Los Alamos National Laboratory (LANL) through the Engineering Institute’s Los Alamos Dynamics Summer School. The Engineering Institute is a research and education collaboration between LANL and the University of California San Diego’s Jacobs School of Engineering. This collaboration seeks to promote multidisciplinary engineering research that develops and integrates advanced predictive modeling, novel sensing systems, and new developments in information technology to address LANL mission-relevant problems. The authors would also like to acknowledge Kyle Hatler, from Montana State University, for his services as lab manager and Tariq Abdul-Quddoos, from Prairie View A&M University, for providing additional help with neural network programming.
References 1. Yang, Y., Farrar, C., Mascarenas, D.: Full-field structural dynamics by video motion manipulations, pp. 223–226. International Digital Imaging Correlation Society (2017) 2. Gothard, A., Henry, C., Jones, D., Cattaneo, A., Mascarenas, D.: Digital Stroboscopy using event-driven imagery. Data Sci. Eng. 9, 129–133 (2021) 3. Gothard, A., Jones, D., Green, A., Torrez, M., Cattaneo, A., Mascarenas, D.: Digital coded exposure formation of frames from event-based imagery. Neuromorph. Comput. Eng. (2022) 4. Bado, M.F., Tonelli, D., Poli, F., Zonta, D., Casas, J.R.: Digital twin for civil engineering systems: an exploratory review for distributed sensing updating, Sensors (2022) 5. Sharma, A., Kosasih, E., Zhang, J., Brintrup, A., Calinescu, A.: Digital twins: state of the art theory and practice, challenges, and open research questions, arXiv (2020) 6. Ye, C., Butler, L., Calka, B., Ianguarazov, M., Lu, Q., Gregory, A., Girolami, M., Middleton, C.: A digital twin of bridges for structural health monitoring. In: International Workshop on Structural Health Monitoring (2019) 7. Mildenhall, B., Srinivasan, P.P., Tancik, M., Barron, J.T., Ramamoorthi, R., Ng, R.R.: NeRF: representing scenes as neural radiance fields for view synthesis, arXiv, p. 25 (2020) 8. Unreal Engine 5 [Online]. Available: https://www.unrealengine.com/en-US/unreal-engine-5 9. Rebecq, H., Gehrig, D., Scaramuzza, D.: ESIM: an open Event Camera Simulator. In: Proceedings of the 2nd Conference on Robot Learning (2018) 10. Blender [Online]. Available: https://www.blender.org/
Chapter 14
Multi-path Vibrometer-Based Strain Measurement Technique for Very High Cycle Fatigue (VHCF) Testing Kilian Shambaugh, Arend von der Lieth, Joerg Sauer, and Vikrant Palan
Abstract This feasibility study evaluates the use of a laser Doppler vibrometer to measure strain. Specifically, it uses multi-path laser Doppler vibrometry (MDV) to measure the high-frequency strain required for high-cycle fatigue (HCF) and very high cycle fatigue (VHCF) testing on various engineered materials. As an indicator of higher durability, there is an emerging need to test at a higher number of cycles while keeping the measurement time to a minimum. One way to meet these diametrically opposed requirements is to perform the testing at higher frequencies. Conventional sensors have limitations in terms of the highest frequency and maximum temperature of operation, which are addressed with the proposed method. In this research, we provide a detailed account of the motivation and the procedure to go from vibrational velocity to dynamic strain using MDV. A comparison is shown between strain measured with MDV, a 3-dimensional scanning LDV, and a strain gage. Results show a good agreement among these techniques. Keywords VHCF · Multi-path laser Doppler vibrometer · Fatigue · Durability · Strain
14.1 Introduction
The strength and durability of a material can be evaluated using many methods, one of which involves subjecting it to a high-cycle fatigue test, or HCF. A fatigue test consists of cyclic loading of a test sample, most often carried out on a servo-hydraulic test machine. The number of cycles a component can withstand before breaking or fracturing is an indication of its strength. Standard tests to determine the strength have been laid out by ASTM E466-21 [1] (force controlled) and ASTM E606/E606M-21 [2] (strain controlled) but are mainly defined for low-cycle fatigue. Traditionally an HCF (10^7 cycles) test has been sufficient; however, the introduction of new materials and alloys and the ever-increasing emphasis on safety and reliability have brought to the forefront the very high-cycle fatigue test (10^9 cycles), or VHCF. Repeated compressive and tensile stress has a deteriorating effect on the fatigue strength, which decreases with the number of cycles for homogeneous materials. Strain is one of the parameters monitored during the test to ensure consistent loading conditions and to serve as a performance metric. Figure 14.1 shows approximate testing times for a certain number of cycles. To keep the testing time to an acceptable level, VHCF typically uses much higher frequencies than what traditional gages are capable of handling over the duration of the test. The graph also shows a substantial reduction in testing time with increasing excitation frequencies. The saving in testing time is the main motivation behind considering higher excitation frequencies. As noted earlier, higher frequencies limit the capabilities of existing sensors. Also, it becomes a bigger challenge to excite the sample at higher frequencies while still dealing with the resulting high temperatures. Consequently, this creates a need for an alternative strain measurement technique. The work presented here uses the well-established laser Doppler vibrometer (LDV) as the basis for a strain measurement. A vibrometer is agnostic to surface characteristics, specimen size, operation frequencies, and surface temperature. Previous work [3] has demonstrated the use of a scanning LDV to measure strain on a turbine blade. As opposed to a sequential scanning scheme, the proposed method in the present work measures the required vibration data synchronously and calculates strain in real time. Another benefit of the new setup is certainly the multipath vibrometer, or MDV. The optical design of the MDV [4] makes it almost dropout-free and provides the low noise floor
K. Shambaugh () · A. von der Lieth · V. Palan Polytec, Inc., Irvine, CA, USA e-mail: [email protected] J. Sauer Polytec GmbH, Waldbronn, Germany
Fig. 14.1 Testing time estimation for VHCF (10^9 cycles)
that is required for the small displacements being analyzed for strain measurement. The measurement setup, using four multipath LDVs along with image processing techniques for alignment and calibration, is described. The LDV-based strain tester is compared to a traditional contact-based technique, as well as to post-processed results obtained with a 3-dimensional scanning vibrometer system [3].
14.2 Principle of Laser-Based Strain Measurement
The strain measurement system consists of four single-point LDVs. Specifically, a QTec-based (or MDV-based) vibrometer is used to achieve a dropout-free dataset, especially considering small displacements and steep angles [5]. The LDVs are mounted and aimed in a way so that their respective datasets can be used together to calculate the 1-dimensional dynamic strain between two points on an object's surface. Pairs of two LDVs are used together to allow for two separate but simultaneous measurements of dynamic in-plane displacement at two locations on the DUT's surface. Considering that the measurement is done with the displacement unit vectors being colinear, a simple subtraction between the two and then dividing by the distance between the points yields strain, as shown in Fig. 14.2. Each of the four LDVs continuously outputs an analog voltage signal proportional to displacement, with a sample rate in the MHz range. This LDV reading represents the component of motion of the surface of the DUT, at the laser's incident location on the surface, parallel to the direction of the laser path. Superimposing two lasers together at one measurement location with different but known incident angles, however, allows one to transform those two datasets into a desired orthogonal set of x, y displacement components. The x and y axes, in this case, will lie in the plane created by the two laser directions and their point of intersection on the surface of the object. The orientation of this set of axes in this plane is user-defined based on the relative angles assigned to the two lasers. In the case of dogbone strain samples, which are most commonly used for fatigue testing, axial strain (along the length) is of greatest interest. To measure this strain, the two measurement points (two overlapping lasers at each point) are lined up such that the vector between them is parallel to the axial direction with a desired separation distance. As the sample is excited, the raw measured displacement data is transformed into two in-plane displacement measurements. The displacement measurements are consequently used to calculate axial strain based on the known beam separation. Both the incident angles and the beam separation are determined by image processing techniques. The in-plane displacement components for each of the two measurement locations, A and B, as shown in Fig. 14.2, can be expressed as:

$D_{ay} = \frac{D_{a1}\cos(\theta_{a2}) - D_{a2}\cos(\theta_{a1})}{\sin(\theta_{a1} - \theta_{a2})}$  (1)
Fig. 14.2 Principle of strain measurement
$D_{by} = \frac{D_{b1}\cos(\theta_{b2}) - D_{b2}\cos(\theta_{b1})}{\sin(\theta_{b1} - \theta_{b2})}$  (2)
where, for point A, θa1 and θa2 are the angles between Sa1 and the x-axis and between Sa2 and the x-axis, respectively, and Da1 and Da2 are the measured displacement datasets of the respective sensors with unit vectors in the directions of Sa1 and Sa2, respectively. The second equation, for point B, is identical to the first, substituting b-subscripts for a-subscripts. Therefore,

$\mathrm{Strain} = \frac{D_{ay} - D_{by}}{L} = \frac{\Delta l}{L}$  (3)
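Applied sample-by-sample to the four displacement channels, Eqs. (1)–(3) reduce to a few lines of code. The Python sketch below is an illustration only; the angles and beam separation shown are placeholders, not calibrated values from the setup.

import numpy as np

def in_plane_displacement(d1, d2, theta1, theta2):
    # Eqs. (1)-(2): in-plane (y) displacement at one point from two superimposed LDV channels
    return (d1 * np.cos(theta2) - d2 * np.cos(theta1)) / np.sin(theta1 - theta2)

def axial_strain(da1, da2, db1, db2, angles, L):
    # Eq. (3): dynamic axial strain between points A and B separated by distance L
    ta1, ta2, tb1, tb2 = angles
    d_ay = in_plane_displacement(da1, da2, ta1, ta2)
    d_by = in_plane_displacement(db1, db2, tb1, tb2)
    return (d_ay - d_by) / L

# Placeholder geometry: angles in radians, 10 mm beam separation;
# da1, da2, db1, db2 would be the four time-synchronous displacement signals from the DAQ
angles = np.deg2rad([70.0, 110.0, 70.0, 110.0])
L = 0.010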
14.3 Experimental Setup
In practice, the four raw analog displacement signals are high-pass filtered to remove low-frequency noise and recorded via a computer-connected data acquisition system running through a Python script. The relevant angles and point separations are known and used in the script to calculate, display, and store the measured strain in real time. Figure 14.3 shows the 4 single-point QTec sensors (VibroFlex QTec, Polytec GmbH) and the device under test, which in this case is a turbine blade. A strain gage is mounted on the blade to facilitate a comparison between the two techniques. Additionally, the strain measurement was also done using a 3-dimensional scanning vibrometer (PSV QTec 3D, Polytec GmbH). Note that, as opposed to the sequential scanning scheme used by the scanning LDV, the proposed strain test measures the desired raw data simultaneously, which is advantageous and required for a true high-cycle fatigue test. Nonetheless, the purpose of using the scanning LDV was comparison and a sanity check. Figure 14.3 shows the setup used with the single-point vibrometers and fine-positioning fixture. Figure 14.4 shows the turbine blade (device under test) mounted on an electrodynamic shaker for excitation. This figure also shows the strain gage and the two measurement locations. It should be noted that each measurement "point" consists of two LDV measurements superimposed on each other. As mentioned above, one of the most critical aspects and prerequisites of this strain testing method is accurate measurement of the incident angles and the laser point separation distance. While there are many approaches to make this measurement, the following discussion will focus on the methodology used for the current study. To measure the four lasers' incident angles, we use a pre-calibrated stereoscopic camera system. The two cameras sit side by side, facing from the LDV location toward the DUT. The four lasers are turned on and off one by one and pairs of images are taken. Images are subtracted and machine vision techniques are used to determine the given laser's pixel location. Using the stereo camera calibration matrix, the pair of pixel coordinates from the two cameras is transformed into a 3D coordinate in the camera's arbitrary 3D coordinate system. First, the coordinates of the two measurement points on the sample are found. The next step is
Fig. 14.3 Experimental setup showing 4 single-point vibrometers mounted to form two displacement measurement points Fig. 14.4 Device under test with a strain-gage installed. Electrodynamic shaker is used for excitation. Two green dots comprise 2 pairs of LDV measurement locations
to retrieve the four laser unit vectors, which is done by calibrating the point location on a distance-calibrated plate placed out of plane from the measurement sample. Each sensor vector is calibrated individually using the camera system and image processing, subsequently providing the four laser angles. As a final step, a third camera is used to measure the pixel-to-pixel distance on the measurement object and hence the precise beam separation distance.
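One possible implementation of the laser-spot localization and triangulation steps is sketched below. The chapter does not specify the software used; the sketch assumes OpenCV and a pre-computed pair of 3×4 camera projection matrices, so it should be read as an illustration rather than the authors' procedure.

import cv2
import numpy as np

def laser_spot_pixel(img_on, img_off):
    # Locate a laser spot by differencing images taken with the laser on and off
    diff = cv2.absdiff(img_on, img_off)
    diff = cv2.GaussianBlur(diff, (5, 5), 0)
    _, _, _, max_loc = cv2.minMaxLoc(diff)       # brightest pixel of the difference image
    return np.array(max_loc, dtype=float)        # (u, v) pixel coordinates

def triangulate(P1, P2, uv1, uv2):
    # Recover the 3D spot location from the two camera projection matrices (3x4)
    X_h = cv2.triangulatePoints(P1, P2, uv1.reshape(2, 1), uv2.reshape(2, 1))
    return (X_h[:3] / X_h[3]).ravel()            # Euclidean coordinates in the stereo frame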
Fig. 14.5 Raw displacement data from 4 QTec sensors. These traces are processed to produce the strain time history pictured below in Fig. 14.6. TL, TR, BL, and BR refer to the location of the sensors in Fig. 14.3 – Top Left, Top Right, Bottom Left, Bottom Right
Fig. 14.6 Strain comparison between multi-path QTec LDVs and strain-gage. The respective RMS values of the two traces are 7.97 micro epsilon for the vibrometer-measured strain (blue) and 7.52 micro epsilon for the strain gage (orange), with a difference between the two results of 5.8%
14.4 Results and Discussion
The results above show a comparison between the strain gage and the proposed new approach based on 4 single-point QTec (multi-path Doppler vibrometer) LDVs. As mentioned before, a previous study [3] has already compared and demonstrated good corroboration between the standard SLDV and traditional strain gages. We were able to obtain and use the same sample that had been used in this previous study. In the previous study, the sixth bending mode was measured. In our case, we excited the sample at its third bending mode, 3092 Hz, as this produced a strain map with lower strain gradients, making it easier to directly compare values between our setup and the strain gage. Figure 14.5 shows the raw displacement data, which is subsequently used to calculate strain, shown in Fig. 14.6, based on parameters such as angles of incidence and point spacing. The difference between the QTec method and strain-gage data for this particular mode in the given configuration is 5.8%. The accuracy of the RMS strain value obtained from the LDV measurement depends on precise overlap of the two measurement beams at each of the two measured locations. In the current study, the potential deviation between the pilot and measurement beams is not accounted for. Also, it should be noted that in a more realistic scenario with a dogbone sample being excited predominantly in the in-plane direction, the raw displacement amplitudes expected would be much higher, which would in turn increase the strain values and hence the correlation. In Fig. 14.6, the strain-gage data appears noisy, which is also related to the small strain values.
14.5 Conclusion
In this exploratory study, we demonstrate the use of multi-path vibrometers to measure dynamic strain in real time and without contact. The proposed setup is agnostic to surface effects and sample temperatures. The strain values acquired optically showed good agreement with the strain gage and with the full-field strain data post-processed from 3D-SLDV measurements. Furthermore, even though the data shown is at 3092 Hz, the system is capable of handling up to 25 MHz, which is orders of magnitude higher than what is possible with contact sensors. This capability provides promise for further reducing the testing time associated with a VHCF test. There are two main challenges for a VHCF test at high frequencies: (i) a sensor that can make the strain measurement and (ii) a fatigue tester that can excite the dogbone sample at these high frequencies. The proposed study provides a path to solve (i), while further investigation is required to address (ii). The following are some of the avenues being considered for future work.
• Error budget calculation based on sources such as beam positioning accuracy, pilot/measurement beam deviation, and LDV noise floor. Advanced computer-vision techniques will be implemented to study the pilot/measurement beam deviation effects. We speculate that better overlap will not only increase the accuracy of the proposed method but also make it more robust.
• Repeat the test with a dogbone sample mounted on a fatigue tester. This will not only be a more realistic case, but also one resulting in higher displacement/strain values. This should further improve the accuracy and reliability of the proposed method.
• Long-duration test to evaluate its effect on memory and storage requirements.
• Noise floor and resolution comparison between single-path and multi-path vibrometry.
• Strain comparison at frequencies of interest for a VHCF test, e.g., 20 kHz.
References 1. ASTM: Standard Practice for Conducting Force Controlled Constant Amplitude Axial Fatigue Tests of Metallic Materials (02 07 2021). [Online]. Available: https://www.astm.org/e0466-21.html 2. ASTM: Standard Test Method for Strain-Controlled Fatigue Testing, 02 07 2021. [Online]. Available: https://www.astm.org/e0606_e0606m21.html 3. Schussler, M.M., Mitrofanova, M., Retze, U.: Measurement of 2D dynamic stress distributions with a 3D-Scanning Laser Doppler Vibrometer. In: Conference Proceedings of the Society for Experimental Mechanics (2011) 4. Polytec Application Note - VIB-G-030. Characterization of the robustness of Laser Vibrometers in respect to speckle noise (2021) 5. Eichenberger, J., Sauer, J.: Introduction to Multipath Doppler Vibrometry (MDV) for validating complex models accurately and without contact. In: Conference Proceedings of the Society for Experimental Mechanics Series (2023)
Chapter 15
Measurement of Airborne Ultrasound Using Laser Doppler Vibrometry Zihuan Liu, Xiaoyu Niu, Yuqi Meng, Ehsan Vatankhah, Donghwan Kim, and Neal A. Hall
Abstract The speed of light in air is dependent on the air’s instantaneous density. Since air density is modulated by sound, sound in the air can be observed and measured using optical methods. One such optical method is laser Doppler vibrometry (LDV). In this method, the time derivative of a laser beam’s optical path is measured. The optical path length is the product of the physical path length traversed by a laser beam and the refractive index of the medium. Most commonly, such instruments measure the time rate-of-change of the physical path (i.e., the mechanical velocity of a surface), and index changes of the medium are small in comparison. In contrast, by placing a rigid reflector beneath a sound beam in air, it is possible to measure the time rate-of-change of refractive index and to therefore measure dynamic changes in air density, or sound. Because modern LDVs enable rapid and convenient scanning, entire sound fields and sound beams can be observed. In prior demonstrations by other teams, this method has been used to visualize sound fields in the audible frequency range and ultrasound range underwater. In this work, we present the first measurements of high-intensity airborne ultrasound beams in the frequency range spanning 60–500 kHz. We present a sound field generated by a piezoelectric MEMS transducer at 228 kHz. We also observe interesting phenomena predicted from nonlinear acoustics including accumulated distortion, wave steepening, and weak shock formation as a sound beam propagates. LDV measurements are compared against more conventional measurement methods using pressure transducers. Keywords Nonlinear · Acoustics · PMUT · Laser Doppler
15.1 Introduction
Since the speed of light in air depends on the air's instantaneous density, changes in air density can be measured using optical methods. The laser Doppler vibrometer (LDV) is one such method. Fundamentally, LDVs measure the time derivative of the optical path length traversed by a laser beam. The optical path length is defined as n · l, where n is the refractive index of the medium (most commonly air) and l is the physical path length traversed by the laser light. In most scenarios n is assumed constant and optical path length changes arise from changes in l, resulting from the mechanical vibration of a surface. Commercial LDVs are calibrated for this scenario. Alternative test scenarios may be configured, however, where l is held constant by reflecting the laser light off a rigid mirror, while n is modulated by dynamic changes in air density, or sound. Owing to the following equivalency,

$n \cdot \Delta l = \Delta n \cdot l,$  (15.1)
the product Δn · l can be quantitatively measured using the LDV's calibrated reading for Δl. In the product Δn · l, l is the length over which the laser beam interacts with the sound field. This technique was first described and demonstrated in references [1, 2]. We assume the following relationship between air density and refractive index
Z. Liu () · X. Niu · Y. Meng · E. Vatankhah · D. Kim · N. A. Hall Chandra Department of Electrical and Computer Engineering, Cockrell School of Engineering, The University of Texas at Austin, Austin, TX, USA e-mail: [email protected]
$\frac{\rho - \rho_0}{\rho} = -\frac{\Delta n}{n - 1}$  (15.2)
Density can in turn be related to pressure using the adiabatic gas law

$\frac{P}{p_0} = \left(\frac{\rho}{\rho_0}\right)^{\gamma} = 1 + \gamma\,\frac{\rho - \rho_0}{\rho_0} + \frac{\gamma(\gamma - 1)}{2!}\left(\frac{\rho - \rho_0}{\rho_0}\right)^{2} + \cdots$  (15.3)
where γ is the specific heat ratio and p0 and ρ0 are the static values of P and ρ; γ = 1.4 for air. Using the first-order approximation of the adiabatic gas law, expressions (15.1)–(15.3) can be combined to express the sound pressure P as

$P = \frac{n\, p_0\, \gamma}{n - 1}\,\frac{\Delta l}{l}$  (15.4)
where again Δl is not a physical displacement but rather the reading from the LDV, and l is the span of air modulated by the sound wave. Figure 15.1 illustrates the measurement setup. The LDV system we use is a Polytec PSV-500 Scanning Vibrometer. The transducer is placed at the edge of the reflector, which is a four-inch polished silicon wafer. The laser beam of the LDV passes through the sound field and is reflected by a rigid reflector beneath the sound field. In a first demonstration, the transducer is a piezoelectric micromachined ultrasound transducer (PMUT) comprising a 7×4 array of diaphragms, each with 380 μm diameter. The overall chip dimensions of this PMUT are 3495 μm × 1595 μm. Cross-section schematics and other fabrication details are identical to devices summarized in reference [3], with the exception that PZT is the active material in this transducer rather than AlN. The PMUT is actuated with a 228.55 kHz tone burst signal with a 20% duty cycle. Prior to measuring the sound field generated by this transducer, we directly measure the vibration of the transducer's surface. Figure 15.2 presents a scan of the PMUT's surface velocity in response to the excitation, while
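To make Eq. (15.4) concrete, the sketch below converts an LDV "velocity" reading at a single tone into a path-averaged pressure estimate. The division by 2πf to obtain Δl from the velocity reading, and the assumed values of n and p0, are illustrative assumptions and are not taken from the chapter.

import numpy as np

def pressure_from_ldv(v_reading, f, path_length, n=1.00028, p0=101325.0, gamma=1.4):
    # Path-averaged sound pressure from an LDV reading at frequency f, per Eq. (15.4)
    # v_reading: apparent velocity reported by the LDV (m/s) for a single tone
    # path_length: span of air the laser traverses through the sound field (m)
    delta_l = v_reading / (2 * np.pi * f)                  # apparent displacement amplitude
    return (n * p0 * gamma / (n - 1.0)) * delta_l / path_length

# Example roughly matching the on-axis case discussed later in the chapter
# (~300 um/s at 228 kHz over a ~0.71 cm interaction length): prints a value on the
# order of 10 Pa (amplitude); the RMS value is about 1/sqrt(2) of this.
print(pressure_from_ldv(300e-6, 228e3, 0.0071))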
Fig. 15.1 Schematic of the LDV measurements
Fig. 15.2 Surface scan of the PMUT
Fig. 15.3 Surface velocity at the center point of a PMUT diaphragm (location index 5658 in Fig. 15.2) in response to the tone burst signal Vin
Fig. 15.3 presents the velocity of a diaphragm’s center vs. time. The maximum peak-to-peak surface velocity is 1.29 m/s. Fig. 15.3 also presents the excitation tone burst signal used in the experiment as Vin . The arrangement described in Fig. 15.1 is then used to scan the sound field. The scan consists of 759 × 37 measurement points. For each point, the transducer is excited with the tone burst waveform and the LDV captures time-domain data with a sampling rate equal to 12.5 MHz and a total window equal to 800 μs. A total of 7 time-averages are taken at each measurement point. Post-processing of results enables visualization of the sound field in both time and space. As an example, Fig. 15.4 presents a color map of the scan at a particular instance in time. Figure 15.5 presents the captured time-domain waveform at two points in the sound field – in immediate proximity of the transducer surface and at a distance 1.82 cm from the transducer surface. Figure 15.6 presents the on-axis profile of the sound field. Several features of these plots are interesting. Comparing the surface velocity in the top row of Fig. 15.3 with the input drive waveform in the bottom row of Fig. 15.3, it is observed that the transducer has characteristics of a second-order system, requiring a ramp-up to steady state in response to the tone burst followed by a subsequent ring down after the tone burst stimulus is removed. From Fig. 15.6 we observe that the acoustic wavelength is 1.5 mm, as expected for sound in air at 228 kHz. Also from Fig. 15.6, the sound pressure is observed to decay in proportion to 1/r, as expected for radiation in the far field. Quantitative examination of the field in Fig. 15.4 shows a full width half maximum (FWHM) beam width equal to 39 degrees. The vertical axis units in Figs. 15.5 and 15.6 are velocity. These are not physical velocities. They can be converted to pressure using Eq. (15.4) if assumptions are made regarding the effective laser path length through the beam l. At an on-axis distance of r = 1 cm for example, the LDV measures a velocity equal to approximately 300 μm/s, following
Fig. 15.4 Color-map measurement of the sound field
Fig. 15.5 LDV point measurements at two points in the field (top) and (bottom) the input waveform
Fig. 15.6. If l is taken as FWHM × 1 cm = 0.71 cm, the RMS pressure averaged along this path following Eq. (15.1) is 10.81 Pa. In a final demonstration, we replaced the PMUT with a bulk piezoelectric ultrasound transducer (StemInc SMATR300H19XDA) capable of generating large surface velocity (~7 m/s amplitude) at 300 kHz and therefore high intensity ultrasound. This transducer has an active diameter of approximately 7 mm. Time-domain waveforms are captured at several distances along the axis using the LDV setup described in Fig. 15.1. In a separate measurement, the same waveforms are captured using a broadband hydrophone model HGL-0400 with a bandwidth of 20 MHz. The hydrophone capture at three different distances from the source is presented as the top row of images in Fig. 15.7. The images presented are single periods of a tone burst waveform comprising many cycles. The distortion of the waveform increases as the beam propagates. This accumulated distortion or wave steepening is predicted by nonlinear acoustics and is the result of positive pressure regions of the wave traveling with higher sound speed than rarefaction regions of the sound wave [4]. Because the hydrophone data and the LDV data are on different scales, to compare them we normalize the data to a 0–1 scale. The bottom row of Fig. 15.7 presents the comparisons. The LDV-measured waveform agrees well with the hydrophone measurement at 6 mm, but less well at distances further from the source as the wave steepens. While the LDV measurements do indeed show some degree of wave steepening, the technique does not capture the waveforms as accurately as the hydrophone. This is likely due to the finite diameter of the laser beam which results in a spatially averaged pressure measurement along the diameter. For the measurements presented, the laser beam is focused to an approximately 25 μm diameter waist with an approximate numerical aperture of 0.052. Averaging a sound wave across space results in low pass filtering the waveform
Fig. 15.6 Measured velocity vs propagation distance
Fig. 15.7 Waveform measurements using the hydrophone and LDV system. (a–c) are pressure data from the hydrophone while (d–f) are normalized data from the hydrophone and LDV system
temporally. The wave steepening observed in the top row of Fig. 15.7 results in increased signal power in the higher harmonics of the 300 kHz tone. The hydrophone faithfully captures these harmonics owing to its broadband response while the LDV technique does not due to low pass filtering.
15.2 Summary
The measurements presented showcase a unique advantage of the LDV sound field measurement technique, namely the ability to conveniently scan a sound field with high resolution in a completely noninvasive fashion and without scattering off transducer surfaces. Further, few instruments exist for measuring ultrasound in air beyond 100 kHz, adding to the technological significance of the technique. A limitation of the approach is that pressure measurements are spatially averaged along the length of the laser beam passing through the sound field. In addition, owing to the finite laser beam diameter, pressure measurements are averaged along the beam's diameter as well, which results in low pass filtering of the pressure measurements.
References 1. Jia, X., Quentin, G., Lassoued, M.: Optical heterodyne detection of pulsed ultrasonic pressures. IEEE Trans. Ultrason. Ferroelectr. Freq. Control. 40(1), 67–69 (1993) 2. Bahr, L., Lerch, R.: Sound pressure measurement utilizing Light Refractive Tomography. In: IEEE Ultrasonics Symposium. Beijing, China (2008) 3. Seo, Y., Kim, D., Hall, N.A.: High-temperature piezoelectric pressure sensors for hypersonic flow measurements. In: 20th International Conference on Solid-State Sensors. Actuators and Microsystems & Eurosensors, Berlin (2019) 4. Hamilton, M.F., Blackstock, D.T.: Nonlinear acoustics, pp. 233–261. Academic, San Diego (1998)
Chapter 16
Modal Identification of a Turbine Blade with a Curved Surface Under Random Excitation by a 3D CSLDV System and the Extended Demodulation Method Ke Yuan and Weidong Zhu
Abstract This study develops a novel general-purpose 3D continuously scanning laser Doppler vibrometer (CSLDV) system to measure 3D full-field vibration of a turbine blade with a curved surface under random excitation and proposes an operational modal analysis (OMA) method to identify its modal parameters. The 3D CSLDV system developed in this study contains three CSLDVs, an external controller, and a profile scanner. A 3D zig-zag scan trajectory is designed on the blade surface based on profile scanning, and scan angles of mirrors in CSLDVs are adjusted based on relations among their laser beams to ensure that three laser spots can continuously and synchronously move along the same scan trajectory. The OMA method referred to as the extended demodulation method is used to process the measured response of the blade under random excitation to obtain its damped natural frequencies and 3D full-field undamped mode shapes. Comparison between the first six modal parameters from the proposed 3D CSLDV system and those from a commercial 3D scanning laser Doppler vibrometer (SLDV) system is made in this study. Errors between the first six damped natural frequencies from 3D CSLDV measurement and those from 3D SLDV measurement are less than 1.5%, and modal assurance criterion values between the first six undamped mode shapes from 3D CSLDV measurement and corresponding damped mode shapes from 3D SLDV measurement are larger than 95%. It took the 3D SLDV system about 900 seconds to scan 85 measurement points in the experiment, and the 3D CSLDV system 115.5 seconds to scan 132,000 points, which means that the 3D CSLDV system can measure much more points in much less time than the 3D SLDV system in 3D full-field vibration measurement. Keywords 3D continuously scanning laser Doppler vibrometer system · Extended demodulation method · Turbine blade · Curved surface · 3D full-field vibration · Modal parameter identification
16.1 Introduction Three-dimensional (3D) full-field vibration measurement plays an important role in structural analysis, especially for structures with curved and complex surfaces, such as turbine blades, vehicle bodies, and aircraft wings. 3D vibration can provide more information and be used to locate defects on more complex structures than single-axis vibration and can improve the accuracy of their structural health monitoring [1]. 3D full-field vibration can also be used to identify dynamic characteristics of a complex structure and update its finite element (FE) model during structural analysis and product design where vibration must be determined in all its components [2]. A triaxial accelerometer is a common device in modal tests to measure 3D vibration of a structure. However, it can lead to mass loading as a contact-type sensor, which can be amplified when multiple triaxial accelerometers are needed in modal tests of light-weight structures [3]. As a non-contact sensor, a laser Doppler vibrometer (LDV) can be used to measure vibration of a structure by avoiding the mass loading problem. A conventional LDV can only be used to capture the velocity response of a fixed point on the test structure along a single axis that is parallel to its laser beam. It can be extended and automated as a scanning laser Doppler vibrometer (SLDV) by adding a pair of orthogonally mounted galvanometer mirrors. Some investigations focused on extending the LDV and SLDV from one dimension to three dimensions. Typical ideas include assembling an LDV or SLDV on an industrial robot arm or a multi-axis positioning frame [4, 5], sequentially moving an LDV or SLDV to three independent locations [6–8], and placing three LDVs at three locations and calibrating angles among their laser beams [9]. Commercial 3D SLDV systems, such as Polytec PSV-400 and PSV-500, were developed
K. Yuan · W. Zhu, Department of Mechanical Engineering, University of Maryland, Baltimore County, Baltimore, MD, USA. e-mail: [email protected]; [email protected]
by calibrating angles among laser beams from three SLDVs. These 3D SLDV systems have been applied in many areas, such as 3D vibration behavior analysis of power tools under operating conditions for noise reduction purposes [10], longitudinal vibration measurement of a beam for damage detection [11], modal analysis of a whole vehicle body [1], FE model validation of a sandwich panel [12] and a three-bladed wind turbine assembly [13], and 3D dynamic strain field measurement of a fan blade [14]. However, it usually takes a long time to obtain high spatial resolution in 3D SLDV measurement when the test area is large and the measurement grid is dense [1], because laser spots must stay at one measurement point for enough time before they are moved to the next one. An improved method is to continuously move the laser spot over the surface of a structure, which was developed as a continuously scanning laser Doppler vibrometer (CSLDV) [15, 16]. The CSLDV can be used to measure transverse vibration of a structure through one-dimensional (1D) or two-dimensional (2D) scan trajectories [16–18]. Dense vibration measured from the CSLDV can be used in damage detection of the structure [19–22]. There is still a scarcity of studies on 3D vibration measurement of a structure using the CSLDV. Weekes and Ewins [23] developed a method to measure 3D vibration of a turbine blade by sequentially placing the CSLDV at three independent locations. The measurement system combined the single CSLDV with a Microsoft Kinect and captured 3D operational deflection shapes (ODSs) of the blade under multifrequency excitation. The main challenge of using the single CSLDV for 3D ODS measurement is to ensure that the scan trajectory is the same when it is sequentially placed at the three locations. Using the single CSLDV to measure 3D vibration cannot deal with transient response of a structure under random excitation, which is the most practical excitation method, to obtain its mode shapes, since acquisition of three velocity components is not simultaneous. In addition, it takes much time to move the laser head three times during measurement. Recently, a novel 3D CSLDV system that contains three CSLDVs and an external controller was developed to address the above challenges [24–26]. The system can focus three laser spots at one location through calibration and continuously and synchronously move them along the same scan trajectory. The system was experimentally validated by 3D vibration measurements of a beam and a plate, and ODS and mode shape results showed good agreements with those from a commercial 3D SLDV system and FE models. However, the system function was limited to scanning structures with planar surfaces, such as straight beams and flat plates. In the real world, structures can have curved and complex surfaces. Yuan and Zhu [27] addressed the above challenge and developed a novel general-purpose 3D CSLDV system to measure ODSs of a turbine blade under sinusoidal excitation. ODSs obtained by the system were compared with those from a commercial SLDV system to validate its accuracy. Further investigations are needed to estimate modal parameters, such as damped natural frequencies (DNFs) and 3D undamped mode shapes (UMSs), of a structure with a curved surface under random excitation, which is the most realistic excitation. 
It can be expected that response of structures under random excitation has much more noise than that under sinusoidal excitation, meaning that it is more challenging to estimate mode shapes than ODSs. Several methods including the demodulation method, polynomial method, and lifting method have been developed and improved to process signals from CSLDV measurements. By using these methods, modal parameters, including natural frequencies, mode shapes, and damping ratios, of structures under sinusoidal [28], impact [29, 30], multi-sine [31, 32], and ambient excitations [18, 25, 33, 34] can be identified. Note that the conventional demodulation method cannot be used to process the vibration response of a structure under random excitation since a known excitation frequency is needed to conduct the demodulation. An improved demodulation method was developed to estimate modal parameters of a rotating fan blade under ambient excitation using a tracking CSLDV through 1D and 2D scan trajectories [18, 34], but the studies focused on transverse mode-shape identification. Yuan and Zhu [25] developed the extended demodulation method to identify damped natural frequencies and undamped 3D mode shapes from 3D CSLDV measurement of a beam under white-noise excitation. However, the beam in the study has a planar surface and the scan trajectory is a straight line. One still needs to study whether 3D modal parameters of a turbine blade with a curved surface under random excitation can be identified from 3D CSLDV measurement. This study aims to develop a novel general-purpose 3D CSLDV system that can measure dense 3D full-field vibration of a turbine blade with a curved surface under random excitation in a non-contact and fast way and proposes an operational modal analysis (OMA) method to identify its modal parameters. The 3D CSLDV system developed in this study contains three CSLDVs, an external controller, and a profile scanner. A 3D zig-zag scan trajectory is designed on the blade surface based on profile scanning, and scan angles of mirrors in CSLDVs are adjusted based on relations among their laser beams to ensure that three laser spots can continuously and synchronously move along the same scan trajectory. The OMA method referred to as the extended demodulation method is used to process the measured response of the blade under random excitation to obtain its damped natural frequencies and 3D full-field undamped mode shapes. Comparison between the first six modal parameters from the proposed 3D CSLDV system and those from a commercial 3D SLDV system is made in this study. Errors between the first six damped natural frequencies from 3D CSLDV measurement and those from 3D SLDV measurement are less than 1.5%, and modal assurance criterion (MAC) values between the first six undamped mode shapes from 3D CSLDV measurement and corresponding damped mode shapes from 3D SLDV measurement are larger than
95%. The test time in 3D CSLDV measurement is less than 1/8 of that in 3D SLDV measurement, while the number of measurement points in 3D CSLDV measurement is about 1500 times that in 3D SLDV measurement. There are three major advancements in this study. First, the 3D CSLDV system developed in this study is a general-purpose system, which can conduct 3D vibration measurement of a structure with an arbitrarily curved surface. Second, the 3D CSLDV system that contains three laser heads can focus three laser spots at the same point via the calibration method developed here and continuously and synchronously move the three laser spots along the same scan trajectory. It can deal with the transient response of a structure and conduct 3D real-time vibration response monitoring. Third, the vibration response of a structure with a curved surface under random excitation measured by the 3D CSLDV system is processed via the extended demodulation method developed here to estimate its 3D full-field mode shapes.
16.2 3D CSLDV System for Structures with Curved Surfaces

As shown in Fig. 16.1, the three CSLDVs in the 3D CSLDV system can be referred to as top, left, and right CSLDVs based on their positions during measurement. Spatial positions of the three CSLDVs with respect to a reference object can be calculated through a calibration process. A 3D scan trajectory on a curved surface of a test structure can be predesigned based on calibration and profile scanning results, and scan angles of mirrors in the CSLDVs can be adjusted based on them to ensure that the three laser spots can continuously and synchronously move along the same scan trajectory. The 3D CSLDV system developed in this study is extended from the commercial Polytec PSV-500-3D system by adding an external controller. The Polytec PSV-500-3D system has no continuously scanning function, but each SLDV in it has an interface connector that can be connected to an external controller. It also has an internal profile scanner that can capture 3D coordinates of points on the test structure. A dSPACE MicroLabBox and the ControlDesk software were used as the external controller to generate a series of signals that make the six scan mirrors rotate continuously in the configured 3D CSLDV system. The 3D CSLDV system and the Polytec PSV-500-3D system can be easily switched between by connecting or disconnecting the MicroLabBox.

Fig. 16.1 Schematic of the 3D CSLDV system proposed in this study for measuring 3D full-field vibration of a structure with a curved surface

Fig. 16.2 (a) Reference object used for calibration of the 3D CSLDV system and (b) the geometrical model of X and Y mirrors in a CSLDV

As shown in Fig. 16.2a, a reference object with a plane and two poles, which has some known accurate coordinates, is used as the measurement coordinate system (MCS) to calibrate the 3D CSLDV system. There are twenty-one points on the plane of the reference object and eight points on its two poles. Two orthogonal scan mirrors in a CSLDV, which can be referred to as X and Y mirrors, are used to build a vibrometer coordinate system (VCS) to describe the position of the CSLDV during measurement in this work. One can see from Fig. 16.2b that the rotating center of the X mirror $o'$ is also the origin of the VCS $o'\text{-}x'y'z'$, and the rotating center of the Y mirror $o''$ is on the $o'z'$ axis; so the X mirror can rotate about the $x'$ axis, and the Y mirror can rotate about an axis through $o''$ that is parallel to the $y'$ axis. Variables $\alpha$ and $\beta$, which can be controlled by inputting voltages to the X and Y mirrors through the external controller, are rotational angles of the X and Y mirrors from their initial positions, respectively. The distance between centers of the two scan mirrors, $|o'o''| = d$, is a known parameter of the CSLDV. Note that points $P'$ and $o''$ on the laser beam path are incident points on the X and Y mirrors, respectively. The point $o''$ is imaged as the point $P''$ on the plane $y'o'z'$, which means that $|o'P''| = |o'o''| = d$. Therefore, coordinates of the points $P'$ and $P''$ in the VCS can be written as

$\mathbf{P}'_{\mathrm{VCS}} = \left[-d\tan\beta,\ 0,\ 0\right]^{\mathrm{T}}$  (16.1)

$\mathbf{P}''_{\mathrm{VCS}} = \left[0,\ -d\cos\alpha,\ d\sin\alpha\right]^{\mathrm{T}}$  (16.2)

respectively, where the superscript T denotes the transpose of a matrix. The unit vector of the laser beam path $P''P'$ can be derived as

$\mathbf{e} = \dfrac{\overrightarrow{P''P'}}{\left|\overrightarrow{P''P'}\right|} = \dfrac{1}{d\sqrt{\tan^{2}\beta + 1}}\left[d\tan\beta,\ d\cos\alpha,\ d\sin\alpha\right]^{\mathrm{T}} = \left[\sin\beta,\ \cos\alpha\cos\beta,\ \sin\alpha\cos\beta\right]^{\mathrm{T}}$  (16.3)

Based on the geometrical model shown in Fig. 16.2b, coordinates of a point $P$ in the VCS can be expressed as

$\mathbf{P}_{\mathrm{VCS}} = \mathbf{P}'_{\mathrm{VCS}} - r\mathbf{e} = \left[-d\tan\beta - r\sin\beta,\ -r\cos\alpha\cos\beta,\ -r\sin\alpha\cos\beta\right]^{\mathrm{T}}$  (16.4)

where $r$ is the distance from the point $P$ to the incident point $P'$ of the laser path on the X mirror. It can be assumed that coordinates of a selected calibration point $P$ in the MCS are $\mathbf{P}_{\mathrm{MCS}} = [x, y, z]^{\mathrm{T}}$, whose exact values can be easily obtained from the reference object. The relation between $\mathbf{P}_{\mathrm{MCS}}$ and $\mathbf{P}_{\mathrm{VCS}}$ can be written as

$\mathbf{P}_{\mathrm{MCS}} = \mathbf{T} + \mathbf{R}\,\mathbf{P}_{\mathrm{VCS}}$  (16.5)

where $\mathbf{T} = \left[x_{o'},\ y_{o'},\ z_{o'}\right]^{\mathrm{T}}$ is the translation vector that denotes coordinates of the origin $o'$ in the MCS, and $\mathbf{R}$ is the direction cosine matrix from the MCS to the VCS. By using the method proposed in Refs. [24] and [35], which includes procedures for solving an over-determined nonlinear problem and an optimization problem, the $\mathbf{T}$ and $\mathbf{R}$ matrices for all three CSLDVs can be calculated. An efficient bisection method was developed to design a straight-line scan trajectory for a beam in Ref. [25] and to design a 2D zig-zag scan trajectory for a flat plate in Ref. [26]. However, the bisection method assumes that all the measurement points are in a plane, which is not suitable for 3D CSLDV measurements of structures with curved surfaces. In this study, a more general method is developed to address this challenge and design scan trajectories on both planar and curved surfaces. There are three VCSs and one MCS in the proposed 3D CSLDV system. Based on the fact that coordinates of a measurement point $P^{k}$ on the surface of a test structure are constant in the MCS for the three CSLDVs, positional relations among the three CSLDVs for the point can be established by

$\mathbf{P}^{k}_{\mathrm{MCS}} = \mathbf{T}_{1} + \mathbf{R}_{1}\mathbf{P}^{k}_{\mathrm{VCS\_1}} = \mathbf{T}_{2} + \mathbf{R}_{2}\mathbf{P}^{k}_{\mathrm{VCS\_2}} = \mathbf{T}_{3} + \mathbf{R}_{3}\mathbf{P}^{k}_{\mathrm{VCS\_3}}$  (16.6)

where subscripts 1, 2, and 3 denote the top, left, and right CSLDVs, respectively, and the superscript $k$ denotes the $k$-th measurement point on the scan trajectory. Hence, coordinates of the point $P^{k}$ in the VCSs of the top, left, and right CSLDVs can be obtained by
$\mathbf{P}^{k}_{\mathrm{VCS\_1}} = \mathbf{R}_{1}^{-1}\left(\mathbf{P}^{k}_{\mathrm{MCS}} - \mathbf{T}_{1}\right)$  (16.7a)

$\mathbf{P}^{k}_{\mathrm{VCS\_2}} = \mathbf{R}_{2}^{-1}\left(\mathbf{P}^{k}_{\mathrm{MCS}} - \mathbf{T}_{2}\right)$  (16.7b)

$\mathbf{P}^{k}_{\mathrm{VCS\_3}} = \mathbf{R}_{3}^{-1}\left(\mathbf{P}^{k}_{\mathrm{MCS}} - \mathbf{T}_{3}\right)$  (16.7c)

respectively. Therefore, rotational angles of the X and Y mirrors of the top, left, and right CSLDVs for the point $P^{k}$ can be obtained by

$\alpha_{1}^{k} = \arctan\left(z^{k}_{\mathrm{VCS\_1}}/y^{k}_{\mathrm{VCS\_1}}\right), \quad \beta_{1}^{k} = \arctan\left[x^{k}_{\mathrm{VCS\_1}}\Big/\left(y^{k}_{\mathrm{VCS\_1}}/\cos\alpha_{1}^{k} - d\right)\right]$  (16.8a)

$\alpha_{2}^{k} = \arctan\left(z^{k}_{\mathrm{VCS\_2}}/y^{k}_{\mathrm{VCS\_2}}\right), \quad \beta_{2}^{k} = \arctan\left[x^{k}_{\mathrm{VCS\_2}}\Big/\left(y^{k}_{\mathrm{VCS\_2}}/\cos\alpha_{2}^{k} - d\right)\right]$  (16.8b)

$\alpha_{3}^{k} = \arctan\left(z^{k}_{\mathrm{VCS\_3}}/y^{k}_{\mathrm{VCS\_3}}\right), \quad \beta_{3}^{k} = \arctan\left[x^{k}_{\mathrm{VCS\_3}}\Big/\left(y^{k}_{\mathrm{VCS\_3}}/\cos\alpha_{3}^{k} - d\right)\right]$  (16.8c)
respectively. As discussed above, the key to calculating rotational angles of the X and Y mirrors in the three CSLDVs is to obtain exact coordinates of each measurement point on the scan trajectory in the MCS. Although a device like a 3D scanner can be used to easily obtain the profile of a structure, the Polytec PSV-500-3D system is used in this work to scan the test structure and obtain 3D coordinates of points on its surface, which can reduce possible errors from interaction between the scanner and the 3D CSLDV system. Note that a linear interpolation is used to process the obtained surface profile, since the Polytec PSV-500-3D system can only move laser spots in a step-wise mode along a predefined grid that is not dense enough to generate signals for CSLDV measurement. With the methodology described above, the three laser spots from the three CSLDVs can continuously and synchronously move along the same scan trajectory by inputting corresponding rotational angles to their scan mirrors. Vibration components of the point $P^{k}$ in the x, y, and z directions of the MCS can then be obtained by

$\left[V_{x}^{k},\ V_{y}^{k},\ V_{z}^{k}\right]^{\mathrm{T}} = \left(\left[\mathbf{R}_{1}\mathbf{e}_{1}^{k},\ \mathbf{R}_{2}\mathbf{e}_{2}^{k},\ \mathbf{R}_{3}\mathbf{e}_{3}^{k}\right]^{\mathrm{T}}\right)^{-1}\left[V_{1}^{k},\ V_{2}^{k},\ V_{3}^{k}\right]^{\mathrm{T}}$  (16.9)
where $V_{1}^{k}$, $V_{2}^{k}$, and $V_{3}^{k}$ are measured velocities of the point $P^{k}$ by using the top, left, and right CSLDVs, respectively. The process described by Eq. (16.9) can be repeated for each point on the prescribed scan trajectory. Finally, 3D vibration of the test structure in the MCS can be obtained by the 3D CSLDV system. The obtained response can not only be directly used to monitor the real-time 3D vibration of the test structure but also be processed to identify its damped natural frequencies and 3D full-field undamped mode shapes. The response of a linear, time-invariant, viscously damped structure under external excitation can be obtained by solving its governing partial differential equation [25, 36]:

$u(x,t) = \displaystyle\sum_{i=1}^{N} \phi_{i}(x)\,\phi_{i}\!\left(x_{p}\right)\left[A_{i}(t)\cos\left(\omega_{d,i}t\right) + B_{i}(t)\sin\left(\omega_{d,i}t\right) + C_{i}(t)\right]$  (16.10)

where $x$ denotes the spatial position of a point on the structure, $x_{p}$ denotes the position of a concentrated force that is assumed to be white-noise excitation in this work, $t$ is time, $N$ is the total number of points on the measurement grid, $\phi_{i}$ is the $i$-th mass-normalized eigenfunction of the associated undamped structure, $A_{i}(t)$, $B_{i}(t)$, and $C_{i}(t)$ are arbitrary functions of time related to the concentrated force, and $\omega_{d,i}$ is the $i$-th damped natural frequency of the structure, which can be obtained by applying the fast Fourier transform (FFT) to the response. By applying a bandpass filter that allows only $\omega_{d,i}$ to pass through it, the response $u$ in Eq. (16.10) can be expressed as

$u_{i}(x,t) = \Phi_{i}(x)\cos\left(\omega_{d,i}t - \varphi\right) = \Phi_{I,i}(x)\cos\left(\omega_{d,i}t\right) + \Phi_{Q,i}(x)\sin\left(\omega_{d,i}t\right)$  (16.11)
Fig. 16.3 (a) Arrangement of the 3D CSLDV system with respect to the test blade and (b) the test blade with clamped-free boundary conditions and its position in the MCS

Fig. 16.4 (a) Profile scanning results of the turbine blade and its 3D view in the MCS and (b) a 3D zig-zag scan trajectory designed for 3D CSLDV measurement of the turbine blade

where $u_{i}(x,t)$ is the filtered response of $u(x,t)$, $\Phi_{i}(x)$ are responses at measurement points along the scan trajectory, which have two components, namely the in-phase component $\Phi_{I,i}(x)$ and the quadrature component $\Phi_{Q,i}(x)$, and $\varphi$ is the phase variable. To obtain in-phase and quadrature components of $\Phi_{i}(x)$, multiplying $u_{i}(x,t)$ in Eq. (16.11) by $\cos\left(\omega_{d,i}t\right)$ and $\sin\left(\omega_{d,i}t\right)$ yields

$u_{i}(x,t)\cos\left(\omega_{d,i}t\right) = 0.5\,\Phi_{I,i}(x) + 0.5\,\Phi_{I,i}(x)\cos\left(2\omega_{d,i}t\right) + 0.5\,\Phi_{Q,i}(x)\sin\left(2\omega_{d,i}t\right)$  (16.12)

$u_{i}(x,t)\sin\left(\omega_{d,i}t\right) = 0.5\,\Phi_{Q,i}(x) + 0.5\,\Phi_{I,i}(x)\sin\left(2\omega_{d,i}t\right) - 0.5\,\Phi_{Q,i}(x)\cos\left(2\omega_{d,i}t\right)$  (16.13)

respectively. A low-pass filter can then be used to eliminate the $\sin\left(2\omega_{d,i}t\right)$ and $\cos\left(2\omega_{d,i}t\right)$ terms in Eqs. (16.12) and (16.13), and the corresponding results can be multiplied by a scale factor of two to obtain $\Phi_{I,i}(x)$ and $\Phi_{Q,i}(x)$, respectively. By applying the above procedure to each scan line in the designed 3D zig-zag scan trajectory, the extended demodulation method in [25] is extended from one dimension to three dimensions and can be used to identify damped natural frequencies and 3D full-field undamped mode shapes of a turbine blade with a curved surface under random excitation.
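To make the processing steps in Eqs. (16.10)–(16.13) concrete, the sketch below outlines one plausible implementation of the demodulation of a single mode from a CSLDV velocity record; it is not the authors' code, and the filter type, orders, cutoffs, and names are assumptions chosen only for illustration.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def demodulate_mode(u, fs, f_di, bp_halfwidth=5.0, lp_cutoff=2.0, order=4):
    """Estimate in-phase and quadrature components of one mode from a CSLDV
    velocity record u sampled at fs (Hz), given a damped natural frequency
    f_di (Hz) identified from the FFT of u. Returns (Phi_I, Phi_Q) sampled at
    the same time instants, i.e., along the scan trajectory."""
    t = np.arange(len(u)) / fs
    # Band-pass filter around the identified damped natural frequency (Eq. 16.11).
    b, a = butter(order, [(f_di - bp_halfwidth) / (fs / 2),
                          (f_di + bp_halfwidth) / (fs / 2)], btype="band")
    u_i = filtfilt(b, a, u)
    # Multiply by cosine and sine at f_di (Eqs. 16.12 and 16.13).
    prod_c = u_i * np.cos(2 * np.pi * f_di * t)
    prod_s = u_i * np.sin(2 * np.pi * f_di * t)
    # Low-pass filter to remove the 2*f_di terms, then scale by two.
    bl, al = butter(order, lp_cutoff / (fs / 2), btype="low")
    Phi_I = 2 * filtfilt(bl, al, prod_c)
    Phi_Q = 2 * filtfilt(bl, al, prod_s)
    return Phi_I, Phi_Q
```

In a 3D measurement, this routine would be applied separately to each of the three velocity components obtained from Eq. (16.9) and to each scan line of the zig-zag trajectory.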
Fig. 16.5 Transformation from velocities measured by top, left, and right CSLDVs to those in x, y, and z directions in the MCS with white-noise excitation

Table 16.1 Comparisons between test time and the number of measurement points of 3D CSLDV measurement and those of 3D SLDV measurement

Measurement method      Test time (s)   Number of measurement points
3D CSLDV measurement    115.5           132,000
3D SLDV measurement     900             85
16.3 Experimental Investigation

A turbine blade twisted from a trapezoidal plate was tested in this work. The original trapezoidal plate had two bases of 26 mm and 42 mm, an altitude of 173.9 mm, and a thickness of 3.5 mm. The blade with a curved surface was clamped on a bench vice at one end and excited by an MB Dynamics MODAL-50 shaker at its top end through a stinger, as shown in Fig. 16.3. Note that the arrangement of the three laser heads is close to a plane instead of a cone, as shown in Fig. 16.3a, since modes of interest in this study are mostly bending or torsional modes, not longitudinal modes. A gray reflective tape was attached to the surface of the blade to maximize back-scattering of laser light. In the experiment, the MCS was set parallel to the clamped end of the blade (Fig. 16.3b), so that the z direction represents the out-of-plane component of vibration of the blade, and the x and y directions represent its in-plane components. A periodic chirp signal with a frequency range of 0 to 3000 Hz and a duration of 900 s was used in the modal test of the blade by the 3D SLDV system, and a white-noise signal with a frequency range of 0 to 3000 Hz and a duration of 115.5 s was used as random excitation in its 3D CSLDV measurement. As discussed in Sect. 16.2, a profile scanning procedure was conducted prior to the scan trajectory design. A total of 85 scanning points were arranged as a 17 × 5 grid; these were also used as measurement points in 3D SLDV measurement and as reference points in comparison with CSLDV measurement results. The 3D view of the blade profile is shown in Fig. 16.4a. Based on the profile scanning data and the linear interpolation method, a 3D zig-zag scan trajectory that includes 33 scan lines was designed for 3D CSLDV measurement of the blade, as shown in Fig. 16.4b, and coordinates of each point on the scan trajectory were calculated in the MCS. The zig-zag scan trajectory was used since this study focuses on using an extended demodulation method to process the response of a structure under random excitation, and the zig-zag scan trajectory is more easily processed by the demodulation method than other scan trajectories. A scanning period T can be defined as a cycle in which laser spots move along a scan line from the start point to the end point and back. The scanning frequency is fsca = 1/T. In the experiment, fsca = 1 Hz, and laser spots were designed to move along each scan line for 3.5 periods
Fig. 16.6 Frequency spectrum of the turbine blade shown as black solid line and its identified natural frequencies shown as red dashed lines by using (a) the 3D SLDV system and (b) the 3D CSLDV system
to ensure continuity of the whole zig-zag scan trajectory and obtain enough response data to conduct a three-time average, which is the same as that for 3D SLDV measurement. Therefore, the total time of scanning the whole blade surface in 3D CSLDV measurement is t = 33 × 3.5 = 115.5 s. As discussed in Sect. 16.2, vibration of the turbine blade in the three VCSs can be directly obtained from its 3D CSLDV measurement, while vibration components in the MCS can be obtained through the velocity transformation using Eq. (16.9). The original and calculated vibration responses from 3D CSLDV measurement of the blade under white-noise excitation are shown in Fig. 16.5, where horizontal axes represent time and vertical axes represent velocities. In each figure, the three left subplots show original velocities from the three CSLDVs, and the three right subplots show calculated velocities along the three axes of the MCS. In the experiment, laser spots were moved to scan the surface of the blade from its upper end to its lower end, so the time-velocity series shown in Fig. 16.5 represent responses from the free end to the clamped end of the blade. One can see that the original time-velocity series from the three CSLDVs have similar shapes to each other and to the calculated velocities in the z direction in the MCS. This is the case because each 1D CSLDV is designed to measure the single-axis velocity in the direction of its laser beam, which is close to the z direction in the experiment. It can also be found that velocities around the lower end of the blade are much smaller than those in other areas, and the velocity in the y direction has a much smaller amplitude than those in the x and z directions, which is expected due to the clamped-free boundary conditions of the blade in the experiment. The sampling frequency of 3D CSLDV measurement in the experiment is fsa = 8000 Hz, which means that 8000 data points are sampled per second. As a result, the total number of 3D CSLDV measurement points is k = 0.5 × 8000 × 33 = 132,000.
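The scan parameters above fully determine the test time and point count; the short check below reproduces the printed arithmetic. The reading of the 0.5 s per scan line as half of one scanning period (a single one-way sweep) is an assumption, not stated explicitly in the text.

```python
# Back-of-envelope check of the scan timing and point count reported above.
f_sca = 1.0            # scanning frequency (Hz), so one period T = 1 s
periods_per_line = 3.5 # periods spent on each scan line
n_lines = 33           # scan lines in the zig-zag trajectory
f_sa = 8000            # sampling frequency (Hz)

total_time = n_lines * periods_per_line / f_sca   # 115.5 s
points_used = 0.5 * f_sa * n_lines                # 132,000 points (0.5 s per line)
print(total_time, int(points_used))
```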
Fig. 16.7 (a) First 3D full-field damped mode shapes of the turbine blade from 3D SLDV measurement and (b) corresponding undamped mode shapes from 3D CSLDV measurement
Fig. 16.8 (a) Second 3D full-field damped mode shapes of the turbine blade from 3D SLDV measurement and (b) corresponding undamped mode shapes from 3D CSLDV measurement
Comparisons between test time and the number of measurement points of 3D CSLDV measurement and those of 3D SLDV measurement are shown in Table 16.1. It can be seen that the number of measurement points in 3D CSLDV measurement is about 1500 times that in 3D SLDV measurement, while the test time in 3D CSLDV measurement is less than 1/8 of that in 3D SLDV measurement, meaning that the 3D CSLDV system can measure many more points in much less time than the 3D SLDV system in full-field vibration measurement. Frequency spectra of the turbine blade shown as black solid lines from 3D SLDV and 3D CSLDV measurements are shown in Fig. 16.6a and b, respectively, where the first six damped natural frequencies of the blade shown as red dashed lines are identified in the frequency range from 0 to 3000 Hz. Note that amplitudes of the frequency spectra are different since different excitation signals are used in SLDV and CSLDV measurements. Comparison between the first six damped natural frequencies of the blade from 3D SLDV measurement and those from 3D CSLDV measurement shows that the maximum error between the first six damped natural frequencies identified by the two systems is 1.5%. Results of the first six 3D full-field undamped mode shapes of the clamped-free turbine blade from 3D CSLDV measurement and corresponding damped mode shapes from 3D SLDV measurement are shown in Figs. 16.7, 16.8, 16.9, 16.10, 16.11 and 16.12, where left subplots represent damped mode shapes from SLDV measurement and right subplots represent undamped mode shapes from CSLDV measurement. Mode-shape components along the three directions of each mode are normalized by the maximum value of all the components in the three directions. In each subplot, mode shapes in the x, y, and z directions defined by the MCS are shown from left to right. Damped mode shapes of the turbine blade from SLDV
Fig. 16.9 (a) Third 3D full-field damped mode shapes of the turbine blade from 3D SLDV measurement and (b) corresponding undamped mode shapes from 3D CSLDV measurement
Fig. 16.10 (a) Fourth 3D full-field damped mode shapes of the turbine blade from 3D SLDV measurement and (b) corresponding undamped mode shapes from 3D CSLDV measurement
measurement can be compared with their corresponding undamped mode shapes from CSLDV measurement since it has relatively small damping ratios. One can see that the first, fourth, and sixth mode shapes from CSLDV measurement are bending modes, while the second, third, and fifth mode shapes are torsional modes, which have similar patterns to those from corresponding SLDV measurement. Note that mode shapes in the y direction are less smooth than those in x and z directions for both 3D SLDV and 3D CSLDV measurements. One possible reason is that it is harder to excite longitudinal vibration of a cantilever structure than its transverse vibration, which leads to smaller signal-to-noise ratios in the y direction than those in x and z directions; this can also be validated from the 3D real-time response shown in Fig. 16.5. Another possible reason is that arrangement of three laser heads is close to a plane instead of a cone, which leads to a slightly smaller response amplitude along the y direction. To further check correlations between 3D full-field mode shapes of the blade from 3D CSLDV and 3D SLDV measurements, their MAC values [37] are calculated, as shown in Table 16.2. One can see that MAC values between 3D full-field undamped mode shapes from 3D CSLDV measurement and corresponding damped mode shapes from 3D SLDV measurement are larger than 95% for all the six modes, showing high correlations between results from the proposed 3D CSLDV system and the commercial 3D SLDV system. It can be safely concluded that the 3D CSLDV system proposed in this study has the same accuracy as that of the commercial 3D SLDV system.
Fig. 16.11 (a) Fifth 3D full-field damped mode shapes of the turbine blade from 3D SLDV measurement and (b) corresponding undamped mode shapes from 3D CSLDV measurement
Fig. 16.12 (a) Sixth 3D full-field damped mode shapes of the turbine blade from 3D SLDV measurement and (b) corresponding undamped mode shapes from 3D CSLDV measurement

Table 16.2 MAC values between 3D full-field undamped mode shapes of the turbine blade from 3D CSLDV measurement and corresponding damped mode shapes from 3D SLDV measurement

Mode No.       1       2       3       4       5       6
x direction    100%    95.3%   99.5%   96.5%   95.7%   95.6%
y direction    98.7%   98.1%   99.2%   98.2%   99.8%   98.6%
z direction    100%    98.1%   98.5%   99.5%   95%     96.4%
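The MAC values in Table 16.2 follow the standard definition in Ref. [37], applied direction by direction to mode-shape vectors sampled at (or interpolated to) common points. A minimal sketch of that formula is given below; the interpolation to a common grid and the variable names are assumptions, and the arrays shown are synthetic placeholders, not measured data.

```python
import numpy as np

def mac(phi_a, phi_b):
    """Modal assurance criterion between two mode-shape vectors sampled at the
    same points: |phi_a^H phi_b|^2 / ((phi_a^H phi_a)(phi_b^H phi_b))."""
    phi_a = np.asarray(phi_a).ravel()
    phi_b = np.asarray(phi_b).ravel()
    num = np.abs(np.vdot(phi_a, phi_b)) ** 2
    den = np.vdot(phi_a, phi_a).real * np.vdot(phi_b, phi_b).real
    return num / den

# Synthetic example: an x-direction CSLDV mode shape interpolated to the
# 85-point SLDV grid, compared with the SLDV mode shape.
phi_sldv_x = np.random.randn(85)
phi_csldv_x = phi_sldv_x + 0.05 * np.random.randn(85)
print(f"MAC = {mac(phi_csldv_x, phi_sldv_x):.3f}")
```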
16.4 Conclusion

A novel general-purpose 3D CSLDV system that contains three CSLDVs, an external controller, and a profile scanner is developed to measure 3D full-field vibration of a clamped-free turbine blade with a curved surface under random excitation, and the extended demodulation method is used to process the measured response of the turbine blade to obtain its damped natural frequencies and 3D full-field undamped mode shapes. Calibration among the three VCSs built on the three CSLDVs and an MCS built on a reference object is conducted to obtain their relations. A 3D zig-zag scan trajectory is designed on
the blade surface based on profile scanning, and scan angles of CSLDVs are adjusted based on relations among the VCSs and the MCS to ensure that the three laser spots can continuously and synchronously move along the designed scan trajectory. Comparison between the first six modal parameters from the proposed 3D CSLDV system and those from a commercial 3D SLDV system is made in this study. Errors between the first six damped natural frequencies from 3D CSLDV measurement and those from 3D SLDV measurement are less than 1.5%, and MAC values between the first six undamped mode shapes from 3D CSLDV measurement and corresponding damped mode shapes from 3D SLDV measurement are larger than 95%, indicating that the 3D CSLDV system proposed in this study has the same accuracy as that of the commercial 3D SLDV system. In the experiment, the number of measurement points in 3D CSLDV measurement is about 1500 times that in 3D SLDV measurement, while the test time in 3D CSLDV measurement is less than 1/8 of that in 3D SLDV measurement, meaning that the 3D CSLDV system can measure many more points in much less time than the 3D SLDV system in full-field vibration measurement.

Acknowledgments The authors are grateful for the financial support from the National Science Foundation through Grant No. CMMI-1763024.
References

1. Rothberg, S.J., Allen, M.S., Castellini, P., Di Maio, D., Dirckx, J.J.J., Ewins, D.J., Halkon, B.J., Muyshondt, P., Paone, N., Ryan, T., Steger, H., Tomasini, E.P., Vanlanduit, S., Vignola, J.F.: An international review of laser Doppler vibrometry: Making light work of vibration measurement. Opt. Lasers Eng. 99, 11–22 (2017)
2. Castellini, P., Martarelli, M., Tomasini, E.P.: Laser Doppler vibrometry: Development of advanced solutions answering to technology's needs. Mech. Syst. Signal Process. 20(6), 1265–1285 (2006)
3. La, J., Choi, J., Wang, S., Kim, K., Park, K.: Continuous scanning laser Doppler vibrometer for mode shape analysis. Opt. Eng. 42(3), 730–737 (2003)
4. Margerit, P., Gobin, T., Lebée, A., Caron, J.F.: The robotized laser Doppler vibrometer: On the use of an industrial robot arm to perform 3D full-field velocity measurements. Opt. Lasers Eng. 137, 106363 (2021)
5. O'Malley, P., Woods, T., Judge, J., Vignola, J.: Five-axis scanning laser vibrometry for three-dimensional measurements of non-planar surfaces. Meas. Sci. Technol. 20(11), 115901 (2009)
6. Sels, S., Vanlanduit, S., Bogaerts, B., Penne, R.: Three-dimensional full-field vibration measurements using a handheld single-point laser Doppler vibrometer. Mech. Syst. Signal Process. 126, 427–438 (2019)
7. Kim, D., Song, H., Khalil, H., Lee, J., Wang, S., Park, K.: 3-D vibration measurement using a single laser scanning vibrometer by moving to three different locations. IEEE Trans. Instrum. Meas. 63(8), 2028–2033 (2014)
8. Chen, D.M., Zhu, W.D.: Investigation of three-dimensional vibration measurement by a single scanning laser Doppler vibrometer. J. Sound Vib. 387, 36–52 (2017)
9. Miyashita, T., Fujino, Y.: Development of 3D vibration measurement system using laser Doppler vibrometers. In: Health Monitoring and Smart Nondestructive Evaluation of Structural and Biological Systems V, vol. 6177, pp. 170–179. SPIE (2006)
10. Bendel, K., Fischer, M., Schuessler, M.: Vibrational analysis of power tools using a novel three-dimensional scanning vibrometer. In: Sixth International Conference on Vibration Measurements by Laser Techniques: Advances and Applications, vol. 5503, pp. 177–184. International Society for Optics and Photonics (2004)
11. Xu, W., Zhu, W.D., Smith, S.A., Cao, M.S.: Structural damage detection using slopes of longitudinal vibration shapes. J. Vib. Acoust. 138(3), 034501 (2016)
12. Yuan, K., Zhu, W.: Modeling of welded joints in a pyramidal truss sandwich panel using beam and shell finite elements. J. Vib. Acoust. 143(4), 041002 (2021)
13. Chen, Y., Mendoza, A.S.E., Griffith, D.T.: Experimental and numerical study of high-order complex curvature mode shape and mode coupling on a three-bladed wind turbine assembly. Mech. Syst. Signal Process. 160, 107873 (2021)
14. Vuye, C., Vanlanduit, S., Presezniak, F., Steenackers, G., Guillaume, P.: Optical measurement of the dynamic strain field of a fan blade using a 3D scanning vibrometer. Opt. Lasers Eng. 49(7), 988–997 (2011)
15. Halkon, B.J., Frizzel, S.R., Rothberg, S.J.: Vibration measurements using continuous scanning laser vibrometry: velocity sensitivity model experimental validation. Meas. Sci. Technol. 14(6), 773 (2003)
16. Di Maio, D., Castellini, P., Martarelli, M., Rothberg, S., Allen, M.S., Zhu, W.D., Ewins, D.J.: Continuous Scanning Laser Vibrometry: A raison d'être and applications to vibration measurements. Mech. Syst. Signal Process. 156, 107573 (2021)
17. Lyu, L.F., Zhu, W.D.: Operational modal analysis of a rotating structure under ambient excitation using a tracking continuously scanning laser Doppler vibrometer system. Mech. Syst. Signal Process. 152, 107367 (2021)
18. Lyu, L.F., Zhu, W.D.: Full-field mode shape estimation of a rotating structure subject to random excitation using a tracking continuously scanning laser Doppler vibrometer via a two-dimensional scan scheme. Mech. Syst. Signal Process. 169, 108532 (2022)
19. Chen, D.M., Xu, Y., Zhu, W.: Damage identification of beams using a continuously scanning laser Doppler vibrometer system. J. Vib. Acoust. 138(5), 051011 (2016)
20. Chen, D.M., Xu, Y., Zhu, W.: Experimental investigation of notch-type damage identification with a curvature-based method by using a continuously scanning laser Doppler vibrometer system. J. Nondestruct. Eval. 36(2), 38 (2017)
21. Chen, D.M., Xu, Y., Zhu, W.: Identification of damage in plates using full-field measurement with a continuously scanning laser Doppler vibrometer system. J. Sound Vib. 422, 542–567 (2018)
22. Xu, Y.F., Chen, D.M., Zhu, W.D.: Operational modal analysis using lifted continuously scanning laser Doppler vibrometer measurements and its application to baseline-free structural damage identification. J. Vib. Control. 25(7), 1341–1364 (2019)
23. Weekes, B., Ewins, D.: Multi-frequency, 3D ODS measurement by continuous scan laser Doppler vibrometry. Mech. Syst. Signal Process. 58, 325–339 (2015)
24. Chen, D.M., Zhu, W.: Investigation of three-dimensional vibration measurement by three scanning laser Doppler vibrometers in a continuously and synchronously scanning mode. J. Sound Vib. 498, 115950 (2021)
25. Yuan, K., Zhu, W.: Estimation of modal parameters of a beam under random excitation using a novel 3D continuously scanning laser Doppler vibrometer system and an extended demodulation method. Mech. Syst. Signal Process. 155, 107606 (2021)
26. Yuan, K., Zhu, W.: In-plane operating deflection shape measurement of an aluminum plate using a three-dimensional continuously scanning laser Doppler vibrometer system. Exp. Mech. 62, 667–676 (2022)
27. Yuan, K., Zhu, W.: A novel general-purpose three-dimensional continuously scanning laser Doppler vibrometer system for full-field vibration measurement of a structure with a curved surface. J. Sound Vib. 540, 117274 (2022)
28. Stanbridge, A.B., Ewins, D.J.: Modal testing using a scanning laser Doppler vibrometer. Mech. Syst. Signal Process. 13(2), 255–270 (1999)
29. Stanbridge, A.B., Ewins, D.J., Khan, A.Z.: Modal testing using impact excitation and a scanning LDV. Shock. Vib. 7(2), 91–100 (2000)
30. Allen, M.S., Sracic, M.W.: A new method for processing impact excited continuous-scan laser Doppler vibrometer measurements. Mech. Syst. Signal Process. 24(3), 721–735 (2010)
31. Vanlanduit, S., Guillaume, P., Schoukens, J.: Broadband vibration measurements using a continuously scanning laser vibrometer. Meas. Sci. Technol. 13(10), 1574–1582 (2002)
32. Di Maio, D., Ewins, D.J.: Continuous Scan, a method for performing modal testing using meaningful measurement parameters; Part I. Mech. Syst. Signal Process. 25(8), 3027–3042 (2011)
33. Yang, S., Allen, M.S.: Output-only modal analysis using continuous-scan laser Doppler vibrometry and application to a 20 kW wind turbine. Mech. Syst. Signal Process. 31, 228–245 (2012)
34. Lyu, L.F., Zhu, W.D.: Operational modal analysis of a rotating structure subject to random excitation using a tracking continuously scanning laser Doppler vibrometer via an improved demodulation method. J. Vib. Acoust. 144(1), 011006 (2022)
35. Arun, K.S., Huang, T.S., Blostein, S.D.: Least-squares fitting of two 3-D point sets. IEEE Trans. Pattern Anal. Mach. Intell. 5, 698–700 (1987)
36. Meirovitch, L.: Analytical Methods in Vibrations. The Macmillan Co (1967)
37. Ewins, D.J.: Modal Testing: Theory, Practice, and Application, 2nd edn. Research Studies Press Ltd., Hertfordshire (2000)
Chapter 17
Operational Modal Analysis of a Rotating Structure Using Image-Based Tracking Continuously Scanning Laser Doppler Vibrometry via a Novel Edge Detection Method L. F. Lyu, G. D. Higgins, and W. D. Zhu
Abstract A novel edge detection method is developed for an image-based tracking continuously scanning laser Doppler vibrometer (CSLDV) system to track a rotating structure without attaching any mark or encoder to it. The edge detection method can determine real-time positions of points on edges of the rotating structure by processing images captured by a camera in the tracking CSLDV system. Once a point on an edge of a rotating structure is determined, the position of the rotating structure is determined. The tracking CSLDV system can generate a scan path on the rotating structure and control its laser spot to sweep along the scan path. A newly developed improved demodulation method is used to process measured data of response of the rotating structure under random excitation and estimate its modal parameters including damped natural frequencies and undamped mode shapes. Damped natural frequencies of the rotating structure are estimated from fast Fourier transforms of measured data. Undamped mode shapes are estimated by multiplying measured data by sinusoidal signals whose frequencies are estimated damped natural frequencies and applying low pass filters to measured data multiplied by the sinusoidal signals. Experimental investigation of the edge detection method is conducted by using the tracking CSLDV system to track and scan a rotating fan blade. Modal parameters of the rotating fan blade under random excitation with different constant speeds and its instantaneous undamped mode shapes with a non-constant speed are estimated. Keywords Image-based tracking continuously scanning laser doppler vibrometer system · Edge detection method · Random excitation · Improved demodulation method · Operational modal analysis
17.1 Introduction

A clean and efficient energy supply is one of the most severe challenges facing mankind in the twenty-first century. As one of the most efficient renewable energies, wind energy plays an important role in solving this problem. Wind turbines are used to convert wind energy to electrical energy, which is the main way to utilize wind energy. Horizontal-axis wind turbines produce most of the wind power in the world today [1]. Horizontal-axis wind turbines are designed with larger sizes to have high efficiencies, which makes maintenance and repair costs higher [2]. Costs of blades account for 15–20% of the cost of a horizontal-axis wind turbine, and damage to blades requires the highest repair costs and the longest repair times [3]. Therefore, blades of a horizontal-axis wind turbine in operational condition should be monitored to ensure that they are in good condition and to reduce high maintenance costs caused by sudden failures. Currently, vibration monitoring and structural health monitoring (SHM) of blades of a horizontal-axis wind turbine are mainly accomplished by visual inspection, which can be dangerous and expensive [4]. Therefore, an efficient and safe vibration monitoring and SHM method for blades of a horizontal-axis wind turbine in operational condition is urgently needed. Use of a laser Doppler vibrometer can be a suitable method for vibration monitoring and SHM of a structure since it can measure its surface velocity at a point [5, 6]. To measure vibration on a trajectory on a structure surface, continuously scanning laser Doppler vibrometer (CSLDV) systems were developed [7–10]. Modal analysis methods for CSLDV systems were developed to estimate modal parameters of structures [11–17]. The demodulation method can estimate operational deflection shapes of a structure under sinusoidal excitation [11–13, 18] or undamped mode shapes of a structure under random excitation [19]. The lifting method can estimate modal parameters of a structure under random excitation [16, 17]. Two-dimensional (2D) scan schemes for a CSLDV system were developed to scan the whole surface of a structure and
L. F. Lyu · G. D. Higgins · W. D. Zhu, Department of Mechanical Engineering, University of Maryland, Baltimore County, Baltimore, MD, USA. e-mail: [email protected]
estimate its full-field modal parameters [20–22]. To monitor vibration and estimate modal parameters of a rotating structure, tracking laser Doppler vibrometer systems were developed [23–33]. Point tracking methods that can track a single point on a rotating structure using tracking laser Doppler vibrometry were developed [23–26] and extended to continuously scanning methods that can scan the laser spot of a tracking CSLDV system along a prescribed trajectory on a rotating structure [27–30]. The above continuously scanning methods require mirrors in the tracking CSLDV system to be aligned with the rotation axis of the structure that is tracked by the tracking CSLDV system [31–33], and these methods use encoders that are attached to the rotating structure to determine its real-time rotation speed. An image-based single-point tracking laser Doppler vibrometer system was developed to track motion of a moving windscreen wiper [34]. Lyu and Zhu developed an image-based tracking CSLDV system to track and scan a rotating fan blade and estimate its modal parameters [35, 36]. These image-based laser vibrometer systems use marks that are attached to moving objects to determine their real-time positions, which makes them difficult to use when tracking large structures like blades of a horizontal-axis wind turbine. An edge detection method provides a better way to track a rotating structure without attaching any encoder or mark to it since this method can directly process images captured by a camera in a tracking CSLDV system. The edge detection method can convert a gray-scale image to a binary image by thresholding and locate boundaries in the binary image by a fundamental image segmentation technique [37]. The fundamental steps in any edge detection method are filtering, enhancement, detection, and localization [38]. Filtering removes ambient noise from the image and simultaneously attempts to maintain strong edges. Enhancement emphasizes regions of significant changes in pixel intensities, which aids gradient computation. Detection retains pixels that constitute true edges and removes false edges caused by noise. Finally, localization obtains the exact location of an edge within the image [38–40]. Many edge detection methods have been developed and used for motion tracking [41–46]. Mukherjee et al. [41] developed an edge detection method to track the path of a moving object. Laurence [42] compared tracking of motion of a rigid body through edge detection and least-squares fitting. Li et al. [43] proposed a rotation feature extraction method based on temporal differencing and image edge detection to estimate the rotation rate and radius of a moving object. Zhao et al. [44] proposed an object tracking method based on structural sparse representation, semi-supervised learning, and edge detection. Bai et al. [45] developed an edge detection method for structural displacement monitoring. Daga and Garibaldi [46] developed a genetic algorithm-based adaptive template matching method to estimate the instantaneous angular speed of a rotating structure based on edge detection. In this work, a novel edge detection method is developed for the tracking CSLDV system in Refs. [35, 36] to track and scan a rotating structure and estimate its modal parameters, including damped natural frequencies and undamped mode shapes. A rotating fan is used as a model of a horizontal-axis wind turbine that is tracked and scanned by the tracking CSLDV system.
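As a minimal, generic illustration of the four steps just described (filtering, enhancement, detection, localization), and not the LabVIEW implementation used in this chapter, a toy gradient-based edge detector might look as follows; the blur size and threshold are arbitrary placeholders.

```python
import numpy as np

def detect_edges(gray, blur=3, grad_thresh=0.2):
    """Toy edge detector illustrating the generic steps: smoothing (filtering),
    gradient computation (enhancement), thresholding (detection), and returning
    pixel coordinates (localization)."""
    img = gray.astype(float) / gray.max()
    # Filtering: simple box blur to suppress noise while keeping strong edges.
    k = blur
    pad = np.pad(img, k // 2, mode="edge")
    smoothed = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            smoothed += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    smoothed /= k * k
    # Enhancement: central-difference gradient magnitude.
    gy, gx = np.gradient(smoothed)
    mag = np.hypot(gx, gy)
    # Detection: keep pixels whose gradient exceeds a relative threshold.
    edges = mag > grad_thresh * mag.max()
    # Localization: (row, col) coordinates of the detected edge pixels.
    return np.argwhere(edges)
```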
Modal parameters of the rotating fan blade are estimated using the newly developed improved demodulation method described in Ref. [36]. Damped natural frequencies and undamped mode shapes of the rotating fan blade with different constant speeds and its instantaneous undamped mode shapes with a non-constant speed are estimated by processing data measured by the tracking CSLDV system using the improved demodulation method. The rest of this chapter is organized as follows. The edge detection method for the tracking CSLDV system, the method for processing its mirror signals, and the improved demodulation method for processing its measured data are presented in Sect. 17.2. The experimental setup is presented in Sect. 17.3. Edge detection and modal parameter estimation results are presented in Sect. 17.4. Conclusions from this work are presented in Sect. 17.5.
17.2 Methodologies

The edge detection method can track a rotating structure so that the tracking CSLDV system can scan on it. The rotating fan is used here as a model of a horizontal-axis wind turbine (Fig. 17.1a). Image processing modules in the LabVIEW software are used to process images captured by the camera in the tracking CSLDV system to identify edges of the rotating fan. A coordinate system O-XY is used in the captured image of the rotating fan in Fig. 17.1a, where the origin O is located at the upper left corner of the image. Edge detection is performed on a square region around the center of rotation that is extracted from the original image to facilitate image processing (Fig. 17.1a), and the geometric center of the square region is the hub center (Fig. 17.1b). A coordinate system $O'\text{-}X'Y'$ is used in the square region, where the origin $O'$ is located at the upper left corner of the square region (Fig. 17.1b). A distance condition is used with a point that is the center of a circular edge detection region in the square region, as shown in Fig. 17.1b, to detect an edge of a rotating fan blade that passes through the point. An annulus in Fig. 17.1b whose center is the hub center is used to determine a point on the edge of the rotating fan blade that meets the distance condition, and the point for edge detection is in the annulus. The inner and outer radii of the annulus are $r_i$ and $r_o$, respectively, and $r_i$ and $r_o$ should be larger than the radius of the fan hub so that the edge detection method can always track an edge point of the rotating fan blade instead of an edge point on the fan hub.

Fig. 17.1 Schematics of (a) an image of the rotating fan and (b) the square region for edge detection

If the position of the edge point of the rotating fan blade in the square region is $(X'_e, Y'_e)$, the position of the point for edge detection in the square region is $(X'_d, Y'_d)$, and the position of the hub center is $(X'_c, Y'_c)$, then

$r_i < \sqrt{(X'_e - X'_c)^2 + (Y'_e - Y'_c)^2} < r_o, \quad r_i < \sqrt{(X'_d - X'_c)^2 + (Y'_d - Y'_c)^2} < r_o$  (17.1)
Note that all spatial positions measured by the camera in the tracking CSLDV system are represented by pixel numbers, and $(X'_c, Y'_c)$ can be easily determined by the IMAQ Find Circles VI in LabVIEW since it is the hub center position and the boundary of the hub is a circle. The radius of the edge detection region in Fig. 17.1b is $r_d$, and the distance condition to detect any fan edge that passes the point for edge detection is

$d_{de} < r_d$  (17.2)

where

$d_{de} = \sqrt{(X'_e - X'_d)^2 + (Y'_e - Y'_d)^2}$  (17.3)
which is the distance between the edge point and the point for edge detection, as shown in Fig. 17.1b. Before the tracking CSLDV system starts to track and scan the rotating fan blade, the edge detection method calculates distances between the point for edge detection and all edge points of blades of the rotating fan in the annulus. Once an edge point of the rotating fan blade meets the distance condition in Eq. (17.2), the edge point is tracked by updating the point for edge detection with the edge point. Since the updated point is an edge point, points that satisfy the updated distance condition are still on the same edge, and the tracking CSLDV system can track the edge when the fan rotates. Note that tracking and scanning of the tracking CSLDV system are based on spatial positions in original images, and one needs to convert the position of the edge point that is selected to be tracked from the coordinate system in the square region to the coordinate system in the original image using the following relations:

$X_e = X'_e + X_c - X'_c, \quad Y_e = Y'_e + Y_c - Y'_c$  (17.4)

where $X_e$ and $Y_e$ are positions of the edge point in the original image along the X and Y directions, respectively, and $X_c$ and $Y_c$ are positions of the hub center in the original image along the X and Y directions, respectively, which are determined by the IMAQ Find Circles VI in LabVIEW.
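A minimal sketch of the annulus and distance conditions in Eqs. (17.1)–(17.3) and the coordinate conversion of Eq. (17.4) is given below; coordinates are in pixels, and the function and variable names are illustrative assumptions rather than the authors' LabVIEW implementation.

```python
import math

def in_annulus(x, y, xc, yc, r_i, r_o):
    """Annulus condition of Eq. (17.1) for a point (x, y) in the square region,
    with hub center (xc, yc) and radii r_i < r_o (pixels)."""
    r = math.hypot(x - xc, y - yc)
    return r_i < r < r_o

def meets_distance_condition(xe, ye, xd, yd, r_d):
    """Distance condition of Eqs. (17.2)-(17.3): the edge point (xe, ye) lies
    within radius r_d of the point for edge detection (xd, yd)."""
    return math.hypot(xe - xd, ye - yd) < r_d

def to_original_image(xe_sq, ye_sq, hub_original, hub_square):
    """Eq. (17.4): convert an edge-point position from the square-region
    coordinate system to the original-image coordinate system."""
    Xc, Yc = hub_original        # hub center in the original image
    Xc_sq, Yc_sq = hub_square    # hub center in the square region
    return xe_sq + Xc - Xc_sq, ye_sq + Yc - Yc_sq
```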
Once the point for edge detection is updated with the edge point, measured edge point positions will experience jitter inside the distance condition because there are many edge points that meet the distance condition (Fig. 17.2a). This causes edge point positions to have variation in the radial distance to the hub center (Fig. 17.2b). Small dots on the smooth dashed circle in Fig. 17.2b are simulated ideal edge point positions when the fan rotates with a constant speed, and diamonds near the smooth dashed circle in Fig. 17.2b are simulated edge point positions measured by the tracking CSLDV system with the jitter effect when the fan rotates with a constant speed. To smoothly scan the rotating fan blade, the jitter effect needs to be removed. Since blades of the rotating fan are along radial directions of the fan hub, angular positions of edge points in the edge detection region in Fig. 17.2a with respect to the hub center can be considered the same if $r_i$ and $r_o$ are close to each other. Therefore, angular positions of the edge points are used to reconstruct edge point positions on a smooth circle. Let a reconstructed edge point position corresponding to the measured edge point position $(X'_e, Y'_e)$ in the square region be $(X'_r, Y'_r)$; then, one has

$X'_r = X'_c + c\cos\alpha, \quad Y'_r = Y'_c + c\sin\alpha$  (17.5)

where $c$ is a positive constant coefficient and

$\alpha = \arctan\dfrac{Y'_e - Y'_c}{X'_e - X'_c}$  (17.6)
Simulated reconstructed edge point positions on a smooth circle are shown in Fig. 17.2c with squares, and their corresponding simulated edge point positions that are measured by the tracking CSLDV system with the jitter effect are shown in Fig. 17.2c with diamonds. Reconstructed edge point positions are used to generate scan paths on the rotating fan blade using a one-dimensional (1D) scan scheme described below. Let $(X_r, Y_r)$ be the reconstructed edge point position in the original image that is obtained by Eq. (17.4), and let $\theta_{rs}$ be the angle between the line passing through the fan hub center and $(X_r, Y_r)$ and the line in the middle of the rotating fan blade (Fig. 17.3); one can shift the reconstructed edge point position to a position on the line in the middle of the rotating fan blade by

$X_s = X_c + (X_r - X_c)\cos\theta_{rs} + (Y_r - Y_c)\sin\theta_{rs}, \quad Y_s = Y_c - (X_r - X_c)\sin\theta_{rs} + (Y_r - Y_c)\cos\theta_{rs}$  (17.7)
End point positions of the scan path in the middle of the rotating fan blade are

$X_{up} = X_c + k_{up}\left(X_s - X_c\right), \quad Y_{up} = Y_c + k_{up}\left(Y_s - Y_c\right)$  (17.8)

$X_{down} = X_c + k_{down}\left(X_s - X_c\right), \quad Y_{down} = Y_c + k_{down}\left(Y_s - Y_c\right)$  (17.9)

Fig. 17.2 (a) Schematic of edge points that satisfy the distance condition, (b) simulated ideal edge point positions and edge point positions that are measured by the tracking CSLDV system with the jitter effect, and (c) simulated edge point positions on a smooth circle reconstructed from simulated edge point positions that are measured by the tracking CSLDV system with the jitter effect in (b)

Fig. 17.3 Schematic for determining end points of the scan path based on the reconstructed edge point
where (Xup , Yup ) is the position of the end point of the scan path near the tip of the rotating fan blade, (Xdown , Ydown ) is the position of the end point of the scan path near the hub of the rotating fan blade, and kup and kdown are two constant coefficients. Once end point positions of the scan path are determined, the tracking CSLDV system can sweep its laser spot on the scan path. Mirror signals and measured response from the tracking CSLDV system can be processed by the mirror signal processing method and the improved demodulation method in Ref. [36] to estimate modal parameters, including damped natural frequencies and undamped mode shapes, of the rotating fan blade. Mirror signals can also be processed to determine angular positions of the laser spot, and rotation speeds of the fan blade can be estimated by calculating the derivative of angular positions with respect to time.
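A minimal sketch of the jitter removal in Eqs. (17.5)–(17.6) and the scan-path end-point construction in Eqs. (17.7)–(17.9) is given below; function names and the sign convention of the image-plane rotation are assumptions made for illustration, not the authors' implementation.

```python
import math

def reconstruct_edge_point(xe, ye, xc, yc, c):
    """Eqs. (17.5)-(17.6): project a jittery edge point (xe, ye) onto a smooth
    circle of radius c around the hub center (xc, yc) in the square region,
    keeping only its angular position (atan2 used for quadrant robustness)."""
    alpha = math.atan2(ye - yc, xe - xc)
    return xc + c * math.cos(alpha), yc + c * math.sin(alpha)

def scan_path_end_points(xr, yr, xc, yc, theta_rs, k_up, k_down):
    """Eqs. (17.7)-(17.9): rotate the reconstructed edge point (xr, yr) in the
    original image by theta_rs about the hub center (xc, yc) onto the line in
    the middle of the blade, then scale the result by k_up and k_down to obtain
    the two scan-path end points."""
    xs = xc + (xr - xc) * math.cos(theta_rs) + (yr - yc) * math.sin(theta_rs)
    ys = yc - (xr - xc) * math.sin(theta_rs) + (yr - yc) * math.cos(theta_rs)
    up = (xc + k_up * (xs - xc), yc + k_up * (ys - yc))
    down = (xc + k_down * (xs - xc), yc + k_down * (ys - yc))
    return up, down
```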
17.3 Experimental Setup

The experimental setup of tracking and scanning a rotating fan blade using the tracking CSLDV system is shown in Fig. 17.4a. The rotating fan was mounted on a stationary frame, and its hub had a height of 122.3 cm. The tracking CSLDV system was mounted on a tripod with a height of 141.1 cm. The distance between the tracking CSLDV system and the rotating fan was 176.4 cm. A small fan placed 94.6 cm away from the rotating fan was used to excite, with its air flow, the blade of the rotating fan that was scanned. The tracking CSLDV system consists of a Polytec OFV-533 single-point laser vibrometer, a Cambridge 6240H scanner, and a Basler acA2040-90um camera whose maximum frame rate is 50 frames per second (Fig. 17.4b). The camera captured images of the fan at a constant frame rate as it rotated. Every time an image of the rotating fan was captured, the image was processed by the edge detection method in Sect. 17.2 to determine the position of a point on a fan blade edge. A scan path was generated on the fan blade once the position of the fan edge point was determined, and the tracking CSLDV system swept its laser spot along the scan path (Fig. 17.4c). A voltage controller was connected to the rotating fan to let it rotate with different constant speeds. Note that the fan can also rotate with a nonconstant speed if one rotates it to an unbalanced position and releases it, since blades of the fan have different weights due to the additional mass of the reflective tape attached to the fan blade that is tracked and scanned by the tracking CSLDV system. The tracking CSLDV system can track the rotating fan blade with a speed between 0 and 40 rpm, while a large horizontal-axis wind turbine has a rotation speed between 5 and 15 rpm [47].
17.4 Experimental Results

The fan was turned on and rotated with three different constant speeds R = 18.13, 27.72, and 38.45 rpm, and it was also turned off and released from an unbalanced position to rotate with a non-constant speed. The camera in the tracking CSLDV system captured images of the rotating fan with a pixel resolution of 2048 × 2048 and a frame rate of 50 frames per second. Images captured by the camera when the fan rotated are shown in Fig. 17.5a, b, and c.
Fig. 17.4 (a) Picture of the experimental setup of tracking and scanning the rotating fan blade, (b) a picture of the tracking CSLDV system, and (c) a schematic of tracking and scanning the rotating fan blade
Fig. 17.5 (a), (b), and (c) Images of the rotating fan captured by the camera at three different positions, (d), (e), and (f) processed images that show identified edges of the rotating fan corresponding to (a), (b), and (c), respectively, and (g) an image of the square region for determining the position of an edge point to track
Processed images that correspond to the images in Fig. 17.5a, b, and c are shown in Fig. 17.5d, e, and f, respectively. Edges of the rotating fan can be easily identified in the processed images (Fig. 17.5d, e, and f). A square region with a pixel resolution of 350 × 350 was extracted from each processed image to determine the position of an edge point to track (Fig. 17.5g). The edge detection region is shown as a circle in Fig. 17.5g, whose center coordinates are (25, 190). Note that the radius of the edge detection region can be adjusted in experiments to obtain good edge detection results. When a rotating fan blade passed through the edge detection region, the tracking CSLDV system started to track it and scan a straight scan path on it. If a blade other than the one to be scanned passed through the edge detection region, the scan path can be shifted to the blade that should be scanned. Estimated damped natural frequencies of the rotating fan blade at different constant speeds are shown in Table 17.1. Damped natural frequencies of the rotating fan blade increase with its rotation speed (Table 17.1), which is caused by the centrifugal stiffening effect due to its rotation. The relation between the rotation speed of the fan blade and its damped natural frequencies is derived in Ref. [36], which theoretically shows that the damped natural frequencies increase with the rotation speed. The first three undamped mode shapes of the rotating fan blade are shown in Fig. 17.6a, b, and c for the different constant speeds and a non-constant speed.
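As an illustration of the distance condition in Fig. 17.2a, the sketch below checks whether a blade edge has entered a circular edge detection region within a processed search window; the use of OpenCV's Canny detector and the radius value are assumptions made for illustration, not the edge detection method of Sect. 17.2.

```python
import numpy as np
import cv2

def edge_point_in_region(window, center=(25, 190), radius=15):
    """Return the mean position of the edge pixels that fall inside the
    circular edge detection region of a processed search window, or None if
    no blade edge has entered the region yet. `center` uses (column, row)
    coordinates, as in Fig. 17.5g; `radius` is a tuning parameter."""
    edges = cv2.Canny(window, 50, 150)            # illustrative edge detector
    ys, xs = np.nonzero(edges)                    # coordinates of edge pixels
    inside = np.hypot(xs - center[0], ys - center[1]) <= radius
    if not inside.any():
        return None
    return float(xs[inside].mean()), float(ys[inside].mean())
```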
Table 17.1 Estimated damped natural frequencies of the rotating fan blade with different constant speeds

| Rotation speed | First damped natural frequency (Hz) | Second damped natural frequency (Hz) | Third damped natural frequency (Hz) |
|---|---|---|---|
| 18.13 rpm | 6.20 | 28.67 | 56.31 |
| 27.72 rpm | 6.54 | 28.83 | 57.17 |
| 38.45 rpm | 6.90 | 29.07 | 57.60 |
Fig. 17.6 Estimated (a) first, (b) second and (c) third undamped mode shapes of the rotating fan blade with different constant speeds and a non-constant speed
Undamped mode shapes of the rotating fan blade are similar to those of a cantilever beam, since the rotating fan blade can be considered as a rotating cantilever beam. Estimated undamped mode shapes differ somewhat from each other because they are estimated at different rotation speeds, and the scan paths generated when tracking the blade at different speeds can be slightly different. Note that a rotating horizontal-axis wind turbine blade can have torsional and edgewise mode shapes [48], which cannot be estimated using the 1D scan scheme. A two-dimensional scan scheme is needed for the tracking CSLDV system to estimate full-field mode shapes of a rotating structure, which will be studied in the future.
17.5 Conclusions

A novel edge detection method for an image-based tracking CSLDV system is developed to track and scan a rotating structure using a 1D scan scheme. The edge detection method can detect points on an edge of the rotating structure and track the edge without attaching any encoder or marker to the structure. A 1D scan scheme is used to generate straight scan paths on the rotating structure based on positions of edge points detected by the edge detection method. Mirror signals of the tracking CSLDV system are processed to extract undamped mode shapes from the measured response, and an improved demodulation method is used to process the measured response of the tracking CSLDV system. An experimental investigation of the image-based tracking CSLDV system is conducted, and a rotating fan blade is tracked and scanned by it. The first three damped natural frequencies and undamped mode shapes of the rotating fan blade at different constant speeds, as well as its first three instantaneous undamped mode shapes at a non-constant speed, are successfully estimated. The image-based tracking CSLDV system can be further developed to track and scan large horizontal-axis wind turbine blades and monitor their vibration.

Acknowledgments This work was supported by the National Science Foundation through Grant No. CMMI-1763024.
References 1. Kishinami, K., Taniguchi, H., Suzuki, J., Ibano, H., Kazunou, T., Turuhami, M.: Theoretical and experimental study on the aerodynamic characteristics of a horizontal axis wind turbine. Energy. 30(11–12), 2089–2100 (2005) 2. Ciang, C.C., Lee, J.R., Bang, H.J.: Structural health monitoring for a wind turbine system: a review of damage detection methods. Meas. Sci. Technol. 19(12), 122,001 (2008)
3. Flemming, M.L., Troels, S.: New lightning qualification test procedure for large wind turbine blades. In: International Conference on Lightning and Static Electricity, Blackpool, UK, vol. 36, pp. 1–10 (2003) 4. Jamieson, P., Hassan, G.: Innovation in Wind Turbine Design, pp. 7–13. Wiley, Chichester (2011) 5. Bell, J.R., Rothberg, S.J.: Laser vibrometers and contacting transducers, target rotation and six degree-of-freedom vibration: what do we really measure? J. Sound Vib. 237, 245–261 (2000) 6. Rothberg, S., Allen, M., Castellini, P.: An international review of laser Doppler vibrometry: Making light work of vibration measurement. Opt. Lasers Eng. 99(1), 11–22 (2017) 7. Sriram, P., Hanagud, S., Craig, J., Komerath, N.M.: Scanning laser Doppler technique for velocity profile sensing on a moving surface. Appl. Opt. 29(16), 2409–2417 (1990) 8. Sriram, P., Hanagud, S., Craig, J.: Mode shape measurement using a scanning laser Doppler, vibrometer. Int. J. Anal. Exp. Modal Anal. 7(3), 169–178 (1992) 9. Allen, M.S., Sracic, M.W.: A new method for processing impact excited continuous-scan laser Doppler vibrometer measurements. Mech. Syst. Signal Process. 24(3), 721–735 (2010) 10. Chen, D.M., Xu, Y.F., Zhu, W.D.: Damage identification of beams using a continuously scanning laser Doppler vibrometer system. J. Vib. Acoust. 138(5), 05011 (2016) 11. Stanbridge, A., Ewins, D.: Modal testing using a scanning laser Doppler vibrometer. Mech. Syst. Signal Process. 13(2), 255–270 (1999) 12. Di Maio, D., Ewins, D.: Continuous scan, a method for performing modal testing using meaningful measurement parameters; Part I. Mech. Syst. Signal Process. 25(8), 3027–3042 (2011) 13. Chen, D.M., Xu, Y.F., Zhu, W.D.: Experimental investigation of notch-type damage identification with a curvature-based method by using a continuously scanning laser Doppler vibrometer system. J. Nondestruct. Eval. 36(2), 38 (2017) 14. Xu, Y.F., Chen, D.M., Zhu, W.D.: Damage identification of beam structures using free response shapes obtained by use of a continuously scanning laser Doppler vibrometer system. Mech. Syst. Signal Process. 92, 226–247 (2017) 15. Xu, Y.F., Chen, D.M., Zhu, W.D.: Modal parameter estimation using free response measured by a continuously scanning laser Doppler vibrometer system with application to structural damage identification. J. Sound Vib. 485, 115,536 (2020) 16. Yang, S., Allen, M.S.: Lifting approach to simplify output-only continuous-scan laser vibrometry. Mech. Syst. Signal Process. 45(2), 267–282 (2014) 17. Xu, Y.F., Chen, D.M., Zhu, W.D.: Operational modal analysis using lifted continuously scanning laser Doppler vibrometer measurements and its application to baseline-free structural damage identification. J. Vib. Control. 25(7), 1341–1364 (2019) 18. Stanbridge, A., Ewins, D., Khan, A.: Modal testing using impact excitation and a scanning LDV. Shock. Vib. 7(2), 91–100 (2000) 19. Yuan, K., Zhu, W.D.: Estimation of modal parameters of a beam under random excitation using a novel 3D continuously scanning laser Doppler vibrometer system and an extended demodulation method. Mech. Syst. Signal Process. 155, 107,606 (2021) 20. Chen, D.M., Xu, Y.F., Zhu, W.D.: Identification of damage in plates using full-field measurement with a continuously scanning laser Doppler vibrometer system. J. Sound Vib. 422, 542–567 (2018) 21. Chen, D., Xu, Y.F., Zhu, W.D.: Non-model-based identification of delamination in laminated composite plates using a continuously scanning laser Doppler vibrometer system. J. Vib. Acoust. 
140(4), 041001 (2018) 22. Chen, D.M., Xu, Y.F., Zhu, W.D.: A comprehensive study on detection of hidden delamination damage in a composite plate using curvatures of operating deflection shapes. J. Nondestruct. Eval. 38, 54 (2019) 23. Stanbridge, A.B., Martarelli, M., Ewins, D.J.: Rotating disc vibration analysis with a circular-scanning LDV. In: Proceedings of SPIE, the International Society for Optical Engineering, vol. 4359, pp. 464–469 (2001) 24. Bucher, I., Schmiechen, P., Robb, D.A., Ewins, D.J.: Laser-based measurement system for measuring the vibration on rotating discs. In: First International Conference on Vibration Measurements by Laser Techniques: Advances and Applications, vol. 2358, pp. 398–408 (1994) 25. Castellini, P., Giovanucci, F., Nava-Mambretti, G., Scalise, L., Tomasini, E.P.: Vibration analysis of tyre treads: a in-plane laser vibrometry approach. In: Society for Experimental Mechanics, Inc, 16th International Modal Analysis Conference, vol. 2, pp. 1732–1738 (1998) 26. Fioretti, A., Di Maio, D., Ewins, D.J., Castellini, P., Tomasini, E.P.: Deflection shape reconstructions of a rotating five-blade helicopter rotor from TLDV measurements. AIP Conf. Proc. 1253(1), 17–28 (2010) 27. Di Maio, D., Ewins, D.J.: Applications of continuous tracking SLDV measurement methods to axially symmetric rotating structures using different excitation methods. Mech. Syst. Signal Process. 24(8), 3013–3036 (2010) 28. Gasparoni, A., Allen, M.S., Yang, S., Sracic, M.W., Castellini, P., Tomasini, E.P.: Experimental modal analysis on a rotating fan using trackingCSLDV. AIP Conf. Proc. 1253(1), 3–16 (2010) 29. Martarelli, M., Castellini, P., Santolini, C., Tomasini, E.P.: Laser Doppler vibrometry on rotating structures in coast-down: resonance frequencies and operational deflection shape characterization. Meas. Sci. Technol. 22(11), 115,106 (2011) 30. Khalil, H., Kim, D., Nam, J., Park, K.: Operational deflection shape of rotating object using tracking laser Doppler vibrometer. In: 2015 IEEE International Conference on Electronics, Circuits, and Systems, Cairo, pp. 693–696 (2015) 31. Halkon, B.J., Frizzel, S.R., Rothberg, S.J.: Vibration measurements using continuous scanning laser vibrometry: velocity sensitivity model experimental validation. Meas. Sci. Technol. 14(6), 773 (2003) 32. Halkon, B.J., Rothberg, S.J.: Vibration measurements using continuous scanning laser Doppler vibrometry: theoretical velocity sensitivity analysis with applications. Meas. Sci. Technol. 14(3), 382 (2003) 33. Halkon, B.J., Rothberg, S.J.: Vibration measurements using continuous scanning laser vibrometry: advanced aspects in rotor applications. Mech. Syst. Signal Process. 20(6), 1286–1299 (2006) 34. Castellini, P., Tomasini, E.P.: Image-based tracking laser Doppler vibrometer. Rev. Sci. Instrum. 75(1), 222–232 (2004) 35. Lyu, L.F., Zhu, W.D.: Operational modal analysis of a rotating structure under ambient excitation using a tracking continuously scanning laser Doppler vibrometer system. Mech. Syst. Signal Process. 152, 107,367 (2021) 36. Lyu, L.F., Zhu, W.D.: Operational modal analysis of a rotating structure subject to random excitation using a tracking continuously scanning laser Doppler vibrometer via an improved demodulation method. J. Vib. Acoust. 144(1), 011006 (2021) 37. Kaur, J., Agrawal, S., Vig, R.: A comparative analysis of thresholding and edge detection segmentation techniques. Int. J. Comput. Appl. 39(15), 29–34 (2012)
38. Jain, R., Kasturi, R., Schunck, B.G.: Machine Vision. McGraw-hill, New York (1995) 39. Jayakumar, R., Suresh, B.: A review on edge detection methods and techniques. Int J Adv Res Comput Commun Eng. 3(4), 6369–6371 (2014) 40. Savant, S.: A review on edge detection techniques for image segmentation. Int J Comput Sci Inf Technol. 5(4), 5898–5900 (2014) 41. Mukherjee, M., Potdar, Y.U., Potdar, A.U.: Object tracking using edge detection. In: Proceedings of the International Conference and Workshop on Emerging Trends in Technology, pp. 686–689 (2010) 42. Laurence, S.J.: On tracking the motion of rigid bodies through edge detection and least-squares fitting. Exp. Fluids. 52(2), 387–401 (2012) 43. Li, Z., Zhao, G., Li, S., Sun, H., Tao, R., Huang, X., Guo, Y.J.: Rotation feature extraction for moving targets based on temporal differencing and image edge detection. IEEE Geosci. Remote Sens. Lett. 13(10), 1512–1516 (2016) 44. Zhao, L., Zhao, Q., Liu, H., Lv, P., Gu, D.: Structural sparse representation-based semi-supervised learning and edge detection proposal for visual tracking. Vis. Comput. 33(9), 1169–1184 (2017) 45. Bai, X., Yang, M., Ajmera, B.: An advanced edge-detection method for noncontact structural displacement monitoring. Sensors. 20(17), 4941 (2020) 46. Daga, A.P., Garibaldi, L.: GA-adaptive template matching for offline shape motion tracking based on edge detection: IAS estimation from the SURVISHNO 2019 challenge video for machine diagnostics purposes. Algorithms. 13(2), 33 (2020) 47. Jonkman, J., Butterfield, S., Musial, W., Scott, G.: Definition of a 5-MW reference wind turbine for offshore system development, (No. NREL/TP-500-38,060), National Renewable Energy Lab. (NREL), Golden, CO. (2009). 48. Acar, G.D., Feeny, B.F.: Bend-bend-twist vibrations of a wind turbine blade. Wind Energy. 21(1), 15–28 (2018)
Chapter 18
Detection of Missing Rail Fasteners Using Train-Induced Ultrasonic Guided Waves: A Numerical Study Chi Yang, Korkut Kaynardag, and Salvatore Salamone
Abstract This study presents the results of a numerical study that aims to identify a missing fastener along the rail using train-induced guided waves. This study is part of an ongoing project that intends to develop a non-contact damage detection system based on the measurements obtained from a laser Doppler vibrometer placed on a moving platform. To achieve this goal, (i) modal analysis was conducted to determine the frequency range of interest in which the dynamic response of the rail is sensitive to the missing fastener (i.e., the frequency range in which the waves localize at the rail foot), and (ii) a three-dimensional finite element method (FEM) simulation was conducted to imitate the movement of the wheel excitation and the measurements. The goal of the FEM model is to examine the effect of the missing fastener on the dynamic response of the rail in the frequency range of interest. The guided waves recorded from a rail through an accelerometer during the passage of an operating train were used as the excitation signal in the FEM simulation. A damage function was introduced to examine the change in the wave energy caused by the missing fastener. Consequently, the location of the missing fastener was identified.

Keywords Rail inspection · Missing fastener identification · Structural health monitoring · Finite element method · Wave propagation
18.1 Introduction

The rail network is a vital infrastructure that requires regular inspection and maintenance to ensure safety and reliability. In particular, the fasteners that constrain the railway track are critical components that must be regularly checked for damage or deterioration. A new method for inspecting rail fasteners is presented in this study, which uses train-induced guided waves to identify missing fasteners. The proposed system is based on a moving laser Doppler vibrometer (LDV) measurement, which has the potential to be a non-contact and automated inspection technique [1]. The LDV measurement is simulated using a 3D finite element model of a rail with missing fasteners. In this study, a numerical simulation was conducted to investigate the feasibility of using train-induced guided waves for the identification of missing rail fasteners. The frequency range of interest was determined through modal analysis and verified using experimental data from a field test in a preliminary study. The 3D FEM simulation was conducted to mimic the movement of the wheel excitation and the damage detection system. Consequently, a missing-fastener detection algorithm was introduced and validated using the simulated LDV signals. The results showed that train-induced guided waves can be used for rail inspection and that the frequency range of 4–6 kHz is most sensitive to the missing fastener. This study provides a foundation for further development of this rail inspection technique.
18.2 FEM Time Domain Simulation The simulated moving measurements provided insight into the effects of defects on the wave propagation in the rail [2]. A three-dimensional rail model was constructed with 136 lb. AREA rail using Abaqus. The rail model consists of 20 spans.
C. Yang () · K. Kaynardag · S. Salamone Department of Civil Architectural and Environmental Engineering, The University of Texas at Austin, Austin, TX, USA e-mail: [email protected]; [email protected]; [email protected] © The Society for Experimental Mechanics, Inc. 2024 J. Baqersad, D. Di Maio (eds.), Computer Vision & Laser Vibrometry, Volume 6, Conference Proceedings of the Society for Experimental Mechanics Series, https://doi.org/10.1007/978-3-031-34910-2_18
The fasteners were modeled using spring-damper pairs [3]. The missing fastener was represented by removing the constraints provided by one of the fasteners, as shown in Fig. 18.1. The length of the fastener in the longitudinal direction of the rail is 0.2 m. The Young's modulus, density, and Poisson's ratio of the rail were taken as 210 GPa, 7800 kg/m3, and 0.3, respectively. The mesh size was chosen to be 0.005 m, and C3D8 solid elements were used. The configuration of the simulated test replicates the proposed prototype that uses a moving laser Doppler vibrometer in the damage detection system. The distance from the excitation (i.e., the first wheel) to the first LDV is 2.7 m, while the distance between the two LDVs is 0.4 m. A sample of a train-induced guided wave was used as input to the FEM model. To replicate the moving measurement, the excitation and the measurement points were shifted forward in the positive z direction by 0.09 m for every time instant. From time instant 1 to time instant 17, the measurements correspond to the time instants when both LDV1 and LDV2 were located before the span affected by the missing fastener, as shown in Fig. 18.2a. The locations of the excitation and the LDVs are drawn in black for the first time instant and in gray for the 17th time instant. From time instant 18 to 33, the measurements correspond to the time instants when either one or both LDVs were located in the span affected by the missing fastener, as shown in Fig. 18.2b. From time instant 34 to 60, the measurements correspond to the time instants when both LDVs were located after the span affected by the missing fastener, as shown in Fig. 18.2c. Consequently, a total of 60 simulations at different time instants were carried out, 15 of which corresponded to time instants with one of the LDVs located in the span with the missing fastener.
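For illustration, the bookkeeping of the simulated moving measurement described above can be sketched as follows; the span length is inferred from the statement in Sect. 18.3 that six 0.09-m shifts advance the system by one rail span, and the variable names are illustrative only.

```python
# Positions of the excitation and the two simulated LDVs at each time instant.
STEP = 0.09               # forward shift per time instant (m)
SPAN = 6 * STEP           # rail span length implied by the six-instant moving window (m)
D_WHEEL_TO_LDV1 = 2.7     # distance from the first wheel to LDV1 (m)
D_LDV1_TO_LDV2 = 0.4      # distance between the two LDVs (m)

positions = []
for i in range(1, 61):                      # 60 simulated time instants
    z_wheel = (i - 1) * STEP                # position of the wheel excitation
    z_ldv1 = z_wheel + D_WHEEL_TO_LDV1
    z_ldv2 = z_ldv1 + D_LDV1_TO_LDV2
    # span indices under each LDV (used to decide whether a measurement point
    # lies before, within, or after the span affected by the missing fastener)
    positions.append((i, z_ldv1, z_ldv2, int(z_ldv1 // SPAN), int(z_ldv2 // SPAN)))
```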
18.3 Missing Fasteners Detection

In this section, the results are presented and evaluated using the data obtained from the FEM model. First, the time domain data of the rail vibration were obtained from the FEM model.
Fig. 18.1 Schematic view of the 3D FEM model
Fig. 18.2 Simulated discretized motion of the damage detection system at the (a) first to 17th time instant, (b) 18th to 33rd time instant, and (c) 34th to 60th time instant
Fig. 18.3 The damage index for the identification of missing fastener using simulated LDV data
The data obtained from the two LDVs at every time instant were analyzed using frequency domain decomposition (FDD) to obtain the singular values [4]; the frequency range from 4.0 kHz to 6.0 kHz was identified in the preliminary study to be sensitive to the constraint provided by the fasteners. Second, the energy in the frequency range of interest (4.0–6.0 kHz) obtained from the first and second LDVs was calculated and denoted as E1(ti) and E2(ti), respectively, where i denotes the time instant, i = 1, 2, 3, . . ., 60. Third, the energies E1(ti) and E2(ti) were smoothed with a moving average filter. The moving window length (in terms of the number of time instants) is 6, which is the number of time instants required for the measurement to move forward one rail span. Next, at every time instant, the smoothed band-passed energy (from 4.5 kHz to 6 kHz) was divided by the smoothed total energy from 0 to 30 kHz to obtain the energy ratio. The energy ratios for the first and second LDVs were denoted as R1(t) and R2(t), respectively. The damage function DF(t) is then formulated as the product of R1(t) and R2(t). Next, a distribution is created using the points of DF(t) corresponding to the rail segment with intact fasteners; the mean and covariance of this distribution are denoted as μ and Σ, respectively, and a distance measure DM(t) of DF(t) from this distribution is computed. DM(t) is normalized with respect to its maximum value to obtain the damage index DI(t). Then, to identify the rail segment that is affected by the missing fastener, a threshold of two standard deviations above the sample mean is placed on DI(t). Time instants with DI(t) above the threshold are considered to be the time instants when the inspection system passes through the rail span with the missing fastener. Figure 18.3 shows DI(t) for the LDV data set used herein along with the threshold. It is clear that the missing fastener detection algorithm can identify the missing fastener successfully.
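A minimal sketch of these steps is given below. The quantity DM(t) is interpreted here as a Mahalanobis-type distance of DF(t) from the intact-segment distribution, which is one plausible reading since the text defines μ and Σ but not DM(t) explicitly; the function and variable names are illustrative, not the authors' implementation.

```python
import numpy as np

def damage_index(E1_band, E1_total, E2_band, E2_total, intact_idx,
                 window=6, n_sigma=2.0):
    """Sketch of the missing-fastener detection steps: moving-average smoothing,
    energy ratios R1 and R2, damage function DF, a distance DM from the
    intact-segment distribution, normalization to DI, and a two-standard-
    deviation threshold."""
    smooth = lambda x: np.convolve(x, np.ones(window) / window, mode="same")
    R1 = smooth(E1_band) / smooth(E1_total)      # energy ratio, first LDV
    R2 = smooth(E2_band) / smooth(E2_total)      # energy ratio, second LDV
    DF = R1 * R2                                 # damage function DF(t)
    mu, sigma = DF[intact_idx].mean(), DF[intact_idx].std()
    DM = np.abs(DF - mu) / sigma                 # distance from intact distribution
    DI = DM / DM.max()                           # damage index DI(t)
    threshold = DI[intact_idx].mean() + n_sigma * DI[intact_idx].std()
    flagged = np.where(DI > threshold)[0]        # time instants above the threshold
    return DI, threshold, flagged
```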
18.4 Conclusion

This study presents the results of a numerical study that aims to identify a missing fastener along the rail using train-induced guided waves. The time domain simulation of the moving measurements using LDVs was carried out using FEM. To identify the location of the missing fastener through the numerical study, (i) the features that were sensitive to the constraints provided by the fasteners were determined, (ii) a three-dimensional finite element method (FEM) simulation was conducted to imitate the movement of the wheel excitation and the measurements, and (iii) the missing fastener detection algorithm was introduced. The goal of the FEM model is to examine the effect of the missing fastener on the dynamic response of the rail in the frequency range of interest. The guided waves recorded from a rail through an accelerometer during the passage of an operating train were used as the excitation signal in the FEM simulation. A damage function was introduced to examine the change in the wave energy caused by the missing fastener. Consequently, the location of the missing fastener was identified.
Acknowledgments This work was supported under the grant number: 693JJ619C000005 awarded by the Federal Railroad Administration of the US Department of Transportation. Any opinions, findings, conclusions, or recommendations expressed are those of the authors and do not necessarily reflect the views of the FRA.
References 1. Kaynardag, K., Battaglia, G., Yang, C., Salamone, S.: Experimental investigation of the modal response of a rail span during and after wheel passage. Transp. Res. Rec. J. Transp. Res. Board. 2674(12), 15–24 (2020). https://doi.org/10.1177/0361198120966931 2. Kaynardag, K., Yang, C., Salamone, S.: Numerical simulations to examine the interaction of train-induced guided waves with transverse cracks. Transp. Res. Rec. J. Transp. Res. Board, 036119812210945 (2022). https://doi.org/10.1177/03611981221094576 3. Yang, C., Asce, S.M., Kaynardag, K., Asce, S.M., Salamone, S., Asce, M.: Evaluation of fastening Modeling approaches for dynamic assessment of rail based on finite-element method. J. Eng. Mech. 148(2016), 1–13 (2022). https://doi.org/10.1061/(ASCE)EM.1943-7889.0002137 4. Brincker, R., Zhang, L., Andersen, P.: Modal identification of output-only systems using frequency domain decomposition. Smart Mater. Struct. 10(3), 441–445 (2001). https://doi.org/10.1088/0964-1726/10/3/303
Chapter 19
Dynamic Mode Decomposition for Resonant Frequency Identification of Oscillating Structures Nicholas A. Valente, Celso T. do Cabo, Zhu Mao, and Christopher Niezrecki
Abstract Feature-tracking is widely used in the vibration community for its noninvasive way of extracting subtle motion. High-frequency optical features, such as edges, are prime candidates for motion estimation; however, their rapid motion can pose problems for computer vision techniques. Most vibration seen in video is imperceptible to the naked eye, which can make sub-pixel displacement extraction more complex. Phase-based motion magnification (PMM) is a computer vision technique that can amplify motion seen in video. A boost in the signal-to-noise ratio of a particular frequency band can permit further evaluation of higher order dynamics. Large magnification factors are necessary to visualize and quantify resonant frequencies, but this amplification can produce ringing effects that degrade image quality and key features such as edges. In this work, the use of dynamic mode decomposition (DMD) permits an unsupervised approach to extracting structural dynamic parameters from video. Out-of-plane resonant frequencies are estimated from a 2.7-m wind turbine blade. The findings are compared to current state-of-the-art methodologies such as traditional wired sensing. The determination of resonant frequencies ultimately fosters further understanding of the large-structure dynamics present in optical data.

Keywords Phase-based motion magnification · Dynamic mode decomposition · Singular value decomposition
19.1 Introduction

In most vibration applications, the motion seen between sequential frames is small and thus difficult to track using computer vision approaches without the use of lengthy correlation techniques. Liu et al. introduced motion amplification as a way to accentuate small pixel variation over a series of frames [1]. This technique was then improved by linearly increasing motion amplification using a Taylor series expansion [2, 3]. Although effective, this approach was further improved upon by Wadhwa et al., who proposed the idea of using a complex steerable pyramid to decompose an image into its amplitude and phase [4, 5]. This is commonly referred to as the phase-based motion magnification (PMM) algorithm. Once the image is decomposed, the image phase is band-pass filtered and amplified to exaggerate subtle motion in a particular frequency band. Recently, PMM has been further investigated to better comprehend phase translation [6] and magnification factor restrictions [7]. PMM was found to be impactful in the structural dynamics community because it allows resonant frequencies to be isolated and investigated without the use of a stochastic pattern [8–10]. This approach has been adopted to amplify subtle motion for modal analysis of vibrating structures [11–15], while identifying parameters such as damping ratios [16] and natural frequencies [17]. The use of non-destructive evaluation (NDE) techniques permits a simpler alternative to traditional wired sensing approaches that require large data acquisition systems.
N. A. Valente · C. Niezrecki Department of Mechanical Engineering, University of Massachusetts Lowell, Lowell, MA, USA e-mail: [email protected]; [email protected] C. T. do Cabo Department of Mechanical and Materials Engineering, Worcester Polytechnic Institute, Worcester, MA, USA e-mail: [email protected] Z. Mao () Department of Mechanical Engineering, University of Massachusetts Lowell, Lowell, MA, USA Department of Mechanical and Materials Engineering, Worcester Polytechnic Institute, Worcester, MA, USA e-mail: [email protected] © The Society for Experimental Mechanics, Inc. 2024 J. Baqersad, D. Di Maio (eds.), Computer Vision & Laser Vibrometry, Volume 6, Conference Proceedings of the Society for Experimental Mechanics Series, https://doi.org/10.1007/978-3-031-34910-2_19
Recently in the literature, there have been several applications of NDE, including infrared imaging [18] and vibration estimation for structural health monitoring [19, 20]. Typically, with high-speed video, state estimation is a key component of capturing evolving dynamics in time [21]. Dynamic mode decomposition (DMD) is a data-driven approach that was first used to extract wave motion of fluid behavior [22–24]. Depending on the rate of flow, both standing and traveling waves become apparent as a result of motion through a fluid [25]. As aforementioned, the use of PMM helps to amplify subtle motion seen in video data. This work focuses on using amplified images as input to the DMD algorithm for quantification of resonant frequencies. Ultimately, the goal of this work is to acquire information that is analogous to traditional wired sensing approaches for evaluation of structural dynamic behavior. The accurate estimation of frequency content using full-field image data will aid in expediting vibration testing procedures of large-scale infrastructure.
19.2 Background

DMD relies on constructing a data matrix, X, consisting of snapshots of data that are evolving in time. Each snapshot at a specific time k is transformed to a column vector with dimensions (m · n, 1). Here, m · n represents the product of the height and width of the snapshot in pixels.

$$
X = \begin{bmatrix} \vert & \vert & & \vert \\ x_1 & x_2 & \cdots & x_{k-1} \\ \vert & \vert & & \vert \end{bmatrix}
\tag{19.1}
$$

$$
X' = \begin{bmatrix} \vert & \vert & & \vert \\ x_2 & x_3 & \cdots & x_{k} \\ \vert & \vert & & \vert \end{bmatrix}
\tag{19.2}
$$

Equations (19.1) and (19.2) represent the snapshot matrices, where X' is X shifted by one time step. The ultimate goal of DMD is to approximate a linear operator A that accurately describes the shift between X and X', or more simply

$$
X' \approx AX.
\tag{19.3}
$$

As discussed in [26, 27], proper orthogonal decomposition (POD) utilizes singular value decomposition (SVD), which is equivalent to principal component analysis (PCA) in the statistics domain [28, 29]. SVD decomposes a matrix into both its singular values and singular vectors. Given this fact, it can be said that

$$
X \approx U \Sigma V^{T}.
\tag{19.4}
$$

In Eq. (19.4), the left and right singular vectors are defined as $U \in \mathbb{R}^{(m \cdot n) \times r}$ and $V \in \mathbb{R}^{k \times r}$, respectively. Here, the POD modes of the system are contained within the matrix U. Similarly, the singular values are defined by $\Sigma \in \mathbb{R}^{r \times r}$, where r is a specified rank. Here, r is used to reduce the dimension of the original input matrix, as proposed by Brunton et al. [30]. For simplicity, the truncated rank matrices will be regarded from now on as $\tilde{U}$, $\tilde{\Sigma}$, and $\tilde{V}$. Using only a handful of singular vectors that contain the dominant energy drastically reduces the computational effort necessary to extract dynamic information from video data. Re-expressing Eq. (19.3) and substituting in (19.4) results in

$$
A = X' X^{\dagger} = X' \left( \tilde{U} \tilde{\Sigma} \tilde{V}^{T} \right)^{\dagger}.
\tag{19.5}
$$

Taking the Moore-Penrose inverse (†) of the decomposed X matrix,

$$
A = X' \tilde{V} \tilde{\Sigma}^{-1} \tilde{U}^{T}.
\tag{19.6}
$$

The matrix A is typically large; hence, a projection onto the lower-dimensional basis $\tilde{U}$ will aid in computing the underlying dynamics of the system:

$$
\tilde{A} = \tilde{U}^{T} A \tilde{U}.
\tag{19.7}
$$

Equation (19.7) utilizes the computed proper orthogonal decomposition (POD) modes to project the original matrix down to a smaller subspace. Combining Eqs. (19.6) and (19.7) results in the reduced-dimension matrix $\tilde{A}$:

$$
\tilde{A} = \tilde{U}^{T} X' \tilde{V} \tilde{\Sigma}^{-1}.
\tag{19.8}
$$

Now that the reduced-order matrix $\tilde{A}$ has been formed, the eigendecomposition of the matrix can take place, where

$$
\tilde{A} Q = Q \Lambda.
\tag{19.9}
$$

Finally, the algorithm arrives at the computation of the natural frequencies $\omega_k$ corresponding to the extracted DMD modes $\Phi$,

$$
\omega_k = \frac{\ln(\lambda_k)}{\Delta t}
\tag{19.10}
$$

$$
\Phi = X' \tilde{V} \tilde{\Sigma}^{-1} Q.
\tag{19.11}
$$

The time step $\Delta t$ is inversely proportional to the sampling rate at which the snapshots were taken (i.e., $\Delta t = 1/f_s$). It should be noted that Tu et al. analytically derived the equivalence between using a lower-ranked projection and approximating the dynamics in the full space [23].
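For concreteness, the steps in Eqs. (19.1)–(19.11) can be sketched in a few lines of NumPy; this is a generic exact-DMD sketch rather than the authors' implementation, and the rank r and the conversion of the eigenvalues to frequencies in Hz are illustrative choices.

```python
import numpy as np

def dmd_frequencies(frames, fs, r=10):
    """Minimal exact-DMD sketch of Eqs. (19.1)-(19.11). `frames` holds k
    snapshots with shape (k, m, n); `fs` is the sampling rate in Hz and `r`
    the truncation rank. Returns the DMD modes and their frequencies in Hz."""
    k = frames.shape[0]
    data = frames.reshape(k, -1).T                      # columns are vectorized snapshots
    X, Xp = data[:, :-1], data[:, 1:]                   # Eqs. (19.1)-(19.2)
    U, s, Vh = np.linalg.svd(X, full_matrices=False)    # Eq. (19.4)
    Ur, Sr, Vr = U[:, :r], np.diag(s[:r]), Vh[:r].T     # rank-r truncation
    A_tilde = Ur.T @ Xp @ Vr @ np.linalg.inv(Sr)        # Eq. (19.8)
    lam, Q = np.linalg.eig(A_tilde)                     # Eq. (19.9)
    Phi = Xp @ Vr @ np.linalg.inv(Sr) @ Q               # Eq. (19.11), exact DMD modes
    omega = np.log(lam.astype(complex)) * fs            # Eq. (19.10), with dt = 1/fs
    freqs_hz = np.abs(omega.imag) / (2 * np.pi)         # oscillation frequencies in Hz
    return Phi, freqs_hz
```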
19.3 Analysis Dynamics Extraction Using DMD The standing wave patterns or wakes that are present in the captured images contain a summation of frequencies and mode patterns that can be extracted by using DMD. Phase-based motion magnification will aid in bridging the gap between thermal fluid applications and structural dynamics. As aforementioned, PMM aids in exaggerating subtle oscillations; however, this process produces motion artifacts which can affect quantification of the resulting motion. DMD can be used to separate and extract the structural motion of a particular region of interest in the presence of PMM artifacts. Figure 19.1 highlights the designed DMD and PMM algorithm.
Fig. 19.1 Proposed DMD and PMM algorithm structure for full-field structural dynamics
Fig. 19.2 Data reduction techniques used as a basis for the DMD approach
The use of DMD permits a more expedited way of extracting out-of-plane frequency content while using solely image data. Information concerning the dynamics of the system can be identified thanks to the high-fidelity images that are captured by the optical system. As discussed earlier in this chapter, DMD utilizes singular value decomposition (SVD) to factorize a matrix into three components. This factorization aids in computing the proper orthogonal decomposition (POD) of the image data, as shown in Fig. 19.2. The formation of the correlation matrix, R, formulates the eigenvalue problem of the input data matrix. POD modes form an orthogonal basis, which can be used to project the full-field image data down to a reduced space. This projection truncates the rank of the original data set, which can be utilized to extract subtle dynamics. Ultimately, reducing the size of the original data matrix expedites the computation of modal coordinates. As commented on by Mizuno et al., dimensionality reduction affects the accuracy of frequency estimation [31]. It should also be noted that the modes extracted from DMD do not form an orthogonal basis. This means each mode is associated with its own sinusoidal behavior, which can prove to be physically meaningful in dynamic applications [23].
19.3.1 Dynamics Extraction of a Wind Turbine Blade

Wind turbine blades serve as a viable application for this algorithm, given their size and non-uniform cross section. Techniques such as a scanning laser Doppler vibrometer would provide an accurate optical assessment of the structure, but these tests often take upward of several hours to perform in the field. As shown in Fig. 19.3, a 2.7-m wind turbine blade was subjected to an out-of-plane excitation using a modal impact hammer. The maximum sampling rate of the optical system used was 300 Hz; therefore, investigating the first three resonant frequencies satisfies the Nyquist sampling criterion and permits dynamic parameter extraction. Implementation of PMM is useful in boosting the signal-to-noise ratio (SNR) of the first three resonant frequencies so that this motion can be better perceived by the DMD algorithm for parameter estimation. Table 19.1 provides the PMM parameters used on the input video depicted in Fig. 19.3a. Following magnification of each resonant frequency, the amplified images are input into the DMD algorithm. Figures 19.4, 19.5, and 19.6 display the results output by the image processing algorithm for the first, second, and third resonant frequencies, respectively. It can be seen that the first three resonant frequencies can be separated into individual modal coordinates and corresponding frequency spectra, which cannot be said for pixel-based operations.
Fig. 19.3 (a) 2.7-m-long wind turbine blade subjected to a roving impact test; (b) frequency response function of the wind turbine blade sampled at 800 Hz

Table 19.1 Magnification parameters for the 2.7-m-long wind turbine blade subjected to a roving impact excitation

| Resonant frequency | Frequency band ωl − ωh (Hz) | Magnification factor α |
|---|---|---|
| 1st bending | 3.0–7.0 | 10 |
| 2nd bending | 14.3–18.0 | 20 |
| 3rd bending | 35.5–39.5 | 50 |
Optical flow and other correlation-based techniques rely on the change in pixel intensity over time, but they are unable to decompose motion into individual contributions. The use of PMM enables DMD to capture out-of-plane frequency content of higher order dynamics and separate resonant frequency components of the structure. Table 19.2 presents the DMD-extracted frequencies versus those from the traditional wired sensing approach, with the resulting percent differences. The results show that DMD is successful in identifying the first three resonant frequencies of the 2.7-m wind turbine blade. The second and third bending modes are extracted with a high degree of accuracy due to the introduction of PMM to boost the contribution of those frequencies in the desired bands. The first bending frequency is extracted with a larger percent error due to the introduction of ghosting artifacts. It can be deduced that the use of PMM is not necessary to extract the motion characteristics of the first bending mode, given that the first bending mode is the dominant frequency that is perceivable in the sequence of raw images.
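PMM itself relies on a complex steerable pyramid decomposition; as a rough, intensity-based stand-in for the band-pass-and-amplify step parameterized in Table 19.1 (closer in spirit to the Eulerian magnification of Refs. [1–3] than to the phase-based method actually used here), one could filter each pixel's time series and add the amplified component back, as in the sketch below.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def bandpass_amplify(frames, fs, f_lo, f_hi, alpha):
    """Rough intensity-based stand-in for the magnification step: band-pass
    filter every pixel time series between f_lo and f_hi (Hz) and add the
    filtered component back, scaled by the factor alpha. `frames` is a float
    array of shape (k, m, n)."""
    sos = butter(4, [f_lo, f_hi], btype="bandpass", fs=fs, output="sos")
    filtered = sosfiltfilt(sos, frames, axis=0)   # temporal filtering, pixel-wise
    return frames + alpha * filtered

# e.g., emphasizing the first bending band of Table 19.1 (3.0-7.0 Hz, alpha = 10)
# magnified = bandpass_amplify(frames, fs=300.0, f_lo=3.0, f_hi=7.0, alpha=10.0)
```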
Fig. 19.4 DMD output for 2.7-m-long wind turbine blade: 1st resonant frequency
Fig. 19.5 DMD output for 2.7-m-long wind turbine blade: 2nd resonant frequency
Fig. 19.6 DMD output for 2.7-m-long wind turbine blade: 3rd resonant frequency

Table 19.2 Wired sensing versus DMD for out-of-plane frequency extraction of a 2.7-m-long wind turbine blade

| Resonant frequency | Accelerometer (Hz) | DMD (Hz) | Percent difference (%) |
|---|---|---|---|
| 1st bending | 5.47 | 5.65 | 3.29 |
| 2nd bending | 16.41 | 16.30 | 0.67 |
| 3rd bending | 37.50 | 37.25 | 0.67 |
19.4 Conclusion

This work introduces the use of dynamic mode decomposition (DMD) as a way to perform structural parameter extraction using solely image data. Rather than relying on traditional sensing techniques such as pixel tracking and facet correlation, DMD is a purely data-driven approach that is able to decompose full-field dynamic information into individual spectral components using only a portion of the original data set. Pixel tracking in video requires a large degree of contrast without background disturbance, which may not be available, as seen with the 2.7-m-long wind turbine blade tested as part of this research. DMD is able to separate multiple frequency components, which can make post-processing less cumbersome and simpler. It should be noted that the DMD algorithm requires only one camera to quantify both small and large resonant frequencies; DMD is thus ultimately a tool that can be used to extract out-of-plane frequency content with a single camera.

Acknowledgments This material is based upon work supported by the National Science Foundation under Grant No. 1762809. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.
References 1. Liu, C., Torralba, A., Freeman, W.T., Durand, F., Adelson, E.: Motion Magnification. ACM Trans. Graph. 24, 519–526 (2005). https://doi.org/ 10.1145/1186822.1073223 2. Wadhwa, N., et al.: Eulerian video magnification and analysis. Commun. ACM. 60, 87–95 (2016). https://doi.org/10.1145/3015573 3. Wu, H.-Y., Rubinstein, M., Shih, E., Guttag, J., Durand, F., Freeman, W.: Eulerian video magnification for revealing subtle changes in the world. ACM Trans. Graph. 31(4) (2012). https://doi.org/10.1145/2185520.2185561 4. Wadhwa, N., Rubinstein, M., Durand, F., Freeman, W.T.: Riesz pyramids for fast phase-based video magnification. In: Computational Photography (ICCP), 2014 IEEE International Conference on (2014). https://doi.org/10.1109/ICCPHOT.2014.6831820 5. Wadhwa, N., Rubinstein, M., Durand, F., Freeman, W.T.: Phase-based video motion processing. ACM Trans. Graph. (TOG). 32 (2013). https:/ /doi.org/10.1145/2461912.2461966 6. Collier, S., Dare, T.: Accuracy of phase-based optical flow for vibration extraction. J. Sound Vib., 117112 (2022). https://doi.org/10.1016/ j.jsv.2022.117112 7. Valente, N.A., do Cabo, C.T., Mao, Z., Niezrecki, C.: Quantification of phase-based magnified motion using image enhancement and optical flow techniques. Measurement. 189, 110508 (2022). https://doi.org/10.1016/j.measurement.2021.110508 8. Chen, J., Davis, A., Wadhwa, N., Durand, F., Freeman, W.T., Büyüköztürk, O.: Video camera–based vibration measurement for civil infrastructure applications. J. Infrastruct. Syst. 23, B4016013 (2016). https://doi.org/10.1061/(ASCE)IS.1943-555X.0000348 9. Chen, J., Wadhwa, N., Cha, Y.-J., Durand, F., Freeman, W.T., Buyukozturk, O.: Modal identification of simple structures with high-speed video using motion magnification. J. Sound Vib. 345 (2015). https://doi.org/10.1016/j.jsv.2015.01.024 10. Valente, N.A., do Cabo, C.T., Mao, Z., Niezrecki, C.: Template matching and particle filtering for structural identification of high- and lowfrequency vibration. In: Cham, D.D.M., Baqersad, J. (eds.) Rotating Machinery, Optical Methods & Scanning LDV Methods, Volume 6, pp. 43–50. Springer International Publishing (2023). https://doi.org/10.1007/978-3-031-04098-6_5 11. Dorn, C.J., et al.: Automated extraction of mode shapes using motion magnified video and blind source separation. In: Mains, M. (ed.) Topics in Modal Analysis & Testing, Volume 10, pp. 355–360. Springer International Publishing (2016). https://doi.org/10.1007/978-3-319-302492_32 12. Siringoringo, D.M., Wangchuk, S., Fujino, Y.: Noncontact operational modal analysis of light poles by vision-based motion-magnification method. Eng. Struct. 244, 112728 (2021). https://doi.org/10.1016/j.engstruct.2021.112728 13. Poozesh, P., Sarrafi, A., Mao, Z., Avitabile, P., Niezrecki, C.: Feasibility of extracting operating shapes using phase-based motion magnification technique and stereo-photogrammetry. J. Sound Vib. (2017). https://doi.org/10.1016/j.jsv.2017.06.003 14. Sarrafi, A., Mao, Z., Niezrecki, C., Poozesh, P.: Vibration-based damage detection in wind turbine blades using phase-based motion estimation and motion magnification. J. Sound Vib. 421, 300–318 (2018). https://doi.org/10.1016/j.jsv.2018.01.050 15. Valente, N.A., Sarrafi, A., Mao, Z., Niezrecki, C.: Streamlined particle filtering of phase-based magnified videos for quantified operational deflection shapes. Mech. Syst. Signal Process. 177, 109233 (2022). https://doi.org/10.1016/j.ymssp.2022.109233 16. 
Yang, Y., et al.: Blind identification of full-field vibration modes from video measurements with phase-based video motion magnification. Mech. Syst. Signal Process. 85, 567–590 (2017). https://doi.org/10.1016/j.ymssp.2016.08.041 17. do Cabo, C.T., Valente, N.A., Mao, Z.: Motion magnification for optical-based structural health monitoring. In: Fromme, P., Su, Z. (eds.) Health Monitoring of Structural and Biological Systems XIV, vol. 11381, pp. 221–227. SPIE (2020). https://doi.org/10.1117/12.2559266 18. Kulkarni, N.N., Dabetwar, S., Benoit, J., Yu, T., Sabato, A.: Comparative analysis of infrared thermography processing techniques for roadways’ sub-pavement voids detection. NDT & E Int. 129, 102652 (2022). https://doi.org/10.1016/j.ndteint.2022.102652 19. Xiao, P., Wu, Z.Y., Christenson, R., Lobo-Aguilar, S.: Development of video analytics with template matching methods for using camera as sensor and application to highway bridge structural health monitoring. J. Civ. Struct. Heal. Monit. 10(3), 405–424 (2020). https://doi.org/ 10.1007/s13349-020-00392-6 20. Qiu, Q., Lau, D.: Defect detection in FRP-bonded structural system via phase-based motion magnification technique. Struct. Control. Health Monit. 25, e2259–e2259 (2018). https://doi.org/10.1002/stc.2259 21. Xiao, X., Xu, X., Shen, W.: Identification of frequencies and track irregularities of railway bridges using vehicle responses: a recursive Bayesian Kalman filter algorithm. J. Eng. Mech. 148(9), 04022051 (2022). https://doi.org/10.1061/(ASCE)EM.1943-7889.0002140 22. Taira, K., et al.: Modal analysis of fluid flows: an overview. AIAA J. 55(12), 4013–4041 (2017). https://doi.org/10.2514/1.J056060 23. Tu, J.H.: Dynamic Mode Decomposition: Theory and Applications [Online]. Available: https://umasslowell.idm.oclc.org/login?url=https:/ /www.proquest.com/dissertations-theses/dynamic-mode-decomposition-theory-applications/docview/1458341928/se-2?accountid=14575 (2013) 24. Kutz, J.N., Brunton, S.L., Brunton, B.W., Proctor, J.L.: Dynamic Mode Decomposition. Society for Industrial and Applied Mathematics (2016) 25. Proctor, J.L., Brunton, S.L., Kutz, J.N.: Dynamic mode decomposition with control. SIAM J. Appl. Dyn. Syst. 15(1), 142–161 (2016). https:// doi.org/10.1137/15M1013857 26. Wall, M.E., Rechtsteiner, A., Rocha, L.M.: Singular value decomposition and principal component analysis. In: Berrar, D.P., Dubitzky, W., Granzow, M. (eds.) A Practical Approach to Microarray Data Analysis, pp. 91–109. Springer US, Boston (2003) 27. Wold, S., Esbensen, K., Geladi, P.: Principal component analysis. Chemom. Intell. Lab. Syst. 2(1), 37–52 (1987). https://doi.org/10.1016/01697439(87)80084-9 28. Jolliffe, I.: Principal component analysis. In: Encyclopedia of Statistics in Behavioral Science (2005) 29. Pearson, K.: LIII. On lines and planes of closest fit to systems of points in space. London Edinburgh Dublin Philos. Mag. J. Sci. 2(11), 559–572 (1901). https://doi.org/10.1080/14786440109462720 30. Brunton, S.L., Kutz, J.N.: Data-Driven Science and Engineering: Machine Learning, Dynamical Systems, and Control. Cambridge University Press (2019) 31. Mizuno, Y., Duke, D., Atkinson, C., Soria, J.: Investigation of wall-bounded turbulent flow using dynamic mode decomposition. J. Phys. Conf. Ser. 318(4), 042040 (2011). https://doi.org/10.1088/1742-6596/318/4/042040
Chapter 20
Time-Inferred Autoencoder for Construction and Prediction of Spatiotemporal Characteristics from Dynamic Systems Using Optical Data Nitin Nagesh Kulkarni, Nicholas A. Valente, and Alessandro Sabato
Abstract Dynamic spatiotemporal (ST) characteristics of a structure, such as natural frequency, frequency spectra, and operational deflection shapes, are used to assess and monitor a system’s condition. Recent developments in machine learning and computer vision approaches have created new paradigms to utilize ST characteristics and extract complex dynamics. Commonly used machine learning models, such as traditional autoencoders, lack the ability to learn complex phenomena in the latent space and extract important structural dynamic parameters. Hence, this study proposes a novel model based on a time-inferred autoencoder (TIA) to learn the ST characteristics of a structure of interest. To validate the proposed approach, experiments were conducted by collecting high-speed videos of multi-degree of freedom systems, which were used to train the TIA model and understand the underlying dynamics of the targeted system. Following the training stage, the TIA model was able to reconstruct the full line-of-sight optical data that can be used for identifying resonant frequencies. The robustness of the TIA approach and its capability to adapt to changes in the dynamics of the inspected structure was evaluated as a function of different levels of structural damage. If further developed, TIA can be used as a structural health monitoring tool to learn the dynamics of a system from videos without having to utilize contact-based sensors. Keywords Computer vision · Optical data · Autoencoder
20.1 Introduction

Vibration-based condition monitoring is a well-established structural health monitoring (SHM) method to extract information about the conditions of a dynamic system [1–3]. Traditionally, the structural response of the targeted system is collected using sensors such as accelerometers and strain gauges [4–6]. Numerous studies have demonstrated the importance of structural dynamic parameters for condition monitoring, including natural frequencies, operational deflection shapes (ODS), and frequency spectra [7–10]. Recent advancements in camera technology, optical sensors, and image-processing algorithms have increased the applicability of non-contact measuring methods for SHM [11, 12]. For example, techniques such as three-dimensional digital image correlation and three-dimensional point tracking have developed into useful tools for performing non-contact measurements and extracting full line-of-sight displacement and strain fields, geometry profiles, and modal parameters of aerospace, civil, energy, and mechanical engineering systems [13–16]. The use of machine learning (ML) tools for SHM has gained popularity due to robust, advanced, and easy-to-use algorithms [17–20]. For example, traditional autoencoders are a family of ML algorithms that can be used to learn features based on datasets with reduced dimensions [21]. Autoencoders rely on a limited number of neurons in the convolutional neural network (CNN) to learn information about latent variables (i.e., reduced numbers of characteristic parameters) and describe the dynamic behavior of complex systems [22]. Autoencoders have been used to learn the dynamic characteristics of a system in chaotic, fluid mechanics, and weather forecasting domains [23]. Autoencoders have also been used in denoising and compressed sensing applications to reconstruct missing data points from both spatial and temporal series [18]. However, traditional autoencoders cannot learn from dynamic phenomena that change rapidly in the latent space. This characteristic limits autoencoders' capability to extract important structural dynamic parameters (e.g., natural frequency, frequency spectra, and ODSs) [24]. Traditional autoencoders have shown limited accuracy when the vibrations of a point must be measured to learn the dynamic characteristics of multi-degree of freedom (M-DOF) systems.
N. N. Kulkarni () · N. A. Valente · A. Sabato Department of Mechanical Engineering, University of Massachusetts, Lowell, MA, USA e-mail: [email protected]; [email protected]; [email protected] © The Society for Experimental Mechanics, Inc. 2024 J. Baqersad, D. Di Maio (eds.), Computer Vision & Laser Vibrometry, Volume 6, Conference Proceedings of the Society for Experimental Mechanics Series, https://doi.org/10.1007/978-3-031-34910-2_20
This lack of accuracy is caused by the inability of traditional autoencoders to treat the dataset with respect to time, because the CNN is not dense enough to support advanced learning [25]. Poor performance is also caused by the use of loss functions, such as the least mean squared error, that are more sensitive to outliers [24]. To overcome the limitations listed above, a novel ML model based on a time-inferred autoencoder (TIA) is proposed to learn the ST characteristics of a structure of interest and reconstruct full line-of-sight vibration data from a pre-trained autoencoder. In this study, the proposed TIA was tested for robustness using various damage scenarios, showing how it can be used as a condition monitoring tool for SHM. This chapter is divided as follows. The mathematical framework and formulation of the TIA are summarized in Sect. 20.2 (Background). An overview of the laboratory setup, tests, and analysis of the results is presented in Sect. 20.3 (Analysis). Conclusions and future works are described in Sect. 20.4 (Conclusion).
20.2 Background

When a video of a vibrating structure is collected, the displacement of the structure can be extracted from the change of intensity I(x, y) of each pixel in the image over time t. The change in intensity over time, I(x, y, t), of each pixel can be used as a temporal encoding characteristic to predict future positions of the structure at time t + 1. TIA checks what features propagate from times t − 1, t, and t + 1 into time t + 2 and keeps or rejects the pixel(s) based on the results of this operation performed for the entire duration t + tn + 1 of the video. To make sure the reconstruction of ST characteristics is learned effectively, for each image frame at time instance t, the features of both the t + 1 and t − 1 image frames are mapped and used to compute the new memory activations (see Fig. 20.1). The three-stage process is based on long short-term memory (LSTM) cells that map the features from multiple time instance data points. Before activating the LSTM process, the video is transformed from a two-dimensional (2D) domain (i.e., x and y pixel coordinates) into a 1D vector. Classic LSTM cells then perform fully connected, linearly biased transformations on the newly obtained vector, followed by nonlinear activation functions to compute gate and cell activations. In the approach proposed in this chapter, the fully connected, linearly biased transformation is replaced with temporal local convolutions to deal with the characteristics of the input image data that change over time. As a result, at time t, the activation of a spatiotemporal convolutional LSTM cell is given by:

$$i_t = \sigma\left(x_t * w_{xi} + h_{t-1} * w_{hi}\right) \tag{20.1a}$$

$$f_t = \sigma\left(x_t * w_{xf} + h_{t-1} * w_{hf}\right) \tag{20.1b}$$

$$\tilde{c}_t = \tanh\left(x_t * w_{xc} + h_{t-1} * w_{hc}\right) \tag{20.1c}$$

$$c_t = \tilde{c}_t \odot i_t + c_{t-1} \odot f_t \tag{20.1d}$$

$$o_t = \sigma\left(x_t * w_{xo} + h_{t-1} * w_{ho}\right) \tag{20.1e}$$

$$h_t = o_t \odot \tanh(c_t) \tag{20.1f}$$

Fig. 20.1 Transfer of features from hidden state in LSTM cells
where $i_t$, $f_t$, $\tilde{c}_t$, and $o_t$ are the input, forget, cell, and output gates, respectively; $c_t$, $c_{t-1}$, $h_t$, and $h_{t-1}$ are the memory and activation states at times t and t − 1; σ and tanh are the sigmoid and hyperbolic tangent functions; and * and ⊙ represent the convolution and Hadamard product, respectively. $x_t$ is the input at time t, while $w_{xi}$, $w_{xo}$, and $w_{xf}$ are the weights associated with the input x and the input, output, and forget cells, respectively. Similarly, $w_{hi}$, $w_{ho}$, and $w_{hf}$ are the weights associated with the hidden state h and the input, output, and forget cells, respectively. For the TIA model proposed in this chapter, the LSTM layer architecture is shown in Fig. 20.2. It should be noted that, for the specific case discussed in this research, two encoding layers, one return vector, and two decoding layers have been used for the TIA model.
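As a concrete illustration of Eqs. (20.1a)–(20.1f), the single-step update below transcribes the cell directly in NumPy, using 1-D 'same' convolutions as a stand-in for the local convolutions described in the text; the kernel sizes, the dictionary of weights, and the omission of bias terms are assumptions made for the sketch.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def conv_lstm_step(x_t, h_prev, c_prev, w):
    """One time step of Eqs. (20.1a)-(20.1f). `x_t`, `h_prev`, and `c_prev` are
    1-D arrays (the vectorized frame and the previous hidden and cell states);
    `w` is a dict of 1-D kernels keyed as in the equations (e.g., w['xi'])."""
    conv = lambda a, k: np.convolve(a, k, mode="same")
    i_t = sigmoid(conv(x_t, w["xi"]) + conv(h_prev, w["hi"]))     # input gate, Eq. (20.1a)
    f_t = sigmoid(conv(x_t, w["xf"]) + conv(h_prev, w["hf"]))     # forget gate, Eq. (20.1b)
    c_hat = np.tanh(conv(x_t, w["xc"]) + conv(h_prev, w["hc"]))   # cell candidate, Eq. (20.1c)
    c_t = c_hat * i_t + c_prev * f_t                              # memory update, Eq. (20.1d)
    o_t = sigmoid(conv(x_t, w["xo"]) + conv(h_prev, w["ho"]))     # output gate, Eq. (20.1e)
    h_t = o_t * np.tanh(c_t)                                      # new activation, Eq. (20.1f)
    return h_t, c_t
```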
20.3 Analysis

Several experiments were conducted to validate how well the proposed TIA model can learn the ST characteristics of the targeted system. The TIA model was trained using the response of an undamaged cantilever beam excited by an impact hammer. The beam used in the test had a length of 1.5 m and a hollow rectangular cross section of 5.3 × 2.8 cm with a thickness equal to 0.3 cm. For the tests described in this research, a Photron FASTCAM-CMOS four-megapixel high-speed camera, working at a sampling rate of 2000 frames per second (fps), was used to record four videos, each 0.5 s long (see Fig. 20.3a).
Fig. 20.2 Time inferred autoencoder architecture using LSTM layers for learning the dynamics of the system
The four datasets collected included (i-a) one with the undamaged beam used to train the TIA model, (i-b) one with the undamaged beam for testing (see Fig. 20.3b), and (ii) and (iii) two with the beam in damaged configurations to simulate a cross-sectional area reduction (see Fig. 20.3c) and mass loading at the free end of the beam (see Fig. 20.3d). It should be noted that the datasets collected for configurations #1b, #2, and #3 were fed into the TIA model trained with the video of the beam in undamaged conditions (i.e., configuration #1a) to determine whether or not the proposed approach can identify the ST characteristics of the system with unlabeled (or untrained) datasets. The optical data collected with the high-speed camera for configuration #1a were used as ground truth by extracting the displacement of the beam using the open-source software Digital Image Correlation engine (DICe) [26], as shown in Fig. 20.4a. From the extracted displacement, the first three natural frequencies of the beam were calculated to be equal to 10.66 Hz, 67.98 Hz, and 186.60 Hz (see Fig. 20.4b). As mentioned before, two different sets of data were collected with the beam in undamaged conditions. The first set of data was used to train the TIA model, while the second set was used for testing. The model was trained on 1000 images per set at a 1 × 10−3 learning rate over 102 epochs. Figure 20.5 shows the training and validation loss curves of the TIA obtained for the discussed settings of the neural network. As can be observed from the figure, training and validation losses decrease with the increase in training steps, and after ~60 epochs, both losses tend to stabilize. Figure 20.6a, b presents the displacement time histories and frequency spectra extracted using the optical data (i.e., original) and the ones computed using the TIA. From Fig. 20.6a, it is possible to observe that the TIA reconstructs the displacement of the cantilever beam from configuration #1b with an accuracy equal to 96% when compared to the data processed using DICe. The TIA autoencoder was also able to calculate the first three natural frequencies of the beam to be equal to 10.00 Hz, 68.01 Hz, and 186.10 Hz, with less than 2% error. These results suggest that the TIA model trained with the data collected for configuration #1a can learn the ST characteristics when tested on configuration #1b. To check the robustness of the TIA model trained with the video of the undamaged beam in configuration #1a, the model was tested using two different damage scenarios. The videos collected with the beam in configurations #2 and #3 were processed using the pre-trained TIA, generating the results shown in Fig. 20.7. Here the frequency spectra extracted by processing the videos using DICe are compared to the spectra obtained from the TIA model.
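The chapter specifies the TIA only at the level of two encoding layers, one return vector, and two decoding layers (Fig. 20.2), together with the 1 × 10−3 learning rate and 102 epochs used above; the Keras sketch below is one hypothetical way such an LSTM autoencoder could be assembled, with the layer widths, latent dimension, and mean-absolute-error loss being assumptions rather than the authors' settings.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_tia(n_timesteps, n_pixels, latent_dim=64):
    """Hypothetical LSTM autoencoder with the stated two-encoder /
    return-vector / two-decoder layout; widths and loss are illustrative."""
    inp = layers.Input(shape=(n_timesteps, n_pixels))        # flattened frames over time
    x = layers.LSTM(128, return_sequences=True)(inp)         # encoding layer 1
    x = layers.LSTM(latent_dim)(x)                           # encoding layer 2 (latent state)
    x = layers.RepeatVector(n_timesteps)(x)                  # "return vector" stage
    x = layers.LSTM(latent_dim, return_sequences=True)(x)    # decoding layer 1
    x = layers.LSTM(128, return_sequences=True)(x)           # decoding layer 2
    out = layers.TimeDistributed(layers.Dense(n_pixels))(x)  # reconstructed frames
    model = keras.Model(inp, out)
    model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-3), loss="mae")
    return model

# model = build_tia(n_timesteps=1000, n_pixels=height * width)
# model.fit(train_frames, train_frames, epochs=102,
#           validation_data=(test_frames, test_frames))
```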
Fig. 20.3 Validation of the proposed TIA model: (a) experimental setup showing the high-speed camera used to record the response of a cantilever beam; (b) undamaged beam (i.e., configuration #1); (c) damaged beam with sudden change in cross section highlighted by a bounding box (i.e., configuration #2); and (d) damaged beam with additional mass attached at the free end (i.e., configuration #3)
Fig. 20.4 ST characteristics of the cantilever beam in undamaged condition (i.e., configuration #1a): (a) displacement time history extracted with DICe; (b) frequency spectrum calculated from the displacement data
Fig. 20.5 Trend of training and validation losses of the TIA model as a function of the number of epochs used
In particular, the results shown in Fig. 20.7a refer to the data collected for the beam in configuration #2 (i.e., sudden change in cross section), while Fig. 20.7b plots the spectrum of the data collected for the beam in configuration #3 (i.e., mass added to the free end of the beam). Table 20.1 summarizes the values of the first three natural frequencies of the beam in the two configurations and the corresponding values reconstructed with the TIA model. As observed from Table 20.1, for the beam in configuration #2, the pre-trained TIA model can predict the first three natural frequencies of the system with an error below 0.11%. For the mass-loading scenario (configuration #3), the difference in the value of the first natural frequency was less than 1.1 Hz, and for the higher frequencies, the error was less than 2%.
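For completeness, the error figures quoted here can be reproduced directly from the ground-truth and TIA frequencies listed in Table 20.1; the short check below assumes the tabulated errors are taken relative to the ground-truth values.

```python
import numpy as np

# Ground-truth (DICe) and TIA-reconstructed natural frequencies in Hz (Table 20.1)
ground_truth = {"config_2": [10.00, 64.06, 180.20], "config_3": [6.99, 58.97, 172.20]}
tia_values   = {"config_2": [10.00, 64.00, 180.00], "config_3": [8.01, 60.06, 170.20]}

for cfg, gt in ground_truth.items():
    gt = np.array(gt)
    rec = np.array(tia_values[cfg])
    err_pct = 100.0 * np.abs(rec - gt) / gt      # error relative to the ground truth
    diff_hz = np.abs(rec - gt)
    print(cfg, np.round(err_pct, 2), np.round(diff_hz, 2))

# Configuration #2: all three errors stay at or below 0.11 %.
# Configuration #3: the first-mode difference is about 1 Hz; the higher modes stay below 2 %.
```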
Fig. 20.6 Comparison of the TIA reconstruction with the test data: (a) displacement time histories of both signals and (b) frequency spectra (FFT) of both signals
Fig. 20.7 Comparison of frequency spectra obtained for the two damaged scenarios: (a) beam with sudden change in cross section (i.e., configuration #2) and (b) beam with mass loading at the free end (i.e., configuration #3)
20.4 Conclusion
In this study, a framework named time-inferred autoencoder (TIA) was proposed as a method to learn the ST characteristics of a dynamic system. A TIA model was trained using the videos of a multi-degree-of-freedom system in an undamaged configuration and showed the capability to preserve the system’s dynamic characteristics with less than 2% error. To validate the robustness of the proposed method, the TIA pre-trained with the video of the structure in the undamaged configuration was tested when the structure had (i) a change in the cross-sectional area and (ii) additional mass. For both cases, the TIA model extracted the structure’s spatial, temporal, and frequency-domain characteristics in the new configurations. In particular, for the change in the cross-sectional area scenario, the TIA model was able to preserve the information about the structure’s natural frequencies with an error of less than 0.11%. For the mass-loading scenario, the TIA model reconstructed the higher frequencies with an error of less than 2% and the first natural frequency of the system with less than a 1.2 Hz difference.
Table 20.1 Values of the natural frequencies reconstructed using the TIA model

                    Ground truth (Hz)   TIA (Hz)   Error (%)
Configuration #2         10.00           10.00       0.00
                         64.06           64.00       0.09
                        180.20          180.00       0.11
Configuration #3          6.99            8.01      14.59
                         58.97           60.06       1.85
                        172.20          170.20       1.66
This work lays the foundation for a framework that can be used as a structural health monitoring tool for any dynamic system whose ST characteristics change over time.
References
1. Yang, Y., Zhang, Y., Tan, X.: Review on vibration-based structural health monitoring techniques and technical codes. Symmetry. 13(11), 1998 (2021)
2. Amezquita-Sanchez, J.P., Adeli, H.: Signal processing techniques for vibration-based health monitoring of smart structures. Arch. Comput. Methods Eng. 23(1), 1–15 (2016)
3. Tcherniak, D., Mølgaard, L.L.: Vibration-based SHM system: application to wind turbine blades. In: Journal of Physics: Conference Series, vol. 628, no. 1, p. 012072. IOP Publishing (2015)
4. Sabato, A., Niezrecki, C., Fortino, G.: Wireless MEMS-based accelerometer sensor boards for structural vibration monitoring: a review. IEEE Sensors J. 17(2), 226–235 (2016)
5. Dos Reis, J., Oliveira Costa, C., Sá da Costa, J.: Strain gauges debonding fault detection for structural health monitoring. Struct. Control. Health Monit. 25(12), e2264 (2018)
6. Yang, W., Peng, Z., Wei, K., Tian, W.: Structural health monitoring of composite wind turbine blades: challenges, issues and potential solutions. IET Renew. Power Gener. 11(4), 411–416 (2017)
7. Lizé, E., Hudin, C., Guenard, N., Rébillat, M., Mechbal, N., Bolzmacher, C.: Combination of frequency shift and impedance-based method for robust temperature sensing using piezoceramic devices for SHM. arXiv preprint arXiv:1712.02832 (2017)
8. Li, Z., Feng, M.Q., Luo, L., Feng, D., Xu, X.: Statistical analysis of modal parameters of a suspension bridge based on Bayesian spectral density approach and SHM data. Mech. Syst. Signal Process. 98, 352–367 (2018)
9. Srivastava, V., Baqersad, J.: An optical-based technique to obtain operating deflection shapes of structures with complex geometries. Mech. Syst. Signal Process. 128, 69–81 (2019)
10. Valente, N.A., Sarrafi, A., Mao, Z., Niezrecki, C.: Streamlined particle filtering of phase-based magnified videos for quantified operational deflection shapes. Mech. Syst. Signal Process. 177, 109233 (2022)
11. Dong, C.Z., Catbas, F.N.: A review of computer vision–based structural health monitoring at local and global levels. Struct. Health Monit. 20(2), 692–743 (2021)
12. Feng, D., Feng, M.Q.: Computer vision for SHM of civil infrastructure: from dynamic response measurement to damage detection – a review. Eng. Struct. 156, 105–117 (2018)
13. Shafiei Dizaji, M., Alipour, M., Harris, D.K.: Leveraging full-field measurement from 3D digital image correlation for structural identification. Exp. Mech. 58(7), 1049–1066 (2018)
14. Sabato, A., Valente, N.A., Niezrecki, C.: Development of a camera localization system for three-dimensional digital image correlation camera triangulation. IEEE Sensors J. 20(19), 11518–11526 (2020)
15. Poozesh, P., Sabato, A., Sarrafi, A., Niezrecki, C., Avitabile, P.: A multiple stereo-vision approach using three dimensional digital image correlation for utility-scale wind turbine blades. In: Proceedings of IMAC XXXVI, p. 12 (2018)
16. Niezrecki, C., Baqersad, J., Sabato, A.: Digital image correlation techniques for non-destructive evaluation and structural health monitoring. In: Handbook of Advanced Non-Destructive Evaluation, p. 46. Springer, Cham (2018)
17. Mao, J., Wang, H., Spencer Jr., B.F.: Toward data anomaly detection for automated structural health monitoring: exploiting generative adversarial nets and autoencoders. Struct. Health Monit. 20(4), 1609–1626 (2021)
18. Ni, F., Zhang, J., Noori, M.N.: Deep learning for data anomaly detection and data compression of a long-span suspension bridge. Comput. Aided Civ. Inf. Eng. 35(7), 685–700 (2020)
19. Yuan, F.G., Zargar, S.A., Chen, Q., Wang, S.: Machine learning for structural health monitoring: challenges and opportunities. In: Sensors and Smart Structures Technologies for Civil, Mechanical, and Aerospace Systems, vol. 2020, p. 1137903 (2020). https://doi.org/10.1117/12.2561610
20. Flah, M., Nunez, I., Ben Chaabene, W., Nehdi, M.L.: Machine learning algorithms in civil structural health monitoring: a systematic review. Arch. Comput. Methods Eng. 28(4), 2621–2643 (2021)
21. Wang, Y., Yao, H., Zhao, S.: Auto-encoder based dimensionality reduction. Neurocomputing. 184, 232–242 (2016)
22. Yu, J., Zheng, X., Wang, S.: A deep autoencoder feature learning method for process pattern recognition. J. Process Control. 79, 1–15 (2019)
23. Brunton, S.L., Kutz, J.N.: Data-Driven Science and Engineering: Machine Learning, Dynamical Systems, and Control. Cambridge University Press (2022)
24. Shao, H., Jiang, H., Zhao, H., Wang, F.: A novel deep autoencoder feature learning method for rotating machinery fault diagnosis. Mech. Syst. Signal Process. 95, 187–204 (2017)
25. Kong, X., Li, X., Zhou, Q., Hu, Z., Shi, C.: Attention recurrent autoencoder hybrid model for early fault diagnosis of rotating machinery. IEEE Trans. Instrum. Meas. 70, 1–10 (2021)
26. Turner, D.Z.: Digital Image Correlation Engine (DICe) Reference Manual, Sandia Report, SAND2015-10606 O. https://www.osti.gov/biblio/1245432. Accessed Sept 2022 (2015)
Chapter 21
Rotational Operating Deflection Shapes Analysis with High-Speed Camera
Peter Bogatikov, Daniel Herfert, and Maik Gollnick
Abstract By using high-speed cameras, non-contact, high-resolution vibration measurements are possible for beginners and experts alike with little configuration effort. This makes full-surface measurements feasible, where each image pixel can be used as a vibration sensor. By using optical flow algorithms, measurements are possible without pretreatment of the target; this also applies to structures with homogeneous color. The technique is very well suited for vibration analysis and online condition monitoring of structures such as bridges and wind turbines. Within the scope of this chapter, the technique is used for the first time to investigate the model of a wind turbine in terms of an order-based operating deflection shape analysis. In addition to the deformation data, the speed signal is also generated from the video to perform an order analysis, so that the operating deflection shape analysis can be visualized as a function of order or speed.
Keywords Order analysis · Artificial intelligence · Optical flow · Rotating machinery · Operating deflection shape
21.1 Introduction
During operation, wind turbines are subject to vibration amplitudes of up to 1 m, even at normal wind speeds. In particular, the tower and the rotor are constantly exposed to vibrations. Knowledge of their vibration behavior is therefore an essential factor in the development, maintenance, and evaluation of turbines. Unfortunately, the collection of the measurement data required to analyze vibration behavior brings with it a number of challenges. The ambient excitation by the wind usually used for measurements during operation is not reproducible. Therefore, the use of a laser scanning vibrometer, which samples a number of measurement points across the structure one after another, is not feasible. On the other hand, mounting measurement sensors on the structure is not only very complex and costly but also provides low spatial measurement resolution. The increasing availability of high-speed, high-resolution cameras, as well as innovative algorithms from the fields of optical flow and artificial intelligence, has driven the development of image-based vibration measurement techniques that can be used for non-contact, synchronous, and spatially dense acquisition of vibration data, as we demonstrate in this work. The measurement data is analyzed by means of operating deflection shape (ODS) analysis. ODS analysis is a method to determine the vibration pattern of a structure under its own operating forces and can be applied in the time or frequency domain. When evaluating vibration data from a wind turbine, however, the problem arises that the highly varying rotational speed of the turbine influences the vibration characteristics of the structure. In particular, the turbine’s natural frequencies shift as the rotational speed changes. Therefore, a frequency-based ODS analysis is not useful. To address this problem, we introduce Order ODS, an application of order analysis (OA) techniques to ODS analysis. Classical OA deals with the analysis of structures that contain rotating elements such as engines or turbines. Each rotating component generates its own noise and vibration pattern, which contributes to the overall vibration. With OA, these individual patterns can be identified and analyzed for each rotating element. We transfer this concept to ODS analysis by introducing a new domain to represent the deflection shapes as a function of RPM. Then, the result is transferred to the target domain by Fourier transformation. Since the RPM signal is needed for Order ODS, it must be measured synchronously with the vibration response. Normally, an additional sensor is used for this purpose, but we show that the RPM data can be reconstructed if the vibration measurement is performed with a high-speed camera.
P. Bogatikov () · D. Herfert · M. Gollnick Department of Structural Dynamics/Pattern Recognition, Society for the Advancement of Applied Computer Science, Berlin, Germany e-mail: [email protected]; [email protected]; [email protected] © The Society for Experimental Mechanics, Inc. 2024 J. Baqersad, D. Di Maio (eds.), Computer Vision & Laser Vibrometry, Volume 6, Conference Proceedings of the Society for Experimental Mechanics Series, https://doi.org/10.1007/978-3-031-34910-2_21
We validated the approach on a LEGO model of a wind turbine; the method is transferable to real wind turbines. Stiffness and mass changes can easily be made to the model, allowing even more accurate validation.
21.2 Background
21.2.1 Order Analysis
The analysis of noise and vibration signals from rotating machinery can be done with OA. An order is a frequency component that corresponds to the RPM of a machine: in rotating structures, orders describe multiples or fractions of the rotational speed, i.e., proportionality constants between rotational speed and frequency. The frequency equal to the RPM is called the first order. The measured time signal can be transformed into the angular domain by digital resampling; thus, the RPM signal and the time signal can be synchronized with each other [1]. For this purpose, the signal is sampled at uniform angular steps. During a run-up or coast-down, the rotational speed changes constantly, which causes the frequencies of the excited modes to shift as well. Therefore, frequency analysis is greatly improved by viewing the signal in the order domain, which is independent of the rotational speed. In this method, the run-up is considered a sine sweep excitation. When an FFT is applied to the resampled signal, the resulting spectrogram shows the amplitudes as a function of RPM.
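A minimal sketch of this angular-domain resampling step is given below. It assumes a per-sample RPM estimate is available alongside the vibration signal; the function and parameter names are ours and do not refer to any particular software package.

```python
import numpy as np

def resample_to_angle(signal, rpm, fs, samples_per_rev=64):
    """Resample a uniformly time-sampled signal onto a uniform shaft-angle grid.

    signal : vibration time series sampled at fs (Hz)
    rpm    : instantaneous rotational speed per sample (rev/min), same length as signal
    fs     : sampling rate of the time signal (Hz)
    samples_per_rev : angular resolution of the resampled signal
    """
    # Integrate the shaft speed to obtain the cumulative angle in revolutions
    revs = np.cumsum(np.asarray(rpm) / 60.0) / fs
    # Uniform angle grid: one sample every 1/samples_per_rev of a revolution
    rev_grid = np.arange(0.0, revs[-1], 1.0 / samples_per_rev)
    # Read the signal off at the angles of the grid (linear interpolation)
    resampled = np.interp(rev_grid, revs, np.asarray(signal))
    return rev_grid, resampled

# Applying an FFT to segments of `resampled` yields spectra over an order axis:
# order k corresponds to k cycles per revolution, independent of the momentary RPM.
```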
21.2.2 Operating Deflection Shapes in Order Domain
Operating deflection shapes visualize how a given structure vibrates under operating conditions. Vibration is measured at a number of points on the structure in up to three directions. A geometric model of the structure in question is needed to animate the deflection shape. The measurement channels are matched to their corresponding points on the geometry. At points on the geometry where no measurements were taken, the deflections can be interpolated from surrounding measurement points. ODS analysis gives detailed insight into the operational vibration patterns of a structure. This can be useful for maintenance, damage detection, finding sources of unwanted vibration or noise, and a range of other applications. Typically, ODS analysis is performed in the time or frequency domain. For Time ODS, the measured time signal (displacement, velocity, or acceleration) is used directly to animate the geometry. For Frequency ODS, linear spectra (FFT), auto power spectra (APS), cross power spectra (XPS), frequency response functions (FRFs), or ODS FRFs calculated from the sensor data are used. Single frequency lines can be selected, and the geometry is deflected using the amplitude and phase values at that frequency [2]. The novel Order ODS analysis is based on time-domain measurement data and the corresponding RPM signal. In contrast to Time ODS, where the time series is directly applied as the deflection shape, for Order ODS the RPM signal is used to resample the data and convert it from the time domain to the order or frequency domain. For selected channels, different cuts can be extracted from the spectrogram in the order and frequency domain. These cuts can be visualized and applied to the geometry for the animation of the deflection shapes. This directly relates the associated RPM signal to an ODS of the machine. The cut shows the amplitude for a specific order or frequency as a function of RPM.
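The frequency-ODS step described above can be sketched as follows: compute one spectrum per channel, select the bin nearest the chosen frequency line, and use the resulting complex amplitudes to deflect the geometry. This is a generic illustration, not the WaveCam implementation; the function names are hypothetical.

```python
import numpy as np

def frequency_ods(channels, fs, f0):
    """Frequency-domain ODS: complex amplitude of every channel at a selected line.

    channels : array (n_channels, n_samples) of measured responses
    fs       : sampling rate (Hz)
    f0       : frequency line selected for the deflection shape (Hz)
    """
    n = channels.shape[1]
    spectra = np.fft.rfft(channels * np.hanning(n), axis=1)   # one spectrum per channel
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    k = np.argmin(np.abs(freqs - f0))                         # nearest FFT bin to f0
    return spectra[:, k]                                      # amplitude and phase per channel

def animate_frame(ods, phi):
    """Deflection of each measurement point at animation phase phi (radians)."""
    return np.real(ods * np.exp(1j * phi))

# Sweeping phi from 0 to 2*pi and applying animate_frame() to the geometry points
# produces the ODS animation at the selected frequency line.
```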
21.2.3 Optical Flow
The optical flow of an image sequence is the vector field of the velocities of visible points of the object space, projected into the image plane in the reference frame of the imaging optics. With this technique, the movement of pixels can be tracked and used for a variety of applications such as object tracking and recognition, image segmentation, and video compression [3]. There are two approaches for calculating the optical flow. For sparse optical flow, only a few pixels of a frame are considered, for example at edges or corners of an object. Calculating the flow vector for all image points is called dense optical flow. As all pixels in the image are considered, this method is more complex and time-consuming but has a higher resolution than sparse optical flow [4]. Since optical flow gives a measure of the movement of objects, it can be used for image-based vibration measurement. If dense optical flow is used, every pixel corresponds to a measurement channel.
Fig. 21.1 Measurement setup to extract vibrations from a wind turbine [6] with Chronos 2.1 high-speed camera and additional LED light
This provides a non-contact, dense, and easy-to-use measurement method, enabling the measurement of hundreds of thousands of positions simultaneously. The only sensor needed is the high-speed camera. The data can be processed using the software WaveCam [5].
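To illustrate the pixel-as-sensor idea, the sketch below extracts a dense displacement field per frame with OpenCV's Farneback optical flow. This is a generic, publicly available algorithm used here only for illustration; it is not the Optical Flow HR method described later in this chapter, and the sub-pixel resolution quoted there should not be expected from it.

```python
import cv2
import numpy as np

def pixel_displacement_fields(video_path, max_frames=1000):
    """Per-pixel displacement time series extracted from a video with dense optical flow.

    Farneback optical flow (OpenCV) is used only as a generic stand-in to illustrate
    the pixel-as-sensor principle; it is not the Optical Flow HR algorithm.
    """
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    if not ok:
        raise IOError("could not read video: %s" % video_path)
    ref = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)   # reference frame
    flows = []
    while len(flows) < max_frames:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Displacement of every pixel relative to the reference frame,
        # shape (H, W, 2), in units of pixels (x and y components)
        flow = cv2.calcOpticalFlowFarneback(ref, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        flows.append(flow)
    cap.release()
    return np.stack(flows)   # shape (n_frames, H, W, 2): one channel per pixel and direction
```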
21.3 Measurement Setup
For the validation of the presented methods, a LEGO model of a wind turbine [6] was used. The dimensions of the wind turbine are 840 mm × 630 mm × 330 mm. The model has an electric motor that, in its original state, rotates at a maximum of 20 RPM. The motor can be controlled with different voltages, making it possible to perform a run-up. In order to excite a wider frequency band and more potential resonances, the model was redesigned to reach 220 RPM. A Chronos 2.1 monochrome high-speed camera, set to 600 fps and a resolution of 828 × 752 pixels, was used to record the wind turbine’s vibration. The measurement setup can be seen in Fig. 21.1, which depicts the recording of a run-up. An Amaran 200d LED light and a softbox were used for homogeneous lighting conditions. The run-up had a duration of 20 s and was controlled manually.
21.4 Analysis
For Order ODS analysis, the RPM signal is needed in addition to the measurement data. This would usually require an additional sensor. Furthermore, both signals must be recorded simultaneously using the same sampling rate. Our solution enabled us to extract the RPM signal from the video data at the right sampling rate, so an additional sensor was not needed.
21.4.1 Optical Flow HR
Firstly, the structural response must be extracted from the video data. This turns each pixel into a single measurement point able to measure vibrations at the sub-pixel level, invisible to the human eye. We used a novel optical flow algorithm from the area of autonomous driving combined with artificial intelligence methods. This algorithm requires no additional speckle patterns and no pre-treatment of the test object. It provides new possibilities in two respects: firstly, very small sub-pixel deflections (down to 1/100,000 of a pixel), in the range of nanometers and below, can be measured; secondly, the algorithm provides data
in the frequency or time domain, which is comparable to conventional vibration sensors (accelerometers, laser vibrometers). This means it can be used not only for visualizing vibration shapes but also as a sensor in its own right. Due to these two unique features, the algorithm name is extended to Optical Flow High Resolution (Optical Flow HR). The test method was cross-validated using several conventional data acquisition methods, namely accelerometers and a laser Doppler vibrometer (LDV). The measured time signals were comparable between the three sensor types across all tested applications.
21.4.2 RPM Track from Video Data
To extract RPM data using WaveCam, motion data for all image points is first calculated using Optical Flow HR. Then, individual measurement points whose data correspond to structural responses can be selected on the structure. These measurement points can be saved in the favorites list. A combined spectrogram is then generated, defined at each point as the maximum over the spectrograms of the individual measurement points (see Fig. 21.2). In this spectrogram, the different orders appear as peak lines. Support points can now be set along one of the peak lines to determine the RPM signal. Since it is not clear on which order line the support points are located, it is necessary to enter the minimum and maximum rotational speed so that the RPM signal can be scaled correctly. The RPM signal can be reconstructed clearly and nearly without any gaps if multiple measurement points are selected. Regular spectral analysis, where a spectrogram is calculated by applying the fast Fourier transform (FFT) to multiple overlapping time-data segments and connecting the resulting spectra, would not suffice in this case. Each of these sub-spectra consists of spectral lines, and each spectral line is a measure of the intensity of vibrations at the associated frequency in the time signal. This assumes stationarity of the signal with respect to the amplitude at any given frequency.
Fig. 21.2 Overview of the Order ODS module of the software WaveCam. Left: Video image, for selection of measurement points and deflection shape animation. Top right: Spectrogram of a channel measured in x-direction (green square on the wind turbine blade), red dots in the spectrogram represent support points for RPM signal interpolation – they are placed manually by the user. Bottom right: Resulting RPM signal generated from the response data
If the frequency components change during the underlying time window, a frequency component will not appear as a clear peak at a single spectral line in the spectrum but will be “smeared” across multiple frequency lines. This results in several spectral lines with reduced intensity instead of a single spectral line with full intensity. This effect can be reduced by shortening the time windows until the RPM signal is essentially constant in each window. However, since the frequency resolution of an FFT is inversely proportional to the window length, this would reduce the frequency resolution of the spectrogram. Furthermore, there is the danger that the significantly reduced measurement time would no longer be representative of the phenomena to be investigated. The faster the RPM changes, the shorter the measuring time would have to be chosen and the lower the resolution of the spectrum would be, which would soon no longer meet the requirements of a well-founded diagnosis.
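One possible reading of the RPM-reconstruction workflow is sketched below: a max-hold spectrogram over the selected points, manually placed support points along one order line, and a linear scaling of the traced line into the entered speed range. The function is our own interpretation, not the WaveCam implementation, and the monotonic run-up assumption is ours.

```python
import numpy as np
from scipy.signal import spectrogram

def rpm_from_support_points(signals, fs, t_support, f_support, rpm_min, rpm_max):
    """Reconstruct an RPM track from support points placed on one order line.

    signals   : array (n_points, n_samples) with the selected measurement points
    fs        : sampling rate, i.e., the camera frame rate (Hz)
    t_support : times of the manually placed support points (s)
    f_support : frequencies of those support points read off the spectrogram (Hz)
    rpm_min, rpm_max : user-entered speed range used to scale the track
    """
    # Max-hold spectrogram over all selected points; in the workflow described
    # above this is the image on which the support points are placed.
    f, t, Sxx = spectrogram(signals[0], fs=fs, nperseg=256, noverlap=192)
    for sig in signals[1:]:
        Sxx = np.maximum(Sxx, spectrogram(sig, fs=fs, nperseg=256, noverlap=192)[2])

    # Interpolate the traced peak line onto the full time axis of the spectrogram
    line = np.interp(t, t_support, f_support)
    # Because the order number of the traced line is unknown, scale it linearly
    # into the entered speed range (assumes a monotonic run-up covering that range).
    rpm = rpm_min + (line - line.min()) * (rpm_max - rpm_min) / (line.max() - line.min())
    return t, rpm, (f, Sxx)
```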
21.4.3 Order Operating Deflection Shape Analysis
To remedy these problems, an order-based spectrogram is calculated instead. Rather than subdividing the time signal directly into overlapping sub-segments, the signal is first transformed into the angular domain by resampling. This is done using the given RPM signal. The time signal has to be sampled more frequently when the RPM is higher and, analogously, less frequently when the rotational speed is lower. Thus, in the resulting signal, the same rotational angle lies between any two consecutive data points. The FFT is then applied to overlapping segments to generate a spectrum with an equidistant order axis. WaveCam provides two different types of angle-based spectrograms, or Campbell diagrams: the frequency domain and the order domain Campbell diagram. RPM-dependent and RPM-independent spectral components and resonances can easily be distinguished in both of them. RPM-dependent components shift in the frequency domain (see Fig. 21.3) but are constant with regard to RPM in the order domain (see Fig. 21.4), while the opposite is true for RPM-independent components. For our wind turbine model, it can be seen that one dominating spectral component is excited during the run-up. Furthermore, this component shifts up the frequency band as the RPM increases, which clearly implies an RPM dependence. In contrast to a real wind turbine, the rotor blades on our model cannot be fixed properly and evenly. This results in amplitude deviations in the deflection shapes. Instability of the tower can possibly be explained by the fact that the wind turbine model is designed to rotate at a maximum of 20 RPM but, after the modification, rotates at up to 220 RPM. These irregularities result in additional harmonic frequencies. The order domain Campbell diagram exhibits two relevant orders: the interharmonic order 0.2 and the harmonic order 3, which have a significant influence on the excitation of the rotor blade.
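A sketch of the order-domain spectrogram and of extracting an order cut is given below; it reuses the angle-resampled signal from the sketch in Sect. 21.2.1, and the segment lengths are illustrative only.

```python
import numpy as np

def order_spectrogram(angle_signal, samples_per_rev, nperseg=1024, noverlap=768):
    """Order-domain spectrogram of a signal resampled to uniform shaft angle.

    angle_signal : signal sampled every 1/samples_per_rev of a revolution,
                   e.g., the output of resample_to_angle() from the earlier sketch
    Returns the order axis and one amplitude spectrum per overlapping angle segment.
    """
    step = nperseg - noverlap
    window = np.hanning(nperseg)
    starts = range(0, len(angle_signal) - nperseg + 1, step)
    spectra = np.array([np.abs(np.fft.rfft(angle_signal[s:s + nperseg] * window))
                        for s in starts])
    # The sample spacing is 1/samples_per_rev revolutions, so the frequency axis
    # of the FFT is in cycles per revolution, i.e., orders.
    orders = np.fft.rfftfreq(nperseg, d=1.0 / samples_per_rev)
    return orders, spectra

def order_cut(orders, spectra, order=3.0):
    """Amplitude of one fixed order across the run-up (one value per segment)."""
    k = np.argmin(np.abs(orders - order))
    return spectra[:, k]
```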
Fig. 21.3 Frequency domain Campbell diagram for a measuring point on a rotor blade
Fig. 21.4 Order domain Campbell diagram for a measuring point on a rotor blade
Fig. 21.5 Operating deflection shapes for order 3 at 185 RPM
To visualize the order ODS, a frequency or order cut can now be selected. The corresponding signal can be animated in the form of an ODS (see Fig. 21.5).
21.5 Conclusion
In this chapter, a new method for vibration analysis of rotating structures by means of a high-speed camera is presented. For this purpose, a novel form of ODS, the Order ODS, is introduced. It extends classical ODS analysis with a component
specifically designed for rotating structures. The use of Order ODS analysis is demonstrated using a model wind turbine as an example. For the first time, all required sensor data, such as the vibration measurement values and the excitation speed, are determined exclusively from the recorded video data. When variable-speed drives are used, order analysis is often the only way to investigate and precisely identify vibrations. The Campbell diagram representation also offers a simple way to determine whether a spectral component is rotational-speed-dependent or rotational-speed-independent. This offers interesting approaches toward designing fully automatic diagnostic algorithms. The representation of run-up or coast-down processes provides reliable information on natural frequencies and resonance modes. Besides wind turbines, the results can also be applied to any rotating machinery; overall, there are almost no machines or other mechanical systems that do not have rotating components such as wheels, gears, bearings, motors, turbines, or generators. Additionally, a novel optical flow approach called Optical Flow HR for extracting vibration measurements from video data without the need for speckle patterns is presented, which produces measurements in the nanometer range and below. The measured time data are comparable to those from conventional vibration sensors (e.g., accelerometers and laser vibrometers).
References
1. Saavedra, P.N., Rodriguez, C.: Accurate assessment of computed order tracking. Shock. Vib. 13, 13–32 (2006)
2. Schwarz, B., Richardson, M.H.: Introduction to Operating Deflection Shapes. CSI Reliability Week, Orlando, FL, October 1999
3. O’Donovan, P.: Optical Flow: Techniques and Applications. The University of Saskatchewan (2005)
4. Spruyt, V., Ledda, A., Philips, W.: Sparse optical flow regularization for real-time visual tracking. In: Proceedings – IEEE International Conference on Multimedia and Expo, pp. 1–6 (2013)
5. WaveCam product website: https://www.gfaitech.com/products/structural-dynamics/vibration-analysis-with-wavecam (2022)
6. Vestas Wind Turbine LEGO model product website: https://www.lego.com/de-de/product/vestas-wind-turbine-10268 (2022)