Yuriy S. Shmaliy Anand Nayyar Editors
7th International Conference on Computing, Control and Industrial Engineering (CCIE 2023) Advances in Computing, Control and Industrial Engineering VII
Lecture Notes in Electrical Engineering Volume 1047
Series Editors

Leopoldo Angrisani, Department of Electrical and Information Technologies Engineering, University of Napoli Federico II, Napoli, Italy
Marco Arteaga, Departament de Control y Robótica, Universidad Nacional Autónoma de México, Coyoacán, Mexico
Samarjit Chakraborty, Fakultät für Elektrotechnik und Informationstechnik, TU München, München, Germany
Jiming Chen, Zhejiang University, Hangzhou, Zhejiang, China
Shanben Chen, School of Materials Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
Tan Kay Chen, Department of Electrical and Computer Engineering, National University of Singapore, Singapore, Singapore
Rüdiger Dillmann, University of Karlsruhe (TH) IAIM, Karlsruhe, Baden-Württemberg, Germany
Haibin Duan, Beijing University of Aeronautics and Astronautics, Beijing, China
Gianluigi Ferrari, Dipartimento di Ingegneria dell'Informazione, Sede Scientifica Università degli Studi di Parma, Parma, Italy
Manuel Ferre, Centre for Automation and Robotics CAR (UPM-CSIC), Universidad Politécnica de Madrid, Madrid, Spain
Faryar Jabbari, Department of Mechanical and Aerospace Engineering, University of California, Irvine, CA, USA
Limin Jia, State Key Laboratory of Rail Traffic Control and Safety, Beijing Jiaotong University, Beijing, China
Janusz Kacprzyk, Intelligent Systems Laboratory, Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland
Alaa Khamis, Department of Mechatronics Engineering, German University in Egypt El Tagamoa El Khames, New Cairo City, Egypt
Torsten Kroeger, Intrinsic Innovation, Mountain View, CA, USA
Yong Li, College of Electrical and Information Engineering, Hunan University, Changsha, Hunan, China
Qilian Liang, Department of Electrical Engineering, University of Texas at Arlington, Arlington, TX, USA
Ferran Martín, Departament d'Enginyeria Electrònica, Universitat Autònoma de Barcelona, Bellaterra, Barcelona, Spain
Tan Cher Ming, College of Engineering, Nanyang Technological University, Singapore, Singapore
Wolfgang Minker, Institute of Information Technology, University of Ulm, Ulm, Germany
Pradeep Misra, Department of Electrical Engineering, Wright State University, Dayton, OH, USA
Subhas Mukhopadhyay, School of Engineering, Macquarie University, NSW, Australia
Cun-Zheng Ning, Department of Electrical Engineering, Arizona State University, Tempe, AZ, USA
Toyoaki Nishida, Department of Intelligence Science and Technology, Kyoto University, Kyoto, Japan
Luca Oneto, Department of Informatics, Bioengineering, Robotics and Systems Engineering, University of Genova, Genova, Italy
Bijaya Ketan Panigrahi, Department of Electrical Engineering, Indian Institute of Technology Delhi, New Delhi, Delhi, India
Federica Pascucci, Dipartimento di Ingegneria, Università degli Studi Roma Tre, Roma, Italy
Yong Qin, State Key Laboratory of Rail Traffic Control and Safety, Beijing Jiaotong University, Beijing, China
Gan Woon Seng, School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore, Singapore
Joachim Speidel, Institute of Telecommunications, University of Stuttgart, Stuttgart, Germany
Germano Veiga, FEUP Campus, INESC Porto, Porto, Portugal
Haitao Wu, Academy of Opto-electronics, Chinese Academy of Sciences, Haidian District, Beijing, China
Walter Zamboni, Department of Computer Engineering, Electrical Engineering and Applied Mathematics, DIEM, Università degli studi di Salerno, Fisciano, Salerno, Italy
Junjie James Zhang, Charlotte, NC, USA
Kay Chen Tan, Dept. of Computing, Hong Kong Polytechnic University, Kowloon Tong, Hong Kong
The book series Lecture Notes in Electrical Engineering (LNEE) publishes the latest developments in Electrical Engineering—quickly, informally and in high quality. While original research reported in proceedings and monographs has traditionally formed the core of LNEE, we also encourage authors to submit books devoted to supporting student education and professional training in the various fields and application areas of electrical engineering. The series covers classical and emerging topics concerning:

• Communication Engineering, Information Theory and Networks
• Electronics Engineering and Microelectronics
• Signal, Image and Speech Processing
• Wireless and Mobile Communication
• Circuits and Systems
• Energy Systems, Power Electronics and Electrical Machines
• Electro-optical Engineering
• Instrumentation Engineering
• Avionics Engineering
• Control Systems
• Internet-of-Things and Cybersecurity
• Biomedical Devices, MEMS and NEMS
For general information about this book series, comments or suggestions, please contact [email protected]. To submit a proposal or request further information, please contact the Publishing Editor in your country:

China: Jasmine Dou, Editor ([email protected])
India, Japan, Rest of Asia: Swati Meherishi, Editorial Director ([email protected])
Southeast Asia, Australia, New Zealand: Ramesh Nath Premnath, Editor ([email protected])
USA, Canada: Michael Luby, Senior Editor ([email protected])
All other Countries: Leontina Di Cecco, Senior Editor ([email protected])

** This series is indexed by EI Compendex and Scopus databases. **
Editors Yuriy S. Shmaliy University of Guanajuato Salamanca, Guanajuato, Mexico
Anand Nayyar School of Computer Science Duy Tan University Da Nang, Vietnam
ISSN 1876-1100 ISSN 1876-1119 (electronic) Lecture Notes in Electrical Engineering ISBN 978-981-99-2729-6 ISBN 978-981-99-2730-2 (eBook) https://doi.org/10.1007/978-981-99-2730-2 © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd. The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore
Preface
The 7th International Conference on Computing, Control and Industrial Engineering (CCIE 2023) was held online in Hangzhou on February 25–26, 2023, due to the COVID-19 pandemic. It was organized by the China University of Geosciences (Wuhan) and co-organized by the Wuhan Textile University, with support from Jiangnan University; Wuhan University of Sciences and Technology, China; the China Communications Industry Association; the University of Haute-Alsace, France; the Institute of Applied Physics of the Russian Academy of Sciences, Russia; the IEEE Northern Territory Subsection and Charles Darwin University, Australia; the University of Electronic Science and Technology of China, China; and others. CCIE 2023 comprised five sessions: Electronic, Electrical Engineering and Automation; Artificial Intelligence and Its Applications; Signal Processing and Pattern Recognition; Communication, Information, and Networking; and Fault Diagnosis, Modelling, Simulation and Optimization Techniques. This volume presents the select proceedings of CCIE 2023: 103 papers chosen from 221 submissions. It includes the latest research on electronic and electrical engineering, automation, machine learning, computer vision, pattern recognition, computational learning theory, data analytics, network intelligence, advanced control and instrumentation, computer-aided system modelling, simulation and design, fault detection and diagnosis, cybersecurity, communication and information technologies, optimization techniques, and signal processing, together with their applications to real-world engineering problems. The electrical engineering topics include AC-DC/DC-DC converters, power electronics applications, smart grid, PID control, voltage restorers, circuits and systems, and IoT devices and management platforms. Several papers present control and automation techniques that enhance control system performance.
The machine learning section covers reinforcement learning, extreme learning machines (ELM), artificial neural networks (ANN), convolutional neural networks (CNN), deep learning, and related areas. Advanced algorithms are introduced for computer vision, pattern recognition, scene understanding, 3D object recognition, image fusion, localization and tracking, medical image analysis, and more. Optimization techniques, including online, reinforcement, manifold, multi-task and semi-supervised approaches, are applied to industrial engineering applications and to optimizing the performance of industrial systems and components. Some papers are dedicated to fault detection and diagnosis, drawing on advances in signal processing for both normal and abnormal categories of real-world signals, for instance signals arising from biomedical image processing, speech, text, and video. These contributions, selected by means of a rigorous international peer-review process, present a wealth of exciting ideas and methods that will open novel research directions within the community.

Finally, we appreciate all the authors who contributed their latest research to this volume. We thank our keynote speakers: Prof. Pascal Lorenz, University of Haute-Alsace, France; Dr. Salin Mikhail Borisovich, Institute of Applied Physics of the Russian Academy of Sciences, Russia; Dr. Mohsen Kazemiann, Yazd University, Iran; Prof. Jun Gou, University of Electronic Science and Technology of China, China; Prof. Hongjun Wang, Southwest Jiaotong University, China; and Prof. Xin Xu, Wuhan University of Sciences and Technology, China, for their excellent speeches at the online conference. We thank Prof. Yuriy S. Shmaliy, University of Guanajuato, Mexico; Prof. Anand Nayyar, Duy Tan University, Da Nang, Vietnam; Dr. Josh Sheng; and all the reviewers involved, for their kind support and editorial work on CCIE 2023. We also express our sincere thanks to Springer Nature for their help with proofreading the contributed papers and preparing this proceedings volume. All the best to you.

Salamanca, Mexico
March 2023
Yuriy S. Shmaliy
Conference Chair
CCIE2023 Organizing Committee
Keynote Speakers

Prof. Pascal Lorenz, University of Haute-Alsace, France
Dr. Salin Mikhail Borisovich, Institute of Applied Physics of the Russian Academy of Sciences, Russia
Prof. Xin Xu, Wuhan University of Sciences and Technology, China
Prof. Hongjun Wang, Southwest Jiaotong University, China
Dr. Mohsen Kazemiann, Yazd University, Iran
Prof. Jun Gou, University of Electronic Science and Technology of China, China
Conference Chairs

Prof. Yuriy S. Shmaliy, IEEE Fellow, University of Guanajuato, Mexico
Prof. Mamoun Alazab, Founder and Chair of IEEE Northern Territory Subsection, Charles Darwin University, Australia
Co-chairs

Prof. Mark Burgin, (AMS, ACM, IEEE), UCLA, USA
Prof. Sudan Jha, SM-IEEE, EIC, Principal Scientist, Department of Computer Science & Engineering, Kathmandu University, Nepal
Distinguished Prof. Xiao-Jun Wu, (Member of CCF, ACM, IEEE), Deputy President of Jiangsu Association of Artificial Intelligence, Jiangnan University, China
Editors

Prof. Anand Nayyar, (IEEE Senior Member, ACM Senior Member), Duy Tan University, Vietnam
Prof. Wei Wei, (IEEE Senior Member, ACM Member, CCF Member), Xi'an Jiaotong University, China
Prof. Jixin Ma, University of Greenwich, UK
Dr. Josh Sheng, China University of Geosciences (Wuhan), China
Program Chairs

Prof. Imran Ullah Khan, (IEEE Senior Member), Harbin Engineering University, China
Prof. Qingdang Li, Qingdao University of Science & Technology, China
Prof. Dalin Zhang, Aalborg University, Denmark
Dr. Jonathan Yang Liu, CEO of Shen Zhen GLI Technology Limited, China
Prof. Huiyu Zhou, University of Leicester, UK
Prof. Zhenkai Zhang, Jiangsu University of Science and Technology, China
Publicity Chairs

Prof. Vijayakumar Varadarajan, University of New South Wales, Australia
Dr. Jack Feng, Caterpillar Inc., USA
Prof. A. Mitra, (IEEE Senior Member), UIT, University of Burdwan, India
Prof. Kei Eguchi, Fukuoka Institute of Technology, Japan
Dr. Cosimo Ieracitano, University Mediterranea of Reggio Calabria (UNIRC), Italy
Conference Secretary

Prof. Yang Bai, China University of Geosciences (Wuhan), China
International TPC Members

Dr. Bochun Wu, (IEEE Member), Fudan University, China
Prof. Sergey D. Kuznetsov, Ivannikov Institute for System Programming of the Russian Academy of Sciences, Russia
Prof. Shahrel Azmin Suandi, Universiti Sains Malaysia, Malaysia
Prof. Wei Quan, Changchun University of Sciences and Technology, China
Prof. Hiranmayi Ranganathan, Arizona State University, USA
Prof. Antonina Dattolo, University of Udine, Italy
Prof. Fu-quan Zhang, Minjiang University, China
Dr. Pallavi Meharia, Apple Inc., USA
Prof. Yi Jin, Shanghai University, China
Prof. Zsolt János Viharos, (IEEE Member), Laboratory on Engineering and Management Intelligence (EMI), Hungary
Prof. A P Liu, China University of Geosciences, China
Prof. Juan I. Arribas, University of Valladolid, Spain
Prof. György Dán, KTH Royal Institute of Technology, Sweden
Prof. Peyman Ayubi, Islamic Azad University, Iran
Dr. Di Yuan, Harbin Institute of Technology, China
Prof. Bin Zhou, School of Sciences, Southwest Petroleum University, China
Prof. Shailendra S. Aote, Shri Ramdeobaba College of Engineering & Management, India
Dr. Arti Jain, Jaypee Institute of Information Technology, India
Prof. Hanuman Verma, Rohilkhand University, India
Eng. Maria Augusta Soares Machado, IBMEC, Brazil
Prof. Subarna Shakya, Tribhuvan University, Nepal
Dr. Jayant Jagtap, Symbiosis International (Deemed University), India
Prof. Nurbek P. Saparkhojayev, Khoja Akhmet Yassawi International Kazakh-Turkish University, Kazakhstan
Prof. Ripon Patgiri, National Institute of Technology Silchar, India
Dr. Payungsak Kasemsumran, Maejo University, Thailand
Prof. Huaming Jin, Wuhan University, China
Prof. Shuiming He, China University of Geosciences, China
Prof. Md. Milon Islam, Khulna University of Engineering & Technology (KUET), Bangladesh
Prof. Deepak Laxmi Narasimha, University of Malaya, Malaysia
Prof. Rongshan Chen, China University of Geosciences, China
Prof. Tom Pfeifer, Technische Universität Berlin, Germany
Prof. Marius Portmann, University of Queensland, Australia
Prof. D. Puccinelli, University of Applied Sciences of Southern Switzerland, Switzerland
Dr. Francky Catthoor, Interuniversity Microelectronics Centre (IMEC), Belgium
Prof. Yingjie Yang, De Montfort University, UK
Prof. Ilario Filippini, Politecnico di Milano, Italy
Prof. Amit Kumar, Chitkara University, Punjab, India
Dr. Cong Pu, Marshall University, USA
Dr. Rashid Ali Laghari, Harbin Institute of Technology, Harbin, China
Prof. K. P. Sanal Kumar, Rashtreeya Vidyalaya College of Engineering, India
Prof. Asif Ali Laghari, Sindh Madressatul Islam University, Pakistan
Dr. Ayush Dogra, Council of Scientific and Industrial Research–Central Scientific Instruments Organisation, India
Prof. Reinhard Langmann, University of Applied Sciences Düsseldorf, Germany
Prof. Harald Jaques, University of Applied Sciences Düsseldorf, Germany
Prof. Honggui Li, Yangzhou University, China
Prof. Kei Eguchi, Fukuoka Institute of Technology, Japan
Prof. Jinpeng Chen, Beijing University of Posts and Telecommunications, China
Prof. Xingsen Li, Guangdong University of Technology (GDUT), China
Prof. Asit Kumar Das, Indian Institute of Engineering Science and Technology, India
Prof. Lu Leng, Nanchang Hangkong University, China
Prof. S. Geetha, VIT University, India
Prof. M. Alojzy Kłopotek, Institute of Computer Science, Polish Academy of Sciences, Poland
Prof. K. Thippeswamy, Visvesvaraya Technological University, India
Prof. Tausif Diwan, Indian Institute of Information Technology, India
Prof. Zhenghao Shi, Xi'an University of Technology, China
Dr. Paulo Gil, Universidade NOVA de Lisboa, Portugal
Contents
Electrical, Electronic Engineering and Automation

Statistical Studies of Acoustic Signals Measurements in Diagnostics of Power Transformers
Mariia Volchanina, Anton Gorlov, Anton Ponomarev, and Andrey Kuznetsov

Study of the Resonance Phenomena in Three-Phase Power Lines Supplying Non-traction Railway Consumers
Tatyana Kovaleva, Olga Komyakova, Natalia Pashkova, Amanzhol Chulembaev, and Aleksandr Komyakov

The Design and Implementation of Low Voltage DC Carrier Chip and System
Wenbin Duan and Yinghong Tian

Fault Location of HVDC Transmission Line Based on VMD-TEO
Yunhai Hou and Yubing Zhao
Design of Switching Power Supply for Micro Arc Oxidation Process
Dan Lei, Xuan Zheng, Shulong Xiong, and Tan Ding
Condition Monitoring System of Key Equipment for Beamline in HLS Based on Multi-source Information Fusion
Qun Liu, Xiao Wang, Liuguo Chen, Bo He, and Gongfa Liu
A High Performance Control Method of PMSM
Yan Xing and Zhihui Li

A Dynamic Association Analysis Model Based on Maintenance Sequence for Railway Traction Power Supply System
Kaiyi Qian, Xiaoguang Wei, Jianzhong Wei, Wenhao Li, Xingqiang Chen, and Bo Li
Efficiency Optimization Control of PMSM Based on a Novel LDW_PSO Over Wide Speed Range
Chong Zhou and Kun Mao
HSKP-CF Algorithm Based on Target Tracking for Mobile Following Robot
Yuecong Zhu, Xiaomin Chu, Yu Wang, Yunshan Xu, and Kewei Chen
Application of Fuzzy Fractional-Order PID Control for Single-Degree-of-Freedom Maglev Systems
Tengfei Liu, Qilong Jiang, Yu Luo, and Tong Liu
Exploration of Adaptive Principle EMI Filter
Jianchang Ou, Mengyuan Lv, and Li Zhai

Research on Voltage Drop Detection Algorithm of Dynamic Voltage Restorer
Yunhai Hou and Xin Ke

Robust Variable Structure PID Controller for Two-Joint Flexible Manipulator
Yecheng Long, Jia You, Yiyang Huang, Zhenliang Chen, and Mengxin Shi

Methodology for Forecasting the Electricity Losses for Train Traction
Sergey Ushakov, Mikhail Nikiforov, and Aleksandr Komyakov

Study on a Denoising Algorithm of Improved Wavelet Threshold
Yunhai Hou and Yu Ren

Research and Practice of Protocol Conversion in Comprehensive Automation Transformation
Wu Xia, Xianggui Tian, Kunyan Zhang, Ming Lei, Qing Ye, and Chunhong Wu

Research on Interaction Potential of Electric Vehicles in Power Grid Dispatching and Operation Scenarios
Peng Liu, Xingang Yang, Boyuan Cao, Zhenpo Wang, and Encheng Zhou
Fault Location Method Based on Nearest Neighbor Clustering in Distribution Network
Ling Wang, Bin Tai, Shu Zhang, Hao Chen, and Yunhao Zhang
Research and Application of Rule Engine Technology and Data Distribution Method of Power Internet of Things Based on Flink
Qing Liu, Yu Yan, Wenbin Wang, and Xiaochao Wang
Analytical Calculation of Self-elevation (Self-supporting) Crossing Frame External Load
Jipeng Tang, Wenjie Li, Yong Ma, Qiang Shi, and Nianpeng Wu

Charging Strategy of Electric Vehicle Based on Multi-dimensional Characteristics
Peng Liu, Boyuan Cao, XinGang Yang, Zhenpo Wang, and Encheng Zhou

Multistability Behaviors and Adaptive Sliding Mode Synchronization of Fractional-Order Chua's Circuit Based on Coupled Memristors in Flux-Charge Domain
Buwei Wu, Yongbing Hu, Weifeng Xiang, and Busen Gao

Researches on Control Strategy of Household Energy Based on Demand Response
Yuchun Chen, Yuhang Liu, Yunkai Zhang, Ran Zhang, Zhongjian Chu, and Feng Fu

Development and Research Progress of Crystal Oscillator
Yongjie Xue, Yang Zhang, and Huaping Xiang

Design of a Long-Term Control Management with Xilinx Processor and Memories Hibernation
Ying Zhang and Di Peng

An Evaluation Method to the Record Text for the Defects of the Relay Protection Devices
Shaoming Zheng, Zhongshuo Liu, Peng Dong, Xinping Yang, Yiting Yu, Shuhong Wang, Chang Tao, and Ancheng Xue
Analysis of the Basic Configuration of the Power System of Pure Electric Bus
Xiaoshan Yao, Guang Yang, Jie Ma, Liang Hu, and Xiaoyu Cao
A Kind of Reconfigurable Memristor Circuit Based on Asynchronous Sequential Logic
Deyong Tan, Xun Fu, Juhong Peng, and Weiming Yang
Artificial Intelligence and Its Applications

Transfer Learning for Computational Chaos System Based on Pool
Ruiting Lu, Dongji Zhang, Yingying Cui, and Yongping Zhang
Robust Self-training with Label Refinement for Noisy Domain Adaptation
Zhengtao Cao and Shaoyuan Li
Rolling Force Prediction Based on PELM
Jing Yang, Jie Zhang, Yan Ren, Lin Yu, Dong Lu, Xuekang Yang, and Jiahao Zhou

Usage of Single-Camera Video Recording to Measure Sea Surface Roughness with Machine Learning Methods
Mikhail B. Salin and Artem V. Vitalsky

Research on Collision Avoidance Model of Intelligent Vehicle Anthropomorphic Steering Based on Neural Network
Fengyi Sun, Zhiwei Guan, Ruzhen Dou, Guoqiang Wen, Qiang Chen, and Shujian Wang

Prediction of Low-Visibility Events on Expressways Based on the Backpropagation Neural Network (BPNN)
Minghao Mu, Xinqiang Liu, Haisong Bi, Zheng Wang, Jiyang Zhang, Xu Huang, and Jian Wan
Design of Elevator Group Control Scheduling System Based on Reinforcement Learning
Yongxi Yao, Fusheng Zhang, Deren Gu, Xinyi Zhang, and Xu Lee
Age Estimation and Gender Recognition Based on Canny-MobileNetV2
Pengfei Hu, Yu Wang, Fangyan Dong, and Kewei Chen
Analysis of Path Optimization Problem Based on Reinforcement Learning
Aihua Gu, Zhenzhuo Wang, Yue Ran, Mengmeng Li, Shujun Li, Qifeng Xun, and Jian Dong

SegResnet: COVID-19 Detection Method Based on Deep Learning
Xiaoyu Tang, HuiLong Chen, Hui Ye, and Jiayi Feng

A Neural Network Beamforming Method Based on Network Compression Optimization
Xiaohui Yang, Jingfei Jiang, Yunhao Li, Jiang Liu, Yanting Che, and Guo Liu

Research on Classification Methods Based on Improved TF-IDF and Double Hidden Naive Bayes
Yanfen Luo

Adaptive Data Statistics Method Based on Fence Mechanism for Cloud-Edge Collaboration
Changzhi Teng, Weiwei Miao, Xiaochao Wang, Zeng Zeng, Rui Zhang, and Sibo Bi
Edge Learning-Based Efficient Data Imputation of Water Quality
Yongsheng Wang, Zhen Chen, Limin Liu, Jing Gao, Guangwen Liu, and Zhiwei Xu

Monitoring Low-Visibility on the Expressway Based on Multi-channel Convolutional Neural Network
Minghao Mu, Haisong Bi, Xinqiang Liu, Zheng Wang, Chengduo Qian, and Shanshan Ding

Supervised Learning Guided Reinforcement Learning for Humanized Autonomous Driving Following Decision Making
Xuegao Liu and Yue Ren
Signal Processing and Pattern Recognition

Off-Line and Real-Time Estimators for Monocular Position-Based Visual Servoing
Jorge A. Ortega-Contreras, Isaí Espinoza-Torres, Eli G. Pale-Ramón, José A. Andrade-Lucio, Oscar Ibarra-Manzano, and Yuriy Shmaliy

Optical Character Recognition Using Hybrid CRNN Based Lexicon-Free Approach with Grey Wolf Hyperparameter Optimization
Ashish Sharma, Simran Kaur, Sandeep Vyas, and Anand Nayyar

Face Recognition Authentication System with CNN and Blink Detection Algorithm
W. S. Ow, M. A. Ilyas, Nazhatul Hafizah Kamarudin, M. B. Othman, Zuliani Binti Zulkoffli, and Yih Bing Chu

Adaptive Remote Sensing Image Fusion Method Based on Deep Learning
Tongdi He and Shunhu Wang

Automatic Correcting System of Answer Sheet Based on Image Recognition
Liang Wang, Limei Qin, Shumei Huang, Xin Liu, Junjie Chen, and Chunming Wei

Textil-5k: A Real-World Dataset for Textile Surface Defects Detection
Fangsheng Shu and Zengbo Xu

An Efficient Specific Pedestrian Tracking Fusion Algorithm
Jinquan Li, Meng Yu, Shanshan Xie, and Yifan Cai

An Automatic Segmentation Fitting Method for Freehand Sketches Based on Greedy Strategy
Xiuli Zhang, Lei Chen, and Zhong Wan
Research on Recognition Method of Cashmere and Wool Mixed Fiber Based on DFS-YOLOv5
Can Zeng, Youna Liu, Jinhua Zhu, Fangyan Dong, and Kewei Chen

Improved ViT: Rethinking the Self-attention of Vision Transformer
Rongshan Hu, Yiquan Wu, Tingting Huang, Shuqiang Yang, and Huafeng Qin

Line Shape Recognition Based on High Resolution Image Road Extraction
Chuanli Kang, Jiale Yang, Siyao Zhang, and Suqi Peng
Radial Based Dental Arch Computing and Optimization
Mengyao Shi, Benxiang Jiang, Songze Zhang, and Hongjian Shi
Aerial Object Tracking Using Filtering Methods
E. G. Pale-Ramon, Y. S. Shmaliy, L. J. Morales-Mendoza, M. González-Lee, J. A. Ortega-Contreras, and K. J. Uribe-Murcia
Segmentation and Detection of Industrial Surface Defect Based on Deep Neural Network
Yi Hou and Songrong Qian
Dynamic Gesture Recognition for Data Glove Based on Multi-stream One-dimensional Convolution
Zhenyu Hu, Jie Shang, and Xun Wang
Research Progress of Semantic Image Synthesis
Binyao Yan, Xiangjun Zhao, Hao Zheng, Zhilin Sun, and Jie Sun

An Improved Conditional Euclidean Clustering Point Cloud Segmentation Method
Hui Li, Tan Meng, Xiumei Zhang, Junjie Wei, Yumin Ma, and Yue Liu

Chinese Character Writing Evaluation System Based on Image Processing
Renwei Li, Zheyan Zhang, Weiwei Shi, Fangyan Dong, and Kewei Chen

Fabric Edge Cutting Algorithm Based on Multiscale Feature Fusion
Mengtian Wang, Maosen Wang, Jun Liu, Shaozhang Niu, Wen Zhang, and Jiaqi Zhao

Research on Vegetation Classification Algorithm Based on Hyperspectral Image
Yong Liu, Chunxia Liu, Zhaolong Wang, Xiang Xu, and Jian Jin
Communication, Information, and Networking

Research on User Redundancy Replication Framework Supporting High Reliability
Shuhua Mao
Research on Presentation Generation Method of Credential Selective Disclosure in Self-Sovereign Identity
Yu Qi, Jiarui Zhang, and Han Zhang
BSDIFF Difference Algorithm Based on LZMA2 for In-Vehicle ECUs
Zhihao Li, Guihe Qin, Yanhua Liang, Danzengouzhu, and Lifeng Yang
Research on Reliable Transmission Based on UDP in Industrial SDN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Tengbin Lin, Die Liu, and Jiuqiang Xu
727
Radio Spatiotemporal Pheromone for RSS-Based Device-Free Activity Recognition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Zining Huang, Chunhai Luo, Yirong Zheng, Kaide Huang, Zhiyong Yang, and Yong Yang A Network Intrusion Detection Model Based on Static Property Training and Dynamic Property Correction . . . . . . . . . . . . . . . . . . . . . . . . . Dongqing Jia and Xiaoyang Zheng Research on Passive Receiver Multi-channel Pairing Algorithm . . . . . . . Tao Huang
745
757 765
Flow Monitoring Alarm Module Application for In-Vehicle CAN Bus Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Wanning Liu, Guihe Qin, Lifeng Yang, and Yanhua Liang
773
Research on 5G LAN in Fixed Mobile Converged Network Service . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Feifei Wang, Ying Sun, and Yaping Cao
781
Performance Evaluation of Decode and Forward Cooperation in WSNs in Indoor Environments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Jianjie Tian, Yulong Shang, and Tianjiao Zhang
793
An Optimized Security Sensing Method Based on Pignistic Evidence Distance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Xuemei Yang and Chunyan Bi
805
Wireless Communication Channel Management for 6G Networks Applying Reconfigurable Intelligent Surface . . . . . . . . . . . . . . . Ashish K. Singh, Ajay K. Maurya, Ravi Prakash, Prabhat Thakur, and B. B. Tiwari
817
Fault Diagnosis, Simulation and Optimization Techniques

Research on Causal Model Construction and Fault Diagnosis of Control Systems . . . 833
Qun Zhu, Zhi Chen, Jie Liu, Zhuoran Xu, Juan Wen, Jinghua Yang, and Yifan Jian
Research on Fault Diagnosis Method Based on Structural Causal Model in Tennessee Eastman Process . . . 851
Haoyuan Pu, Jie Liu, Zhi Chen, Xiaohua Yang, Changan Ren, Zhuoran Xu, and Yifan Jian
Fault Prognosis of Nuclear Reactor Make-Up Pump Based on AMESim . . . 865
Haotan Li, Zhi Chen, Xuecen Zhao, Yuan Min, and Yifan Jian
D2D Cooperative MEC Multi-objective Optimization Based on Improved MOEAD . . . 881
Qifei Zhao, Gaocai Wang, Zhihong Wang, and Yujiang Wang
Research on Feature Space Migration Fault Diagnosis for Missing Data Signals . . . 897
Ying Zhang, Tingwei Peng, and Ruimin Luo
A Performance Comparison Among Intelligent Algorithms for Solving Capacitated Vehicle Routing Problem . . . 909
Jingchen Li and Jie Tang
Life Evaluation and Optimization of Equipment in the Production Systems . . . 917
Peng Yu
Intelligent Fault Diagnosis of Rotating Machinery Based on Improved RseNet and BiGRU . . . 929
Zeyu Ye, Xiaoyang Zheng, and Chengyou Luo
Research on Micromouse Maze Search Algorithm Based on Centripetal Algorithm . . . 937
Liang Lu, Xiaoxi Liu, Pengyu You, and Zhe Zhang
Optimal Design of Distributed Fault Diagnosis for Electro-Hydraulic Control System of Mountain Micro Pile Drilling Rig . . . 949
Lei Jiang, Yongxing Zou, Baiyi Zhu, and Ruijun Liu
The Crawler Strategy Based on Adaptive Immune Optimization . . . 957
Yang Liu and Zhaochun Sun
Test Design of Sea Based Intelligent Missile Networking in the Proving Ground . . . 969
Ao Sun, Yan Wang, and Zejian Zhang
Wastewater Quality Prediction Model Based on DBN-LSTM via Improved Particle Swarm Optimization Algorithm . . . 979
Jian Yang, Qiumei Cong, Shuaishuai Yang, and Jian Song
Research on Autonomous Collision Avoidance Method of Typical General Aviation Aircraft Based on Cognitive System . . . 991
Jie Zhang, Xiyan Bao, and Hanlou Diao
A Study on Platform of Intelligent Metering Builds on Cloud Computing . . . 999
Xinyi Li, Hang Sun, Shijie Wen, Chen Chen, Hongzhong Zhang, and Longhai Deng
Intelligent Planning of Battlefield Resources Based on Rules and Capability . . . 1007
Chuanhui Zhang, Qianyu Shen, Peiyou Zhang, and Jiping Zheng
New Features Extraction Method for Fault Diagnosis of Bearing Based on Legendre Multiwavelet Neural Network . . . 1019
Chengyou Luo, Xiaoyang Zheng, Dongqing Jia, and Zeyu Ye
Research on Slide-Stainer Layout Dynamic Optimization . . . 1027
Debin Yang, Bingxian Liu, Kehui Wang, Kun Gui, Kewei Chen, and Fangyan Dong
Preparation of Nano-Diamond Thin Film by Single-Screw Extrusion Structure 3D Printer . . . 1037
Xiuxia Zhang, Kewang Li, Jinquan Chu, and Shuyi Wei
Design of Automobile Lubricating Oil Storage Tank with Pretreatment Chamber . . . 1043
Haiyu Li
Research on Multi UAV Monitoring Interface Based on "Task-Situation-Operator" . . . 1051
Hao Zhang, Fengyi Jiang, Zefan Li, Qingbiao Kuang, and Yuan Zhong
Resource Preallocation Based on Long and Short Periods Combined Task Volume Prediction . . . 1063
Senhao Zhu
A Loss Management Method for the Spaceborne Multi-control Processing . . . 1071
Ying Zhang, Feng Tian, Cunsheng Jiang, Shihui Wang, and Yizhu Tao
Reliability Simulation of Phased-Mission System with Multimode Failures . . . 1079
Zhicheng Chen and Jun Yang
Algorithm for the Operation of the Data-Measuring System for Evaluating the Inertial-Mass Characteristics of Space Debris . . . 1087
A. V. Sedelnikov, M. E. Bratkova, and E. S. Khnyryova
Electrical, Electronic Engineering and Automation
Statistical Studies of Acoustic Signals Measurements in Diagnostics of Power Transformers
Mariia Volchanina, Anton Gorlov, Anton Ponomarev, and Andrey Kuznetsov
Abstract The article presents statistical studies of acoustic signal measurements made when diagnosing power transformers of the railway power supply system. Statistical processing of acoustic monitoring data was carried out on the example of transformers with different levels of insulation condition. Histograms of the experimental distributions of signal amplitudes and dominant frequencies are compared with the nearest theoretical distribution laws; the comparisons were performed in the STATISTICA program using control data obtained from the automated system DADS-16 (Digital Acoustic Diagnostics System). The performed studies have shown a close correlation between defects registered by the acoustic method and the distribution of the signals, expressed as laws of distribution of random variables. It is shown that for power transformers with mechanical oscillations, both during the passage of the train and at idle, the distribution of amplitudes and dominant frequencies of the recorded signals corresponds to a uniform law; the distributions are not centered around a particular average value. For power transformers in which partial discharges are present, due to the degradation of the winding insulation under the influence of high voltage, the best approximation of both amplitudes and dominant frequencies is a lognormal distribution; the signals are centered around a characteristic mean value. When the train passes, the acoustic system registers both high-frequency signals from partial discharges (PD) and low-frequency signals from body vibrations, and the distribution law then contains two components: uniform and lognormal distribution densities. Thus, from the type of distribution of the recorded signals, their amplitude and dominant frequency, it is possible to determine the insulation defects of power transformers.
Keywords Electric Rail Transport · Power Transformers · Acoustic Method · Partial Discharges · Insulation Diagnostics · Statistical Studies · Signal Parameters
M. Volchanina · A. Gorlov · A. Ponomarev · A. Kuznetsov (B) Department of Theoretical Electrical Engineering, Omsk State Transport University, Omsk 644046, Russia e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 Y. S. Shmaliy and A. Nayyar (eds.), 7th International Conference on Computing, Control and Industrial Engineering (CCIE 2023), Lecture Notes in Electrical Engineering 1047, https://doi.org/10.1007/978-981-99-2730-2_1
1 Introduction

When diagnosing power transformers by instrumental methods, the issue of reliable interpretation of the recorded signals remains relevant. In addition, it is important to identify various defects from the recorded signals. The paper deals with the processing of acoustic control signals of power transformers. It is proposed to identify the distribution laws of the random variables represented by the parameters of the recorded signals. Informative parameters in the recognition of acoustic signals are the signal amplitude range, dominant frequency, rising edge, pulse duration, pulse attenuation, number of oscillations, signal activity, etc. The number of recorded pulses can reach hundreds and thousands of elementary signals, which in the general case are random.

The analysis of domestic and foreign literature revealed many works devoted to methods for diagnosing high-voltage equipment [1–3]. These works consider systems for collecting and processing acoustic monitoring data of power transformers. The paper [1] presents a genetic algorithm for locating the source of partial discharges. The article [2] considers the design of an electrode for partial discharge location simulation in an oil-insulated power transformer and the application of the AE method. Different electrodes generate signals of different shapes, and the task is to recognize and describe the parameters of various defects. One method for describing the parameters of acoustic signals, based on the wavelet Laplace function, is given in [3].

Statistical methods of data processing and modeling of signals during the operation of the railway transport power supply system are given in [4–7]. These articles consider statistical studies of signals in power supply systems [4]. The use of artificial neural networks and fuzzy logic to identify the operating modes of the railway power supply system is described in [5].
Technology for reducing the consumption and losses of electrical energy in the power supply systems and transformers is given in the article [6]. Methods for modeling the propagation of wave processes in power supply systems are shown in [7].
2 Materials and Methods

The article proposes an approach to processing distribution laws in order to identify differences between acoustic signals from partial discharges and low-frequency signals from vibration of the mechanical parts of power transformers during operation. Statistical processing of acoustic monitoring data was performed on the example of transformers with different insulation conditions. The studies were carried out on the input transformer T2 with a capacity of 16 MVA at a traction substation in normal technical condition. Measurements made using the acoustic control system showed the absence of partial discharges in the transformer, both during the passage of the train and at idle. Insulation defects accompanied by the occurrence of partial discharges were not found at high voltage [8].

To test the operability of the DADS-16 installation, acoustic waves were excited by knocking with a metal striker on the transformer case from different sides relative to the installation site of the measuring transducers. Four measuring transducers (sensors) were installed on the outer walls of the transformer, forming an acoustic antenna, as shown in Fig. 1. From the difference in signal arrival times, it is possible to determine the coordinates of the source of partial discharges (PD).

Fig. 1 Location of sensors on the transformer housing

The DADS-16 installation is a microprocessor unit connected to a personal computer via a USB interface, to which four acoustic sensors are connected (the number of sensors can be increased to 16). The length of the connecting cables of the sensors is 10 m (the maximum length is 100 m) [9]. Figures 2, 3 and 4 show the operating modes of the program, "Total score", "Signal shape" and "Histogram of dominant signal frequencies", when the striker acts to the left of the installed sensors. The tapping was carried out for 30 s, as shown in Fig. 2, where the horizontal axis is the time in seconds and the vertical axis is the accumulated number of signals. Figure 3 shows a typical tapping waveform. The signal frequency is low, and the amplitude swing is large: from 244 to 1265 ADC units (from 45.6 to 59.9 dB). When tapping on the center of the wall of the power transformer tank, all four sensors should produce a signal; however, in the experiment the signal from the third and fourth sensors was not observed (34 ADC units) due to poor contact with the case. Figure 4 shows a histogram of the distribution of the maximum amplitudes of all 100 registered events. The horizontal axis shows the signal amplitudes in decibels, and the vertical axis shows the number of amplitudes of a given level.
In this case, the total number of events across the columns is 100.

Fig. 2 The total score of the signals "Knock on the left"
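The arrival-time localization idea described above can be sketched numerically. This is only an illustration under stated assumptions: the sensor coordinates, the acoustic wave speed, and the grid-search solver are hypothetical and are not the DADS-16 geometry or algorithm.

```python
import numpy as np

# Four sensors forming an "acoustic antenna" on a wall, as in Fig. 1.
# Coordinates (in metres) and wave speed are illustrative assumptions.
sensors = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
v = 1400.0  # assumed acoustic wave speed, m/s

def arrival_times(src):
    """Travel time from a source point to each sensor."""
    return np.linalg.norm(sensors - src, axis=1) / v

def locate(tdoa, step=0.01):
    """Grid search for the point whose arrival-time differences
    (relative to sensor 0) best match the measured ones."""
    best, best_err = None, np.inf
    for x in np.arange(0.0, 1.0 + step, step):
        for y in np.arange(0.0, 1.0 + step, step):
            t = arrival_times(np.array([x, y]))
            err = np.sum((t - t[0] - tdoa) ** 2)
            if err < best_err:
                best, best_err = np.array([x, y]), err
    return best

true_src = np.array([0.30, 0.70])   # simulated PD source position
t = arrival_times(true_src)
estimate = locate(t - t[0])         # only time differences are used
print(estimate)
```

Only the differences of arrival times enter the search, which is why the absolute emission time of the pulse does not need to be known.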
6
M. Volchanina et al.
Fig. 3 Signal form “Knock on the left”
Fig. 4 Histogram of dominant frequencies of the “Knock on the left” signals
The tapping to the left of the sensors was carried out for 30 s, as shown in Fig. 2. The tapping was performed with two pauses of 3 s each, during which there were no signals. Figure 3 shows the operation of the first and second sensors, which the acoustic wave reaches earlier, while the third and fourth sensors do not produce a signal, since the duration of the signal recording after the event is less than the time it takes the sound wave to reach them. Figure 4 shows a histogram of the dominant frequencies of the recorded signals; the signal amplitudes vary from 26 to 64 dB. Dominant frequencies are determined from the amplitude-frequency spectra of the recorded acoustic signals obtained by the Fast Fourier Transform (FFT) method. It follows from the figure that the frequencies of the registered pulses are distributed relatively uniformly in the low-frequency range of the transducers used. There are no characteristic frequencies corresponding to partial discharges. From the data presented in Figs. 2, 3 and 4, it follows that for each event a visual analysis is possible, confirming the presence of precisely a mechanical impact (knock) and its parameters. In addition, from the difference in the arrival times of signals at the acoustic antenna sensors mounted on the transformer case, it is possible to establish the coordinates of the proposed location of the source.
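Extracting a dominant frequency from an FFT amplitude spectrum, as described above, can be illustrated on a synthetic record. The sampling rate and tone frequencies below are assumptions for the sketch, not DADS-16 data.

```python
import numpy as np

# Synthetic record: a strong 120 kHz "PD-like" tone plus a weaker
# 5 kHz vibration component (both frequencies fall on exact FFT bins).
fs = 1_000_000                 # assumed sampling rate, samples/s
n = 2000
t = np.arange(n) / fs
signal = np.sin(2 * np.pi * 120_000 * t) + 0.3 * np.sin(2 * np.pi * 5_000 * t)

spectrum = np.abs(np.fft.rfft(signal))   # one-sided amplitude spectrum
freqs = np.fft.rfftfreq(n, d=1 / fs)
dominant = freqs[np.argmax(spectrum)]    # dominant frequency, Hz
print(dominant)  # 120000.0
```

Repeating this per recorded pulse yields the sample of dominant frequencies from which histograms like Fig. 4 are built.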
Statistical Studies of Acoustic Signals Measurements in Diagnostics …
7
3 Results and Discussion

For signals that do not contain PD, the amplitudes are not concentrated around the average value, so the distribution law of such signals is close to uniform (χ² is 55.1), which is confirmed by Fig. 5, produced in the STATISTICA program from the control data obtained from the automated system DADS-16 [10].

Fig. 5 Comparison of the histogram of dominant frequencies with a uniform distribution law in the presence of mechanical vibrations

Acoustic control for the presence of partial discharges was performed on a power transformer of a traction substation with a capacity of 20 MVA, which is constantly in operation. Partial discharges were detected on the power transformer; their cause is the deterioration of the insulating properties of the windings under high voltage. The measurements were carried out from the station (northern), eastern and southern sides, both during the passage of the train and at idle. In total, 12 measurements were completed, with signal recording times from 30 to 60 s.

The first measurement was taken from the east side of the traction substation transformer. During the measurement time of 60 s, 2347 signals from partial discharges were registered. In terms of signal shape, the partial discharges correspond to the signals described in the publications of other researchers [11–14]. Figure 6 shows the timing signals from four sensors.

Fig. 6 Waveform "Measurement 1" in the presence of PD

A histogram of the amplitudes of the recorded signals of the transformer in working condition was obtained. In contrast to the regime without electric discharges, a grouping of signals in the range from 41 to 44 ADC units is observed. Using the STATISTICA program, the histograms of the experimental distribution of signal amplitudes were compared with the closest theoretical distribution law. The best approximation was a lognormal distribution, with a χ² test value of 18.23 (Fig. 7).

Fig. 7 Amplitude histogram and lognormal distribution law in the presence of PD

A histogram of dominant signal frequencies in the presence of PD was also obtained. In the STATISTICA program, the histograms of the experimental distribution of signal frequencies were compared with the closest theoretical distribution law. The distribution law for dominant frequencies also corresponds to the lognormal theoretical law (Fig. 8). The χ² parameter takes on a fairly large value of 1744, which is explained by the large data sample of 2345.

Fig. 8 Dominant frequencies histogram and lognormal distribution in case of PD

When the train passes, the number of PDs per unit time begins to decrease, which can be explained by the presence of oscillations transmitted to the transformer case and their impact on the PD source. During measurement No. 8, partial discharges and vibrations from the impact of a passing train were registered simultaneously [15, 16]. The histogram was compared with a uniform distribution law and with a lognormal distribution law (the χ² parameter takes values of 536 and 652, respectively). However, this distribution contains two components, close to lognormal and uniform: the first characterizes the presence of PD, and the second characterizes the low-frequency vibrations of the transformer case during the passage of a train [9, 10].
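The histogram-versus-theoretical-law comparison performed here in STATISTICA can be reproduced in outline. The sketch below fits a lognormal law by log-moments to a synthetic sample and compares its χ² statistic against a uniform law fitted over the same range; the sample parameters (mean, sigma, size) are assumptions chosen only to mimic the setting.

```python
import numpy as np
from math import erf, sqrt

# Synthetic "PD amplitude" sample drawn from a lognormal law.
rng = np.random.default_rng(0)
amplitudes = rng.lognormal(mean=3.7, sigma=0.1, size=2347)

counts, edges = np.histogram(amplitudes, bins=20)
N = len(amplitudes)

# Fit a lognormal law by log-moments and compute the chi-square statistic:
# observed bin counts versus counts expected under the fitted law.
mu, s = np.log(amplitudes).mean(), np.log(amplitudes).std()
cdf = np.array([0.5 * (1 + erf((np.log(e) - mu) / (s * sqrt(2))))
                for e in edges])
expected_ln = N * np.diff(cdf)
chi2_lognorm = np.sum((counts - expected_ln) ** 2 / expected_ln)

# Compare with a uniform law over [min, max]: equal expected counts per bin.
expected_u = np.full(20, N / 20)
chi2_uniform = np.sum((counts - expected_u) ** 2 / expected_u)

print(chi2_lognorm, chi2_uniform)  # lognormal gives the much smaller statistic
```

The smaller χ² for the lognormal fit mirrors the paper's finding that PD signals cluster around a characteristic mean, while a uniform law fits poorly.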
4 Conclusion

Based on the above, the following conclusions can be drawn.

1. The automated system of acoustic diagnostics DADS-16 with standard software allows registering partial discharges and vibrations of the transformer case and determining the quantitative characteristics of PD and vibrations.
2. For power transformers with mechanical oscillations, both during the passage of the train and at idle, the distribution of amplitudes and dominant frequencies of the recorded signals corresponds to a uniform law. The distribution of amplitudes and dominant frequencies is not centered around a specific mean.
3. For power transformers containing partial discharges, caused by the deterioration of the insulating properties of the windings under the action of high voltage, the best approximation of both amplitudes and dominant frequencies is a lognormal distribution. The signals are centered around a clearly pronounced mean value.
4. When the train passes, the acoustic system registers both high-frequency signals from PD and low-frequency signals from body vibrations. The distribution law then contains two components: uniform and lognormal distribution densities.

Thus, an additional parameter when diagnosing power transformers is the distribution law of the registered signals. From the distribution law, it is possible to distinguish the presence of partial discharges, the simultaneous presence of partial discharges and oscillations during the passage of the train, and the absence of partial discharges in the presence of mechanical oscillations and vibrations of the transformer housing. These signs, as additional parameters, can be used in automated diagnostic systems to increase the reliability of the diagnosis of the current state of the power transformer.
A further development of the proposed method is to check the registered signals for outliers in order to form homogeneous samples, which will increase the reliability and reproducibility of the results of the statistical data analysis.

Acknowledgements The study was carried out with the financial support of the Russian Science Foundation within the framework of scientific project No. 23-29-00477.
References

1. Liu, H.L.: Acoustic partial discharge localization methodology in power transformers employing the quantum genetic algorithm. Appl. Acoust. 102, 71–78 (2016)
2. Jitjing, P., Suwanasri, T., Suwanasri, C.: The design of electrode for partial discharge location simulation in oil insulated power transformer and the application of AE method. Proc. Comput. Sci. 86, 289–292 (2016)
3. Guillen, D., Idarraga Ospina, G., Mombello, E.: Partial discharge location in power transformer windings using the wavelet Laplace function. Electric Power Syst. Res. 111, 71–77 (2014)
4. Cheremisin, V.T., Erbes, V.V.: Nonparametric statistical approach to evaluating the effectiveness of energy-saving devices. In: 14th International Conference on Environment and Electrical Engineering (EEEIC), Krakow, pp. 58–60 (2014)
5. Komyakov, A.A., Nikiforov, M.M., Erbes, V.V., Cheremisin, V.T., Ivanchenko, V.I.: Construction of electricity consumption mathematical models on railway transport used artificial neural network and fuzzy neural network. In: 16th International Conference on Environment and Electrical Engineering (EEEIC), Florence, pp. 1–4 (2016)
6. Cheremisin, V.T., Demin, Y.V., Komyakov, A.A., Ivanchenko, V.I.: Technology for reducing the consumption and losses of electrical energy in the power supply systems of railway consumers. MATEC Web Conf. 239, 1–7 (2018)
7. Kovaleva, T.V., Komyakova, O.O., Pashkova, N.V., Komyakov, A.A.: AC traction network simulation with the wave processes in the SimInTech environment. Smart Innov. Syst. Technol. 272, 417–426 (2022)
8. Cheremisin, V.T., Kuznetsov, A.A., Volchanina, M.A.: Measurement of acoustic signals on power transformer defects simulator. In: Proceedings of the International Conference on Industrial Engineering, Applications and Manufacturing (ICIEAM), Sochi (2021)
9. Serjeznov, A.N.: Acoustic Emission Control of Railway Structures, p. 272. Nauka Publications, Novosibirsk (2011)
10. Borovikov, V.A.: STATISTICA Program for Students and Engineers, p. 198. Computer Press Publications, Moscow (2001)
11. Chulichkov, A.I., Cybulskaya, N.D., Cvetaev, S.K., Surkont, O.S.: Classification of acoustic signals of discharge processes in insulation based on the shape of their wavelet spectra. Moscow Univ. J. (Rus.) Ser. Phys. 3, 103–105 (2009)
12. Markalous, S.M., Tenbohlen, S., Feser, K.: Detection and location of partial discharges in power transformers using acoustic and electromagnetic signals. IEEE Trans. Dielectr. Electr. Insul. 15(6), 1576–1583 (2008)
13. Shahnin, V.A., Chebryakova, Y.S., Mironenko, Y.V.: Statistical characteristics of partial discharges as diagnostic signs of the state of isolation of high-voltage equipment. Control. Diagnost. (Rus.) 2, 59–65 (2015)
14. Maksudov, D.V., Fedosov, E.M.: Methods for selection of partial discharge signals. J. Ufa State Aviat. Techn. Univ. (Rus.) 2(12), 138–143 (2009)
15. Karandaev, A.S., Evdokimov, S.A., Devyatov, D.H., Parsunkin, B.N., Sarlybaev, A.A.: Diagnosis of power transformers by acoustic location of partial discharges. J. MSTU Nosov. 1, 105–108 (2012)
16. Volchanina, M.A., Kuznetsov, A.A., Gorlov, A.V.: Improving the reliability of diagnosing power transformers under conditions of seasonal temperature changes. Elect. Syst. Compl. 4(53), 33–38 (2021)
Study of the Resonance Phenomena in Three-Phase Power Lines Supplying Non-traction Railway Consumers
Tatyana Kovaleva, Olga Komyakova, Natalia Pashkova, Amanzhol Chulembaev, and Aleksandr Komyakov
Abstract The article discusses the modeling of electric energy quality indicators for non-traction railway consumers. Harmonic pollution occurs not only in the traction network but also on the side of the network winding of the converter transformers of traction substations. The purpose of this study is to evaluate the values of high-order voltage harmonics, taking into account resonant phenomena, for consumers powered by traction substations of railway transport via three-phase 10 kV power lines. The article focuses on high-order harmonics, whose presence negatively affects the operation of modern electrical equipment with switching power supplies. To analyze resonance phenomena in three-phase circuits, a mathematical apparatus for calculating circuits with distributed parameters, implemented in the MathCAD program, was used. The impedance-frequency characteristics are compiled, and the local maximum and minimum points corresponding to the resonance of currents and voltages are determined. The obtained results are confirmed by simulation modeling of the real power supply system of the West Siberian Railway in the SimInTech program. The calculation results show that the 1250 Hz voltage harmonic is amplified 16-fold at the end of the supply line. This requires the use of filters for consumers that are critical to the level of higher harmonics in the network.

Keywords Harmonic pollution · Railway power supply system · Harmonic resonance · Traction substation · Impedance
T. Kovaleva · O. Komyakova · N. Pashkova · A. Chulembaev · A. Komyakov (B) Department of Theoretical Electrical Engineering, Omsk State Transport University, Omsk 644046, Russia e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 Y. S. Shmaliy and A. Nayyar (eds.), 7th International Conference on Computing, Control and Industrial Engineering (CCIE 2023), Lecture Notes in Electrical Engineering 1047, https://doi.org/10.1007/978-981-99-2730-2_2
1 Introduction

Traction networks of railway transport are characterized by significant harmonic pollution [1], which is typical for AC 27.5 kV and DC 3 kV systems. To improve the quality of electricity in the traction network, smoothing filters, reactive power compensation devices with harmonic filtering, traction transformers with integrated filters [2, 3], etc. can be used. In a DC 3 kV system, harmonic pollution is associated with the operation of rectifiers at traction substations. Additional harmonics may arise due to the operation of the DC-AC traction drive system [4, 5]. Harmonic pollution occurs not only in the traction network but also on the side of the network winding of converter transformers. The studies [6–12] focus on low-order harmonics with frequencies up to 550–650 Hz.

DC traction substations of railway transport in Russia are characterized by the fact that third-party consumers are also fed from the mains voltage AC 10 kV. These can be non-traction railway consumers (for example, automatic blocking or a locomotive maintenance company), as well as residential buildings and agricultural enterprises. At these facilities, a lot of electrical equipment with switching power supplies has recently been introduced. This includes household electrical equipment (TVs, washing machines with electronic control units), digital machine control systems, etc. Such consumers are critical to the quality of the supply voltage, especially in the spectrum of high-frequency harmonics, starting from 950 Hz and above. An additional feature is that the supply lines of such consumers can be long (up to 40–50 km), which in some cases leads to an increase in the amplitude of high-frequency harmonics [13, 14]. One of the reasons for such overvoltages is resonant phenomena in power lines, which adversely affect the reliability of electrical equipment in the energy infrastructure.
2 Materials and Methods

The purpose of this study is to evaluate the values of high-order voltage harmonics, taking into account resonant phenomena, for consumers powered by traction substations of railway transport via three-phase 10 kV power lines.

The power line is an electrical circuit with distributed parameters. The length of this kind of line has a significant impact on the course of electromagnetic processes, since the finite speed of propagation of these processes must be taken into account. Therefore, the main relations and equations of lines contain two independent variables: time and a spatial coordinate. In addition to the parameters of the line wires distributed along the length, the distributed parameters of the medium are also taken into account. As a rule, the model of a two-wire homogeneous line, the simplest representative of the considered class of circuits, is used as the object of study. The relationships and regularities established for this kind of circuit can be transferred to other types of lines when necessary.
Fig. 1 Equivalent circuit of an elementary section of a three-phase transmission line (series resistances R_a, R_b, R_c and inductances L_a, L_b, L_c per phase, a return wire with parameters R and L carrying 3i_0, and shunt capacitances C_a, C_b, C_c, C_ab, C_bc, C_ca per elementary length Δx)
The basis of the mathematical apparatus for calculating a two-wire line with primary (R_0, L_0, G_0, C_0) and secondary (Z_B, γ) parameters is the telegraph equations, a system of two differential equations for current and voltage. To bring the equations for a three-phase power line to the form of the equations for a two-wire line, a well-known algorithm is used [15]. The equivalent circuit of an elementary section of a three-phase line is shown in Fig. 1. The analysis of electromagnetic processes in a three-phase transmission line can be carried out using equations compiled according to Kirchhoff's laws for an equivalent circuit with complete symmetry of the parameters, in which the following equalities hold: L_ab = L_ba = L_ac = L_ca = L_bc = L_cb = L_0; C_ab = C_ba = C_ac = C_ca = C_bc = C_cb = C; L_a = L_b = L_c = L; R_a = R_b = R_c = R. The value of active conductivity for lines with voltage below 330 kV is assumed to be zero. After mathematical transformations using the method of symmetrical components, the equations compiled according to Kirchhoff's laws are reduced to the following form:
−dU̇/dx = (R + 3R3 + jω(L + 2L0)) İ;  −dİ/dx = jωC0 U̇.   (1)

U̇(x) = ½ (U̇1 + İ1 Z_B) e^(−γx) + ½ (U̇1 − İ1 Z_B) e^(γx);
İ(x) = (1/(2Z_B)) (U̇1 + İ1 Z_B) e^(−γx) − (1/(2Z_B)) (U̇1 − İ1 Z_B) e^(γx).   (2)
Further calculation of both voltage and current in a three-phase transmission line can be carried out as for a two-wire line model. When analyzing resonances in such a line, the input resistance Z BX has a finite value and depends on its length l. The dependence Z BX (l) has maxima and minima. The minimum Z BX (l) corresponds to the voltage resonance, and the maximum corresponds to the current one [16]. Each of the resonances occurs at a certain line length, which is called the resonant length lP . In this case, at the end of the line, the value of voltage or current may exceed their values at the beginning of the line. The resonant length for each harmonic is different.
T. Kovaleva et al.
Fig. 2 Calculated dependence of the input resistance on the harmonics order of the supply voltage
The input resistance of the line depends on the harmonic frequency of the supply voltage and is determined by the expression

Z_BX = Z_B (1 + ρe^(−2γl)) / (1 − ρe^(−2γl)),   (3)
where Z_B is the wave resistance; γ = α + jβ is the propagation coefficient; α is the attenuation coefficient; β is the phase factor; ρ = (Z_H − Z_B)/(Z_H + Z_B) = ρe^(jψ) is the reflection coefficient; Z_H is the load resistance. The minimum and maximum conditions depend on the component e^(−2γl). Expression (3) is transformed into the following form:

Z_BX = Z_B (1 + ρe^(−2αl) e^(−j(2βl−ψ))) / (1 − ρe^(−2αl) e^(−j(2βl−ψ))).   (4)
From expression (4) it follows that Z_BX takes its minimum and maximum values at e^(−j(2βl−ψ)) equal to −1 and 1, respectively. These values correspond to the following function arguments:

2βl_P − ψ = ±nπ at e^(−j(2βl_P−ψ)) = −1 (n = 1, 3, 5, …);
2βl_P − ψ = ±2nπ at e^(−j(2βl_P−ψ)) = 1 (n = 0, 1, 2, …).

The calculated dependence of the input resistance of the investigated 10 kV line, made with a wire having primary parameters R0 = 0.21 Ohm/km, L0 = 1.33·10⁻³ H/km, C0 = 17·10⁻⁹ F/km, 50 km long, with an active-inductive load (R = 68 Ohm, L = 0.162 H), on the ordinal number of the supply voltage harmonic is shown in Fig. 2.
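The dependence Z_BX(f) of Eq. (3) can be reproduced numerically. The following Python sketch (an illustration only; the original study used MathCAD) evaluates the input impedance from the primary parameters and the load quoted above:

```python
import numpy as np

# Line and load parameters quoted in the text (per-km primary parameters).
R0, L0, C0 = 0.21, 1.33e-3, 17e-9    # Ohm/km, H/km, F/km
LINE_KM = 50.0                        # line length, km
R_LOAD, L_LOAD = 68.0, 0.162          # active-inductive load, Ohm and H

def z_input(f):
    """Input impedance Z_BX of the loaded line at frequency f, per Eq. (3)."""
    w = 2 * np.pi * f
    z = R0 + 1j * w * L0              # series impedance per km
    y = 1j * w * C0                   # shunt admittance per km (G0 = 0)
    zb = np.sqrt(z / y)               # wave resistance Z_B
    gamma = np.sqrt(z * y)            # propagation coefficient alpha + j*beta
    zh = R_LOAD + 1j * w * L_LOAD     # load impedance Z_H
    rho = (zh - zb) / (zh + zb)       # reflection coefficient
    e = np.exp(-2 * gamma * LINE_KM)
    return zb * (1 + rho * e) / (1 - rho * e)
```

Sweeping |z_input| over the odd harmonics reproduces the shape of Fig. 2: a pronounced maximum near 450 Hz (current resonance) and a deep minimum near 1250 Hz (voltage resonance).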
3 Results and Discussion

Analysis of the resulting dependence shows the presence of local minima and maxima: the first maximum of the frequency response of the input impedance, corresponding to the current resonance, occurs at a frequency of 450 Hz, while the first minimum, at a frequency of 1250 Hz, corresponds to the voltage resonance.
The local minima and maxima of the input resistance correspond to the resonant lengths l_PH and l_PT: l_PHmin = (π + ψ)/(2β) is the minimum line length at voltage resonance; l_PTmin = ψ/(2β) is the minimum line length at current resonance. The following inequalities make it possible to estimate the possibility of resonances with increased voltages and currents:
η_PH = e^(−αl_PHmin) (1 + ρe^(−2γl_PHmin)) / (1 − ρe^(−2γl_PHmin));
η_PT = e^(−αl_PTmin) (1 − ρe^(−2γl_PTmin)) / (1 + ρe^(−2γl_PTmin)).   (5)
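The minimum resonant lengths introduced above follow directly from the phase factor β and the phase ψ of the reflection coefficient. A short Python sketch (an illustration, not the authors' MathCAD worksheet) using the same line and load parameters:

```python
import numpy as np

R0, L0, C0 = 0.21, 1.33e-3, 17e-9    # per-km primary parameters from the text
R_LOAD, L_LOAD = 68.0, 0.162          # active-inductive load

def resonant_lengths(f):
    """Minimum line lengths (km) for voltage and current resonance at frequency f."""
    w = 2 * np.pi * f
    z, y = R0 + 1j * w * L0, 1j * w * C0
    gamma = np.sqrt(z * y)
    zb = np.sqrt(z / y)
    zh = R_LOAD + 1j * w * L_LOAD
    psi = np.angle((zh - zb) / (zh + zb))  # phase of the reflection coefficient
    beta = gamma.imag                      # phase factor
    l_ph_min = (np.pi + psi) / (2 * beta)  # voltage resonance
    l_pt_min = psi / (2 * beta)            # current resonance
    return l_ph_min, l_pt_min
```

At 50 Hz this yields approximately 1898 km and 876 km, matching the first row of Table 1.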
The multiplicity factors η_PH and η_PT determine the possibility of voltage and current resonances at a given frequency, respectively. Inequalities (5) are satisfied when the effect of superposition of the voltage and current waves prevails over their attenuation. In the MathCAD environment, the resonant minimum line lengths l_PHmin and l_PTmin and the multiplicity factors η_PH and η_PT were calculated as functions of frequency to assess the possibility of voltage and current resonances. The calculation results are shown in Table 1.

Table 1 Calculation results of the resonant minimum line lengths and multiplicity factors

Frequency, Hz | l_PHmin, km | l_PTmin, km | η_PH | η_PT
50            | 1897        | 875         | 2.7  | 0.4
150           | 582         | 232         | 1.1  | 0.9
250           | 319         | 109         | 0.5  | 2
350           | 212         | 62          | 0.2  | 6
450           | 157         | 40          | 0.1  | 6.5
550           | 123         | 27          | 0.4  | 2.5
650           | 101         | 20          | 0.6  | 1.5
750           | 85          | 15          | 1    | 1
850           | 74          | 12          | 1.4  | 0.7
950           | 65          | 9           | 2.1  | 0.4
1050          | 58          | 8           | 3.7  | 0.3
1150          | 52          | 6           | 10.7 | 0.1
1250          | 47          | 5           | 11.5 | 0.08
1350          | 44          | 4           | 3.9  | 0.2

The given technique for analyzing electromagnetic processes, which takes into account the distributed nature of the line parameters, makes it possible to estimate the likelihood of resonances in the lines. For a detailed analysis of electromagnetic processes under resonance conditions at any point of each phase with an unbalanced load, an appropriate model of a three-phase power transmission line is required. The dynamic modeling environment SimInTech is designed to create virtual complex models of object dynamics, including three-phase ones. The use of such models is especially important in the study of wave processes in various systems.

Fig. 3 Equivalent circuit of a three-phase AC power line in the SimInTech environment and the results of the spectral analysis

In the SimInTech environment, a virtual model of a three-phase 10 kV AC power line was created, consisting of a three-phase voltage source, a special «Three-phase power line» block replacing the elementary section of a long line, a three-phase active-inductive load, measuring instruments, oscilloscopes and voltage spectrum analyzers (Fig. 3). By changing the parameters of the «Three-phase power line» block, the length of the line and the number of elementary sections with specified primary parameters can be varied. The considered 50 km line in the above equivalent circuit is represented by one block consisting of 50 elementary sections, each 1 km long. The primary parameters of the elementary sections are the same as those given above. Experimental studies carried out on the 10 kV buses of a traction substation of the West Siberian Railway made it possible to form the harmonic spectrum of the supply voltage. In the SimInTech environment, a model of a three-phase non-sinusoidal voltage source containing odd harmonics with frequencies from 50 to 1550 Hz was created. The spectral composition of the voltage, both at the line input and at the load, can be obtained for phase A using the voltage spectrum analyzers (Fig. 3). An analysis of the spectra at the beginning and at the end of the line shows an increase in the voltage of the harmonics at 1150 Hz by 8.3 times and at 1250 Hz by 14.5 times, which indicates the presence of voltage resonance. The simulation results in the SimInTech environment fully agree with the predictions based on the calculations in the MathCAD environment (Table 1).
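The chain of elementary 1 km sections can be mimicked outside SimInTech as well. The sketch below (a simplified single-phase analogue, not the SimInTech model itself) cascades 50 nominal-π sections via ABCD matrices and evaluates the load-to-input voltage ratio at a given frequency:

```python
import numpy as np

R0, L0, C0 = 0.21, 1.33e-3, 17e-9    # per-km primary parameters
N_SECTIONS, D_KM = 50, 1.0           # 50 sections of 1 km each, as in the text
R_LOAD, L_LOAD = 68.0, 0.162

def gain_end_to_input(f):
    """|U_load / U_input| of the cascaded-section line model at frequency f."""
    w = 2 * np.pi * f
    zs = (R0 + 1j * w * L0) * D_KM   # series branch of one pi-section
    ysh = 1j * w * C0 * D_KM         # total shunt admittance of one section
    a = 1 + zs * ysh / 2
    section = np.array([[a, zs],
                        [ysh * (1 + zs * ysh / 4), a]])
    abcd = np.linalg.matrix_power(section, N_SECTIONS)
    zh = R_LOAD + 1j * w * L_LOAD
    # U_in = A*U_load + B*I_load, with I_load = U_load / Z_H
    return 1 / abs(abcd[0, 0] + abcd[0, 1] / zh)
```

Near 1250 Hz this ratio rises well above unity, i.e. the harmonic voltage at the load exceeds that at the input, consistent with the voltage resonance observed in the spectra.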
4 Conclusion Thus, the prediction of the voltage and current resonances in three-phase power lines supplying non-traction consumers is carried out on the basis of the calculation in the MathCAD environment. The analysis of electromagnetic processes, including
the evaluation of the values of higher voltage harmonics, taking into account the resonant phenomena in consumers, is carried out by modeling in the SimInTech environment. In this case, the equivalent circuit of the line can contain any number of «Three-phase power line» blocks of a given length to monitor the voltage and current at any point of the line. The developed mathematical models make it possible to estimate the voltage of high-frequency harmonics for consumers powered by railway DC traction substations. In further studies, this will make it possible to assess the need for filters for consumers that are sensitive to the level of higher harmonics in the network.
References

1. Ahn, Y.H., Lee, C.M.: Harmonic analysis and its countermeasure on electric railway in Korea. IFAC Proc. 36(20), 963–968 (2003)
2. Xiang, J.W., Xu, J.Z., Wu, Q.W., et al.: Traction transformer integrated LCL filtering method for high-frequency harmonic and resonance suppression in AC train. Int. J. Electr. Power Energy Syst. 148, 108922 (2023)
3. Rakesh, R., Sankar, D., Debashis, C., Goutam, K.P.: A high efficiency static var compensation scheme using modified magnetic energy recovery switch (MERS) with parameters selection of passive elements for low harmonic injection. Int. J. Electr. Power Energy Syst. 135, 107629 (2022)
4. Roberto, Z., Giovanni, A.: Field oriented control of a multi-phase asynchronous motor with harmonic injection. IFAC Proc. 44, 6160–6165 (2011)
5. Guiping, C., et al.: YN/VD connected balance transformer-based hybrid power quality compensator for harmonic suppression and reactive power compensation of electrical railway power systems. Int. J. Electr. Power Energy Syst. 113, 481–491 (2019)
6. Zakaryukin, V.P., Kryukov, A.V., Cherepanov, A.V.: Modeling of resonant processes at higher harmonics in AC traction networks. Sovremennye Tekhnologii. Syst. Anal. Modeling 3, 214–221 (2016)
7. Kovaleva, T.V., Komyakova, O.O., Pashkova, N.V.: Dependence of wave processes in the AC traction network on the parameters of the power supply system. Omsk Sci. Bull. 3(165), 23–27 (2019)
8. Lianxue, G., Ran, Z.: An improved harmonic resonance mode analysis method for distribution power grid. Energy Rep. 8(4), 580–588 (2022)
9. Zhao, J., Ma, X., Yang, H.: Improved modal sensitivity analysis-based method for harmonic resonance analysis. Electr. Power Syst. Res. 193, 106978 (2021). https://doi.org/10.1016/j.epsr.2020.106978
10. Saim, A., Houari, A., Ait-Ahmed, M., Machmoum, M., Guerrero, J.M.: Active resonance damping and harmonics compensation in distributed generation based islanded microgrids. Electr. Power Syst. Res. 191, 106900 (2021). https://doi.org/10.1016/j.epsr.2020.106900
11. Wang, Y., Zheng, K., Zhao, J., Qin, S.: A novel premium power supply scheme against harmonic suppression for non-linear park. Electr. Power Syst. Res. 214, 108831 (2023). https://doi.org/10.1016/j.epsr.2022.108831
12. Mahmoudabad, R.K., Beheshtipour, Z., Daemi, T.: Decoupled fractional-order harmonic filter for power quality enhancement of grid-connected DG units. Electr. Power Syst. Res. 211, 108220 (2022). https://doi.org/10.1016/j.epsr.2022.108220
13. Hu, H., Shao, Y., Tang, L., Ma, J., He, Z., Gao, S.: Overview of harmonic and resonance in railway electrification systems. IEEE Trans. Ind. Appl. 54(5), 5227–5245 (2018)
14. Song, K., Mingli, W., Yang, S., Liu, Q., Agelidis, V., Konstantinou, G.: High-order harmonic resonances in traction power supplies: a review based on railway operational data, measurements, and experience. IEEE Trans. Power Electron. 35, 2501–2518 (2020)
15. Kovaleva, T.V., Pashkova, N.V.: The wave processes study in the overhead system and power lines. Izvestiia Transsiba 2(22), 71–79 (2015)
16. Kovaleva, T.V., Komyakova, O.O., Pashkova, N.V.: Resonance in the alternating current traction network. Omsk Sci. Bull. 4(172), 32–35 (2020)
The Design and Implementation of Low Voltage DC Carrier Chip and System Wenbin Duan and Yinghong Tian
Abstract DC carrier technology can save signal lines, streamline engineering wiring, and reduce costs by superimposing signals on the DC power line, and the technology is becoming increasingly promising. In this paper, we propose a framework for a low voltage DC carrier system with a central control module and analyze the voltage, power, and communication logic of the system. Based on the findings of this analysis, the receiver chip is designed, simulated, and then manufactured using the UMC180BCD process. The chip test results show that the chip can operate at 10–30 V with a quiescent current of only 74.4 µA. In this study, a DC carrier system based on this chip is constructed, and, via the cooperation of the MCU, simultaneous transmission of the operating voltage and the control signal over the DC power bus is realized. The verification results based on MATLAB simulation and a PCB test platform show that the overall system works stably and reliably. The DC carrier system described in this paper can be applied to intelligent light control, intelligent broadcasting, and other systems to save material and labor costs.

Keywords DC carrier · Level shift · Chip design
1 Introduction

There are two types of data uploading on power lines: high voltage and low voltage. High voltage often refers to power lines exceeding 220 V, such as power line carrier (PLC) systems [1–3], which have a wide range of applications in various industries, including remote meter reading and smart homes, and the technology is relatively mature [4]. Low voltage is typically defined as 30 V or less.

W. Duan (B) · Y. Tian
East China Normal University, Shanghai, China
e-mail: [email protected]
Y. Tian
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
Y. S. Shmaliy and A. Nayyar (eds.), 7th International Conference on Computing, Control and Industrial Engineering (CCIE 2023), Lecture Notes in Electrical Engineering 1047, https://doi.org/10.1007/978-981-99-2730-2_3

DC carrier technology,
also known as the superposition of a signal on a DC voltage, is a current research hotspot. With the continuous development and popularity of various new energy technologies (photovoltaic power generation, fuel cells, and lithium batteries) [5, 6], the AC power era is quietly giving way to the DC power era. In the DC power environment, power storage, power wiring, and signal transmission, conversion, and reception differ from the traditional AC approach [7]. Research on DC carriers in the DC power environment is receiving more and more attention, although it is still in its early stages and far less developed than its AC counterpart. A low-voltage DC carrier system was built by Bing Qi et al. [8], and a DC carrier communication architecture for spacecraft power distribution systems was developed by Shaokun Ma et al. from the Shandong Institute of Aerospace Electronics Technology [9]. Yuangang Chen et al. also applied DC carrier communication technology to the fiber optic control unit of the Guo Shoujing telescope [10]. Figure 1 shows the basic structure of the DC-PLC proposed in the literature [7]; it consists of two sections: one for power supply, which includes the power supply, power lines, voltage regulator, and load; and the other for communication, which includes the transmitter, power lines, and receivers. As each receiver has its own unique address code and is linked to the same transmitter via the DC power line, a receiver responds to data signals from the transmitter only when the address matches. Conventional DC carrier technology modulates the data with a high-frequency signal and loads it onto the power line for transmission by inductive or photoelectric coupling [11], and a voltage converter is required to bring the line voltage to the level needed to power the load when the voltage required by the load does not match the supply voltage.
As a result, the DC carrier technique based on DC-DC converters emerged [5, 12, 13]. This technology modulates the data "0" and "1" onto the power line through ripples of different frequencies and amplitudes, generated by DC-DC converters at different switching frequencies, and then passes them along to the receiver in the form of current or voltage. However, both of these DC carrier techniques are complicated and expensive because they require inductors, transformers, and other large devices, as well as the clock signals essential for the modulation process. Therefore, they cannot be widely adopted in application scenarios that do not otherwise require DC-DC converters, such as smart lighting, smart broadcasting, and smart robotics. The simplified intelligent control system outlined in this study is based on DC carrier technology. The system uses a linear regulator with a broad input range to provide the operating voltage needed for the load, and a level-shifting circuit for

Fig. 1 Basic structure of a DC-PLC
sending and receiving is used for data transfer between the transmitter and receiver. Moreover, the linear regulator and receiver of the simplified intelligent control system are integrated into one chip, making it simpler to assemble, more energy-efficient, and more compact.
2 The Structure of the Proposed Low Voltage Carrier System

Figure 2 depicts the structure of the low voltage carrier system. The control system can be divided into two parts: the central control module and the node load module. The central control module includes a DC power supply, an MCU, and a level shift circuit, and the node load module contains a receiver chip, an MCU, and a load. The central control module serves as a master and is powered by an AC-DC converter or a storage power source such as a solar cell. However, because all node load modules are powered by this DC power source, the number of node load modules that can be mounted on the central control module is limited, and its maximum instantaneous output power restricts the number of node load modules working at the same time. As a slave, the node load module receives data from the master and processes it to regulate the load's operating state. The power for these slaves is converted from the power line voltage by the linear regulator in the receiver chip. When data is sent, a 3.3 V digital signal from the MCU changes the DC power line voltage so that the presence of the voltage indicates data "1" and its absence indicates data "0". Likewise, when data is received, the receiver chip determines whether the data is "0" or "1" based on the DC power line voltage. An MCU is necessary between the receiver chip and the load to distinguish each load (sensor, actuator, or resistive load) and to identify the address set for each node load module. Once the data has been processed, the matching load receives the load control signal.

Fig. 2 Basic structure of the low voltage carrier system
2.1 Analysis of Operating Voltage and Power Consumption

Consider a DC power cable with a length L and a unit resistance R Ohm/m. Assume that n node load modules are connected at equal spacing along the cable. The voltage drop at the last node load module can be calculated by Eq. (1):

ΔV = (LR/n)·[nIt + (n − 1)It + ··· + It] = (n + 1)LRIt/2   (1)

where It = I_MCU + I_LOAD + I_RECEIVE_CHIP; I_MCU denotes the quiescent current consumed by the MCU, I_LOAD represents the load current, and I_RECEIVE_CHIP is the quiescent current of the receiver chip. According to this formula, in order to mount more node load modules on a longer cable, one must, on the one hand, widen the operating voltage range of the node load module; on the other hand, since it is difficult to reduce the current consumed by the MCU and the load, the only remaining option is to reduce the quiescent current of the receiver chip.
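As a quick sanity check, the closed form of Eq. (1) can be compared against the explicit per-segment sum (a small illustrative script; the numeric values in the test are arbitrary examples, not from the paper):

```python
def delta_v(n, cable_len, r_per_m, i_t):
    """Closed-form voltage drop at the last node module, per Eq. (1)."""
    return (n + 1) * cable_len * r_per_m * i_t / 2

def delta_v_sum(n, cable_len, r_per_m, i_t):
    """Same drop, summed segment by segment: the k-th segment from the source
    carries the current of the k remaining downstream modules, k * It."""
    r_seg = cable_len * r_per_m / n   # resistance of one cable segment
    return r_seg * sum(k * i_t for k in range(n, 0, -1))
```

Both functions agree, confirming the arithmetic-series reduction used in Eq. (1).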
2.2 Level Conversion Circuit of the Central Control Module

As shown in Fig. 3, the base-emitter voltage of the transistor Q1 exceeds its threshold voltage when the MCU of the central control module sends a high level, that is, when the DATA_TX pin is high. The MN transistor then enters the cutoff region as the conducting Q1 pulls down its gate voltage. Because sufficient current flows through the resistor R1, the MP transistor's source-gate voltage exceeds the absolute value of its threshold voltage. As a consequence, BUS1 or BUS2 is linked to VDC. Note that the voltage drop across resistor R1 cannot become large enough to break down the MP transistor, owing to the Zener diode D1. When the DATA_TX pin sends a low level, Q1 is not turned on and no current flows through R1, so there is no voltage drop across the resistor. Hence, the gate voltage of MP equals VDC and MP is in the cutoff region, while the gate voltage of MN is sufficient to switch the MN transistor on and connect the BUS to GND. D1 also protects MN.

Fig. 3 Level conversion circuit of the central control module and diagram of data conversion
2.3 System Communication Logic Code Stream

The logic code flow sent from the central control module to the node load module is shown in Fig. 4. Because the node modules draw their power from the bus, the central control module must keep the bus at a high voltage during idle time. The central control module sends a 200 µs low level to inform the node modules that they need to be ready to receive a signal, and then sends 16-bit data with a code width of 50 µs, consisting of a 1-bit start bit, a 5-bit address, an 8-bit data field, a 1-bit parity bit, and a 1-bit stop bit. The 5-bit address means that a central control module can connect up to 32 node load modules, and the 8-bit data field carries the control signal for the load.
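The frame layout above can be sketched as follows. The bit order, parity polarity, and start/stop levels here are assumptions for illustration; the paper only fixes the field widths:

```python
def encode_frame(address, data):
    """Build the 16-bit frame: start(1) + address(5) + data(8) + parity(1) + stop(1)."""
    assert 0 <= address < 32 and 0 <= data < 256
    addr_bits = [(address >> i) & 1 for i in range(4, -1, -1)]  # MSB first (assumed)
    data_bits = [(data >> i) & 1 for i in range(7, -1, -1)]
    parity = sum(addr_bits + data_bits) % 2                     # even parity (assumed)
    return [1] + addr_bits + data_bits + [parity] + [1]         # start/stop = 1 (assumed)

def decode_frame(bits):
    """Return (address, data) if the parity checks out, else None."""
    addr_bits, data_bits, parity = bits[1:6], bits[6:14], bits[14]
    if sum(addr_bits + data_bits) % 2 != parity:
        return None
    to_int = lambda b: int("".join(map(str, b)), 2)
    return to_int(addr_bits), to_int(data_bits)
```

Each of the 16 bits would occupy one 50 µs slot on the bus, preceded by the 200 µs wake-up low level.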
3 Design of Receiver Chip The receiver chip is based on the UMC180BCD process and is connected to the central control module through a DC power line with a maximum bus input voltage of 30 V. The two primary functions are to supply the load and the MCU with the required voltage for operation, and to convert the logic data on BUS1 and BUS2 into logic data that can be processed by the MCU. The overall layout of the receiver chip and the packaged receiver chip are shown in Fig. 5.
Fig. 4 Diagram of logic code flow

Fig. 5 Layout and package of the receiver chip
3.1 Power Supply Module

The power supply module includes a rectifier circuit, a pre-buck circuit, a bandgap reference, and an LDO, as shown in Fig. 6. The half-bridge rectifier circuit is designed to avoid polarity differentiation between BUS1 and BUS2, which greatly reduces the workload of the installation process. To prevent significant voltage variations, a large electrolytic capacitor must be connected to VDD, since the bus voltage changes while data is being sent. To make sure that the electrolytic capacitor can supply adequate power during the time when the bus is transmitting data "0", the size of the electrolytic capacitor is determined by the load current and the voltage difference between the load and VDD. The pre-buck circuit reduces VDD to about 5 V for the bandgap reference and LDOs; the bandgap generates a reference voltage of about 1.2 V; LDO1 outputs a voltage of 3.3 V for MCU operation; LDO2 generates the working voltage required by the load. The feedback resistor of LDO2 is placed outside the chip in order to adapt to different load working voltages, while the source of the PMOS power transistor of LDO2 is connected to VDD to increase the output voltage range. Simulation Results of the Pre-buck Module. Figure 7 shows the DC sweep results of the pre-buck module. As can be seen, the pre-buck module successfully steps down the bus voltage VDD at various PVT corners. Figure 7(a) displays the PRE_VDD output at heavy load (1 mA) and Fig. 7(b) presents the output at light load (1 µA) of the pre-buck module. The highest voltage is 5.42 V at the corner "120 °C-light load-FF-30 V" and the lowest voltage is 3.14 V at the corner "−40 °C-heavy load-SS-9.6 V".
Therefore, the PRE_VDD output of the pre-buck module does not break down the MOS transistors, whose withstand voltage is 5.5 V, as the input VDD changes from 10 to 30 V, and the pre-buck module can normally supply power to the bandgap and LDO modules. Simulation Results of the Bandgap. Figure 8(a) shows the DC temperature sweep results of the output voltage VREF of the voltage-mode bandgap module at three

Fig. 6 Diagram of power supply module in the receiver chip
Fig. 7 DC sweep simulation of pre-buck module: (a) heavy load current; (b) light load current
Fig. 8 Simulation results of bandgap: (a) temperature sweep simulation; (b) Monte Carlo simulation
Table 1 TC and average of VREF in the three types of corners

Corner | VREFmean, V | TC, ppm/°C
TT     | 1.171       | 44.67
SS     | 1.167       | 46.11
FF     | 1.176       | 54.33
types of corners. Due to the first-order compensation method, the overall curve resembles a downward-opening quadratic function. The maximum voltage variation, 10.22 mV, appears at the FF corner. The temperature coefficient can be calculated by Eq. (2):

TC = (VREFmax − VREFmin) / (VREFmean (Tmax − Tmin)) × 10⁶ (ppm/°C)   (2)
The results of the calculations are shown in Table 1. The largest of these temperature coefficients occurs at the FF corner, at 54.33 ppm/°C. Figure 8(b) displays the Monte Carlo simulation results of the bandgap's output voltage. As the standard deviation of VREF is 14.94 mV, the actual output voltage of the bandgap will fall in the range of 1.117–1.226 V. Simulation Results of the LDO. Figure 9(a) presents the VDD power supply swept from 0 to 30 V at the 25 °C-TT corner for LDO1 and LDO2, whose output voltage is 5 V; the line regulation can be calculated as 0.09 mV/V for LDO1 and 0.13 mV/V for LDO2. Figure 9(b) displays the load current swept from 0 to 10 mA for LDO1 and LDO2 at three types of corners, from which the load regulation is easily calculated: the maximum load regulation of LDO1 is 113 µV/mA, and that of LDO2 is 167 µV/mA. Figure 10 illustrates the Monte Carlo simulation results for the output voltages of LDO1 and LDO2. It is thus clear that as the bus voltage changes from 10 to 30 V, the LDO outputs still provide the MCU and load with a consistent power supply.
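The temperature-coefficient calculation of Eq. (2) can be checked against the FF-corner numbers above. This sketch assumes the temperature sweep spans −40 to 120 °C, consistent with the corner labels quoted earlier (the exact sweep range is not stated in the text):

```python
def temp_coefficient(v_max, v_min, v_mean, t_max, t_min):
    """Temperature coefficient in ppm/degC, per Eq. (2)."""
    return (v_max - v_min) / (v_mean * (t_max - t_min)) * 1e6

# FF corner: a 10.22 mV spread around the 1.176 V mean (Table 1 and the text),
# over an assumed -40..120 degC sweep.
tc_ff = temp_coefficient(1.176 + 0.01022 / 2, 1.176 - 0.01022 / 2, 1.176, 120, -40)
```

Under these assumptions the result reproduces the 54.33 ppm/°C figure of Table 1 to within rounding.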
Fig. 9 Sweep simulation results of LDO1 and LDO2: (a) DC sweep simulation; (b) load current sweep simulation
Fig. 10 Monte Carlo simulation results of LDO: (a) results of LDO1; (b) results of LDO2
3.2 Data Conversion Module

The data conversion module, shown in Fig. 11, consists of two transmission gates, two level shift circuits, and a logic OR gate. When BUS1 is used for data transmission, BUS1 switches on TG1, while BUS2 is connected to the ground of the receiver chip, which is also the ground of the node load module. When BUS2 is used for data transmission, TG2 is turned on and BUS1 is connected to ground. Afterwards, the voltage signal of the bus is sent to a level shift circuit, converted into a logic signal that the MCU can process, and passed through an OR gate to avoid bus polarity problems during system construction.

Fig. 11 Structure diagram of the data conversion module in the receiver chip
Fig. 12 Simulation of the power module: (a) start-up simulation; (b) transient response of LDO
3.3 Receiver Chip Overall Simulation and Performance Parameters

Figure 12 presents the transient simulation of the receiver chip. The start-up simulation results of each module at power-on are shown in Fig. 12(a), which demonstrates that the output voltage of each module successfully stabilizes at its set value as the bus voltage rises at a rate of 10 V/ms. Figure 12(b) shows the transient response simulation of the LDO output voltages when the load current switches between 0 and 10 mA at a rate of 1 mA/µs. The overshoot of LDO1 is 175 mV and its undershoot is 211 mV, while the overshoot of LDO2 is 202 mV and its undershoot is 208 mV; both LDOs recover from the overshoot and undershoot and stabilize their output voltages within a few tens of microseconds. The receive function is verified at the TT corner, 25 °C, and a 10 pF load capacitance, and the results are displayed in Fig. 13. The transmission rate of the bus voltage in Fig. 13(a) is 20 kbps. The output signal has a rising edge delay of 331 ns and a falling edge delay of 121 ns, and the receiver module works normally. The transmission rate of the bus voltage in Fig. 13(b) is 1 Mbps, which shows that the receiver module can still receive the bus data. Although the output high level and duty cycle show non-negligible deviations, the 1 Mbps simulation still lays a foundation for extended applications of the receiver chip.
Fig. 13 Simulation of the data conversion module: (a) transmission rate 20 kbps; (b) transmission rate 1 Mbps

Fig. 14 PCB testing schematic
4 System Verification

In order to verify the feasibility and stability of the DC carrier-based intelligent control system proposed in this paper, it is evaluated by MATLAB simulation and PCB testing. As shown in Fig. 14, the central control module is connected to two node load modules, whose addresses are set to "10101" and "00001" respectively, and data for the corresponding addresses is sent by the central control module for verification. Figure 15 shows the process of sending data to the loads with addresses "10101" and "00001" in the MATLAB simulation. The logic data is accurately converted into high and low voltage on the bus by the central control module, and it can be seen that only when a node load module has the corresponding address does the MCU in that node load module send data to the load. Figure 16 presents the waveforms detected by the oscilloscope in the actual PCB test. The bus voltage in the left oscilloscope window is 27.4 V, and the bus voltage in the right oscilloscope window is 9.1 V. It is clear that the intelligent control system based on DC carrier technology designed in this paper can function normally when the DC supply voltage is between 10 and 30 V. Additionally, the central control module is capable of transmitting the proper logic data to the destination node load module through the DC power line.
Fig. 15 Process of sending data with addresses "10101" and "00001" in MATLAB
Fig. 16 Process of sending data with addresses "10101" and "00001" in PCB test
Table 2 The simulation and test results of the receiver chip

Performance index      | Simulation       | Test
Operating voltage, V   | 10–30            | 9.1–31
Quiescent current, µA  | 47.5 (25 °C-TT)  | 74.4
Transmission rate, bps | 20 k             | 20 k
Transmission delay, ns | 226              | 286
Output low level, V    | 0.042            | 0.07
Output high level, V   | 3.259–3.291      | 3.31
The simulation and test results for the receiver chip presented in this work are listed in Table 2. In terms of quiescent current and transmission delay, the test results are worse than the simulation results. Leakage current and process corner deviation are the major causes of the rise in quiescent current, whereas the parasitic effects of the PCB and the layout are the main causes of the increase in transmission delay.
5 Summary

A low voltage DC carrier system is implemented, which supplies power and sends data from the central control module to the node load modules at the same time. A receiver chip for the system is also designed and realized based on the UMC180BCD process. These techniques can be applied in intelligent light control, intelligent broadcasting, and other similar systems to save material and labor costs.
References

1. Yang, C.Y., Chen, Y.: Design of a low-voltage DC carrier communication circuit. Electron. Des. Eng. 29(8), 101–105 (2021)
2. Lu, M.J., Liang, L., Yu, X.Y.: The design of the digital multiplexer based on power carrier communication on sports venues. Phys. Procedia 24, 2273–2278 (2012)
3. Pavlidou, N., Han Vinck, A.J., Yazdani, J., Honary, B.: Power line communications: state of the art and future trends. IEEE Commun. Mag. 41(4), 34–40 (2003)
4. Sung, G.N., Wu, C.M., Huang, C.M.: The sensor network using DC power line communication bus. In: 2015 IEEE Symposium on Computer Applications & Industrial Electronics (ISCAIE), pp. 197–200 (2015)
5. Kohama, T., Hasebe, S., Tsuji, S.: Simple bidirectional power line communication with switching converters in DC power distribution network. In: 2019 IEEE International Conference on Industrial Technology (ICIT), pp. 539–543 (2019)
6. Daldal, N., Uzun, B., Bekiroğlu, E.: Measurement and evaluation of solar panel data via DC power line. In: 2022 10th International Conference on Smart Grid (icSmartGrid), pp. 280–284 (2022)
7. IEEE Standard for DC Power Transmission and Communication to DC Loads. IEEE Std 2847-2021, pp. 1–86 (2022)
8. Qi, B., Wang, C., Li, B.: Design of low-voltage DC power line carrier communication system. Smart Grid 5(8), 822–826 (2017)
9. Ma, S.K., Zhang, L.L.: Design and analysis of DC carrier communication architecture for spacecraft distribution system. Spacecraft Eng. 31(5), 80–86 (2022)
10. Chen, Y.G., Yang, M.S., Wang, M.H.: Design of LAMOST control system based on DC carrier communication. Measur. Control Technol. 40(10), 79–84 (2021)
11. Mandourarakis, I., Agorastou, Z., Koutroulis, E., Karystinos, G.N., Siskos, S.: On-chip power line communication for cascaded h-bridge power converters. In: 2022 11th International Conference on Modern Circuits and Systems Technologies (MOCAST), pp. 1–5 (2022)
12. Sun, Z.C., Liter, S.: A novel power line communication controller designed for point-of-load dc-dc converters. In: 2010 IEEE International Conference on Power and Energy, pp. 667–671 (2010)
13. Stefanutti, W., Saggini, S., Mattavelli, P., Ghioni, M.: Power line communication in digitally controlled DC–DC converters using switching frequency modulation. IEEE Trans. Industr. Electron. 55(4), 1509–1518 (2008)
Fault Location of HVDC Transmission Line Based on VMD-TEO Yunhai Hou and Yubing Zhao
Abstract To address the modal aliasing, endpoint effects and stopping-criterion problems of empirical mode decomposition (EMD) in Hilbert–Huang transform (HHT) based traveling-wave sensing, this paper proposes a fault-location scheme for HVDC transmission lines based on variational mode decomposition (VMD). The traveling-wave signal collected at the fault is decomposed by VMD, and the Teager energy operator (TEO) is then combined with the HHT to analyze the signal. A two-terminal traveling-wave ranging algorithm is proposed to address the dependence of traveling-wave location on wave speed and fault position. The initial fault traveling-wave signal is decomposed into modulus components for computation, so neither the fault inception time nor the arrival time of the reflected wave needs to be detected. The fault point is calculated by substituting the high-frequency component of the Teager energy spectrum into the two-terminal traveling-wave ranging formula. An MMC-HVDC model is built in PSCAD. Simulation results demonstrate that the method accurately determines the instantaneous frequency, instantaneous amplitude and traveling-wave arrival time, and improves the accuracy of transmission-line fault location. Keywords Variational mode decomposition · Double-ended traveling wave ranging · Modular multilevel converter HVDC transmission line · Teager energy operator
1 Introduction The development of the economy and society has led to a significant increase in the scale of the power grid. The growth of long-distance transmission has placed HVDC transmission in an important position. An outage of a DC transmission line has a large impact on the system hardware, which Y. Hou (B) · Y. Zhao School of Electrical and Electronic Engineering, Changchun University, Changchun, Jilin, China e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 Y. S. Shmaliy and A. Nayyar (eds.), 7th International Conference on Computing, Control and Industrial Engineering (CCIE 2023), Lecture Notes in Electrical Engineering 1047, https://doi.org/10.1007/978-981-99-2730-2_4
will affect the normal production and daily life of power users. Against this background, it is urgent to find and isolate transmission-line faults quickly in order to ensure the continuity and stability of transmission [1]. The traveling-wave method is now the main approach to transmission-line fault location [2]. The HHT method suffers from over-envelope, under-envelope and modal aliasing problems, which affect traveling-wave detection [3]. In this paper, we propose a novel traveling-wave detection method for the aforementioned problems: a fault-location method for HVDC transmission lines based on VMD-TEO. The VMD method is numerically stable and can effectively decompose the fault traveling-wave signal. The Teager energy operator can track the signal in real time and accurately extract the energy mutation point. Combined with the principle of double-ended traveling-wave fault location, a new fault-location algorithm that is unaffected by line length and wave speed is proposed to calculate the fault position. Simulation experiments verify the correctness of the VMD-TEO method for different fault types in MMC-HVDC [4].
2 VMD Principle and Algorithm

2.1 Basic Principles of VMD

The variational mode decomposition method is a signal-processing method first proposed by Konstantin Dragomiretskiy [5]. Since a multicomponent signal consists of multiple single-component FM-AM signals, within the variational framework each component of the signal can be separated adaptively in the frequency domain by variational mode decomposition (VMD), which iteratively searches for the optimal solution of the variational model and determines each component's center frequency and bandwidth. The decomposition is divided into the construction of the variational problem and its solution [6]:

(1) Construction of the variational problem: the goal is to find the objective function. The original signal f(t) is decomposed by VMD into K band-limited modal functions, each with a center frequency, and the objective is to minimize the sum of the estimated bandwidths of the modal components. The constrained variational model is:

$$\min_{\{u_k\},\{\omega_k\}} \sum_k \left\| \partial_t \left[ \left( \delta(t) + \frac{j}{\pi t} \right) * u_k(t) \right] e^{-j\omega_k t} \right\|_2^2 \quad \text{s.t.} \quad \sum_k u_k = f \qquad (1)$$
where {u_1, u_2, ..., u_K} are the K IMF components, {ω_1, ω_2, ..., ω_K} are their center frequencies, ∂_t is the partial derivative with respect to time, and δ(t) is the unit impulse function.

(2) Solution of the variational problem: after the first step, two parameters are introduced, the Lagrange multiplier λ(t) and the quadratic penalty factor α. The multiplier λ(t) turns the constrained problem into an unconstrained one, and the quadratic penalty factor α provides strong robustness to noise. Combining the two gives the extended Lagrangian:

$$L(\{u_k\},\{\omega_k\},\lambda) = \alpha \sum_k \left\| \partial_t \left[ \left( \delta(t) + \frac{j}{\pi t} \right) * u_k(t) \right] e^{-j\omega_k t} \right\|_2^2 + \left\| f(t) - \sum_k u_k(t) \right\|_2^2 + \left\langle \lambda(t),\, f(t) - \sum_k u_k(t) \right\rangle \qquad (2)$$
To find the optimal solution of the extended Lagrangian, namely its saddle point, the alternating direction method of multipliers (ADMM) repeatedly updates u_k^{n+1}, ω_k^{n+1} and λ^{n+1}:

$$u_k^{n+1} = \arg\min_{u_k \in X} \left\{ \alpha \left\| \partial_t \left[ \left( \delta(t) + \frac{j}{\pi t} \right) * u_k(t) \right] e^{-j\omega_k t} \right\|_2^2 + \left\| f(t) - \sum_i u_i(t) + \frac{\lambda(t)}{2} \right\|_2^2 \right\} \qquad (3)$$
(3) By Parseval's theorem, the problem is transformed into the frequency domain:

$$\hat{u}_k^{n+1} = \arg\min_{\hat{u}_k} \left\{ \alpha \left\| j\omega \left[ (1 + \operatorname{sgn}(\omega + \omega_k))\, \hat{u}_k(\omega + \omega_k) \right] \right\|_2^2 + \left\| \hat{f}(\omega) - \sum_i \hat{u}_i(\omega) + \frac{\hat{\lambda}(\omega)}{2} \right\|_2^2 \right\} \qquad (4)$$
(4) Substituting ω ← ω − ω_k gives:

$$\hat{u}_k^{n+1} = \arg\min_{\hat{u}_k} \left\{ \alpha \left\| j(\omega - \omega_k) \left[ (1 + \operatorname{sgn}(\omega))\, \hat{u}_k(\omega) \right] \right\|_2^2 + \left\| \hat{f}(\omega) - \sum_i \hat{u}_i(\omega) + \frac{\hat{\lambda}(\omega)}{2} \right\|_2^2 \right\} \qquad (5)$$

Rewritten as an integral over the nonnegative frequency interval:
$$\hat{u}_k^{n+1} = \arg\min_{\hat{u}_k} \int_0^\infty \left\{ 4\alpha(\omega - \omega_k)^2 \left| \hat{u}_k(\omega) \right|^2 + 2\left| \hat{f}(\omega) - \sum_i \hat{u}_i(\omega) + \frac{\hat{\lambda}(\omega)}{2} \right|^2 \right\} d\omega \qquad (6)$$
The solution of this quadratic optimization over the nonnegative frequency interval is:

$$\hat{u}_k^{n+1}(\omega) = \frac{\hat{f}(\omega) - \sum_{i \neq k} \hat{u}_i(\omega) + \frac{\hat{\lambda}(\omega)}{2}}{1 + 2\alpha(\omega - \omega_k)^2} \qquad (7)$$
where û_k^{n+1}(ω) is the Wiener filtering of the current residual f̂(ω) − Σ_{i≠k} û_i(ω); ω_k^{n+1} is the centroid of the power spectrum of u_k(t); and û_k(ω), f̂(ω), λ̂(ω) are the Fourier transforms of u_k(t), f(t), λ(t). Similarly, the center frequency is updated:

$$\omega_k^{n+1} = \arg\min_{\omega_k} \left\| \partial_t \left[ \left( \delta(t) + \frac{j}{\pi t} \right) * u_k(t) \right] e^{-j\omega_k t} \right\|_2^2 \qquad (8)$$
In the frequency domain this becomes:

$$\omega_k^{n+1} = \arg\min_{\omega_k} \int_0^\infty (\omega - \omega_k)^2 \left| \hat{u}_k(\omega) \right|^2 d\omega \qquad (9)$$
whose solution is:

$$\omega_k^{n+1} = \frac{\int_0^\infty \omega \left| \hat{u}_k^{n+1}(\omega) \right|^2 d\omega}{\int_0^\infty \left| \hat{u}_k^{n+1}(\omega) \right|^2 d\omega} \qquad (10)$$
2.2 VMD Algorithm Process

1. Initialize {û_k^1}, {ω_k^1} and λ̂^1; set n = 0.
2. Let n = n + 1 and enter the loop.
3. For ω ≥ 0, update û_k according to (7) and ω_k according to (10).
4. Let k = k + 1 and check whether k = K; if so, exit the inner loop, otherwise repeat step 3 until k = K.
5. Update the Lagrange multiplier:

$$\hat{\lambda}^{n+1}(\omega) \leftarrow \hat{\lambda}^n(\omega) + \tau \left[ \hat{f}(\omega) - \sum_k \hat{u}_k^{n+1}(\omega) \right] \qquad (11)$$
6. Repeat steps 2–5 until the iteration termination condition is satisfied:

$$\sum_k \left\| \hat{u}_k^{n+1} - \hat{u}_k^n \right\|_2^2 \big/ \left\| \hat{u}_k^n \right\|_2^2 < \varepsilon \qquad (12)$$

7. End the iteration and obtain the K IMF components.
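The frequency-domain update loop above (Eqs. (7), (10)–(12)) can be sketched in Python with NumPy. This is a minimal illustration, not the authors' implementation: the peak-based seeding of the center frequencies, the one-sided-spectrum handling and all parameter defaults are our own assumptions.

```python
import numpy as np

def vmd(f, K, alpha=2000.0, tau=0.0, tol=1e-7, max_iter=500, fs=1.0):
    """Minimal frequency-domain VMD via ADMM updates (Eqs. (7), (10)-(12))."""
    N = len(f)
    freqs = np.fft.fftfreq(N, d=1.0 / fs)        # frequency axis (Hz)
    pos = freqs >= 0
    f_hat = np.where(pos, np.fft.fft(f), 0.0)    # one-sided (analytic) spectrum
    # seed center frequencies at the K largest spectral peaks (our choice)
    omega = np.sort(freqs[np.argsort(np.abs(f_hat))[-K:]])
    u_hat = np.zeros((K, N), dtype=complex)
    lam = np.zeros(N, dtype=complex)             # Lagrange multiplier
    for _ in range(max_iter):
        u_prev = u_hat.copy()
        for k in range(K):
            others = u_hat.sum(axis=0) - u_hat[k]
            # Wiener-filter mode update, Eq. (7)
            u_hat[k] = np.where(
                pos,
                (f_hat - others + lam / 2) / (1 + 2 * alpha * (freqs - omega[k]) ** 2),
                0.0)
            # center frequency = power-spectrum centroid, Eq. (10)
            p = np.abs(u_hat[k][pos]) ** 2
            if p.sum() > 0:
                omega[k] = (freqs[pos] * p).sum() / p.sum()
        # dual ascent on the multiplier, Eq. (11); tau = 0 disables it
        lam = lam + tau * (f_hat - u_hat.sum(axis=0))
        # termination condition, Eq. (12)
        diff = sum(np.linalg.norm(u_hat[k] - u_prev[k]) ** 2 /
                   (np.linalg.norm(u_prev[k]) ** 2 + 1e-14) for k in range(K))
        if diff < tol:
            break
    modes = np.real(np.fft.ifft(2 * u_hat, axis=1))  # K IMFs in the time domain
    return modes, omega

# two-tone demo: the 50 Hz and 200 Hz components should be separated
fs = 1000.0
t = np.arange(1000) / fs
sig = np.cos(2 * np.pi * 50 * t) + np.cos(2 * np.pi * 200 * t)
modes, omega = vmd(sig, K=2, fs=fs)
```

On this demo the recovered center frequencies land near 50 Hz and 200 Hz and the modes sum back to the input signal, which is the behavior the stopping rule (12) certifies.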
2.3 Teager Energy Operator

The Teager energy operator (TEO) is a signal-analysis method proposed by H. M. Teager [7]. It is a nonlinear operator with a small computational cost that detects signal energy quickly and accurately, and is generally used for real-time signal detection. It reflects the instantaneous amplitude and instantaneous frequency. In the instantaneous energy spectrum, the first mutation point corresponds to the time at which the fault traveling wave reaches the detection point. For a continuous-time signal x(t), the TEO is:

$$\psi[x(t)] = \left[ x'(t) \right]^2 - x(t)\,x''(t) \qquad (13)$$

where x(t), x'(t), x''(t) are the original signal and its first and second derivatives, respectively, and ψ is the energy operator. For a discrete signal x(n), the energy operator is expressed as:

$$\psi[x(n)] = x^2(n) - x(n+1)\,x(n-1) \qquad (14)$$
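Equation (14) is a three-sample sliding computation, easy to sketch; the example signals below are ours, not from the paper.

```python
import numpy as np

def teager(x):
    """Discrete Teager energy operator, Eq. (14): psi[x(n)] = x(n)^2 - x(n+1)*x(n-1)."""
    x = np.asarray(x, dtype=float)
    psi = np.zeros_like(x)
    psi[1:-1] = x[1:-1] ** 2 - x[2:] * x[:-2]
    return psi

# For a pure tone A*cos(Omega*n) the operator returns A^2*sin^2(Omega) exactly,
# so it tracks instantaneous amplitude and frequency from a 3-sample window.
n = np.arange(200)
tone = np.cos(0.1 * n)

# A sudden disturbance stands out as an extreme point of the energy curve,
# which is how the wave-head arrival instant is picked in the text.
impulse = np.zeros(100)
impulse[50] = 1.0
```

For the tone, every interior value of `teager(tone)` equals sin²(0.1); for the impulse, the energy peaks exactly at sample 50, mirroring how the first extreme point of the Teager energy curve marks the arrival of the fault wave head.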
2.4 VMD-TEO Traveling Wave Detection Method

The main steps are as follows [8]: (1) Synchronously acquire the voltage signals of the traveling wave when it first reaches the M- and N-terminal detection points after the fault occurs. (2) Obtain the line-mode and zero-mode components of the traveling wave by phase-mode transformation. (3) Perform VMD decomposition of the line-mode component to obtain K IMF components, and screen out the high-frequency component. (4) Calculate the Teager energy value; from the extreme points of the Teager energy curve, obtain the times at which the traveling wave reaches the measuring endpoints, and calculate the fault distance.
2.5 Two-Terminal Traveling Wave Ranging Algorithm

The traditional two-terminal traveling-wave method locates the fault from the wave speed of the fault traveling wave and its arrival times at the detection points [9]. Positioning accuracy is affected by the wave speed: it differs between lines, and even on the same line it varies with time and position. Therefore, a double-ended traveling-wave ranging method that is independent of wave speed is proposed. The double-ended ranging principle is shown in Fig. 1. The distances to the fault from the M and N terminals are:

$$d_M = \frac{L + v(t_M - t_N)}{2}, \qquad d_N = \frac{L + v(t_N - t_M)}{2} \qquad (15)$$

where t_M and t_N are the times at which the first fault wave head reaches terminals M and N, L is the length of line MN, and v is the wave speed. After a fault occurs on the transmission line, the fault traveling wave propagates to the M and N terminals. Decomposing the detected signals into modulus components through the Karenbauer transformation gives the distances:

$$d_M = \frac{L + v_1(t_{M1} - t_{N1})}{2}, \qquad d_N = \frac{L + v_0(t_{N0} - t_{M0})}{2} \qquad (16)$$

From the propagation of the traveling waves it follows that:

$$\begin{cases} L = v_1(t_{M1} - t_0 + t_{N1} - t_0) \\ L = v_0(t_{M0} - t_0 + t_{N0} - t_0) \\ v_0(t_{M0} - t_0) = v_1(t_{M1} - t_0) \end{cases} \qquad (17)$$

The fault distance d_M is then obtained from the above formulas:

$$d_M = \frac{t_{M0} - t_{M1}}{t_{M0} - t_{M1} + t_{N0} - t_{N1}} \, L \qquad (18)$$

where t_M1, t_N1, v_1 are the arrival times of the line-mode wave at the M and N terminals and its wave speed, and t_M0, t_N0, v_0 are the arrival times of the zero-mode wave at the M and N terminals and its wave speed.

Fig. 1 Double-ended traveling wave fault schematic diagram
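Formula (18) is easy to exercise with synthetic arrival times. In the sketch below, the line length matches the later simulation (400 km), but the fault distance, mode speeds v1 and v0, and fault time t0 are illustrative values of our own; by construction of (18) they cancel, so any choice recovers the assumed distance.

```python
def fault_distance(t_M1, t_N1, t_M0, t_N0, L):
    """Wave-speed-independent two-terminal ranging, Eq. (18).

    t_M1, t_N1: line-mode wave-head arrival times at terminals M and N (s)
    t_M0, t_N0: zero-mode wave-head arrival times at terminals M and N (s)
    L: line length (km); returns the fault distance from terminal M (km).
    """
    return (t_M0 - t_M1) / ((t_M0 - t_M1) + (t_N0 - t_N1)) * L

# Synthetic check on a 400 km line with an assumed fault at d = 150 km.
L, d, v1, v0, t0 = 400.0, 150.0, 2.95e5, 2.0e5, 2.0
t_M1, t_N1 = t0 + d / v1, t0 + (L - d) / v1   # line-mode arrivals at M and N
t_M0, t_N0 = t0 + d / v0, t0 + (L - d) / v0   # zero-mode arrivals at M and N
d_est = fault_distance(t_M1, t_N1, t_M0, t_N0, L)   # recovers d ≈ 150 km
```

Because both the numerator and denominator of (18) scale with (1/v0 − 1/v1), the estimate is independent of the wave speeds, which is exactly the property the method relies on.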
Fig. 2 MMC-HVDC topology structure diagram
Therefore, the fault location is determined only by the arrival times of the waves at both ends and the known line length, and is independent of the wave speed and of uncertainty in the actual transmission-line length, which reduces the influence of wave speed on the location result and improves fault-location precision.
3 Simulation and Result Analysis

3.1 Set Up the Simulation Model

A simulation model of a bipolar MMC-HVDC transmission line is established in PSCAD/EMTDC; the MMC-HVDC [10] topology is illustrated in Fig. 2. The AC connection is a 220 kV three-phase supply feeding a 100 MVA transformer with a voltage ratio of 220 kV/20 kV and a leakage reactance of 0.2 p.u.; the transformer output connects to the modular multilevel converter (MMC). AC filters on both sides filter the harmonics, and the middle section is the DC transmission line, whose voltage and current are 600 kV and 1 kA respectively; the transmission distance is 400 km [11]. The line-mode component of the fault current is obtained from the simulation model through a decoupling circuit. The sampling frequency is 0.1 MHz, i.e. 5000 sampling points, and the simulation time is 2 s. Finally, the above experimental data are analyzed by VMD in MATLAB [12].
3.2 Simulation Validation

After VMD decomposition, the TEO instantaneous energy spectrum is obtained by analyzing the high-frequency component, and the arrival time of the fault wave head is found. The VMD parameter settings are: modal number K = 3, quadratic penalty factor α = 2000, τ = 0. When a fault occurs, the VMD decomposition results are as shown in Fig. 3. Because the IMF1 component is very stable and the parameters have little effect on it, only the IMF1 component is analyzed. The first maximum value is
Fig. 3 Line mode component signal and VMD decomposition result diagram
Fig. 4 M-side IMF1 component TEO transient energy spectrum
Fig. 5 N-side IMF1 component TEO transient energy spectrum
the time at which the first fault wave head reaches the bus measuring point. The Teager energy value of each component is calculated, and the TEO instantaneous energy spectra of the M- and N-side components are illustrated in Figs. 4 and 5. Analyzing the TEO instantaneous energy spectra, the first extreme points at the M and N terminals appear at sampling points 143 and 136 respectively, and the time of the first sampling point is 2 s. Therefore, the first wave heads arrive at the M and N terminals at 2.00143 s and 2.00136 s respectively after the fault occurs. The fault-location result is 200.116 km against an actual fault distance of 200 km, an error of 0.116 km, which is satisfactory. The fault-location results for different fault distances, transition resistances and fault types are listed below, where PG denotes positive-pole grounding, NG negative-pole grounding, and PN a pole-to-pole short circuit [13]. The data in Table 1 show that fault type and transition resistance have a negligible effect on the traveling-wave fault-location method; the maximum error percentage is only 0.22%. The error is smallest at the midpoint of the transmission line and grows away from the midpoint. Increasing the sampling frequency reduces the error and makes the ranging more accurate.
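The conversion from an extreme-point sample index to an arrival time used above is a one-liner; a sketch with the values from the simulation (0.1 MHz sampling, first sample at 2 s):

```python
def arrival_time(sample_index, fs, t_start):
    """Arrival time of the wave head from the first TEO extreme point's sample index."""
    return t_start + sample_index / fs

fs = 0.1e6     # 0.1 MHz sampling frequency from the simulation
t_start = 2.0  # time of the first sampling point (s)
t_M = arrival_time(143, fs, t_start)  # M terminal: 2.00143 s
t_N = arrival_time(136, fs, t_start)  # N terminal: 2.00136 s
```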
Table 1 Fault location results of different fault distances and different fault types

Fault distance/km   Fault type   Transition resistance/Ω   Location result/km   Error/km
30                  PG           50                        30.224               +0.224
30                  NG           100                       30.224               +0.224
30                  PN           200                       30.224               +0.224
100                 PG           50                        100.097              +0.097
100                 NG           100                       100.097              +0.097
100                 PN           200                       100.097              +0.097
150                 PG           50                        150                  0
150                 NG           100                       150                  0
150                 PN           200                       150                  0
200                 PG           50                        200.116              +0.116
200                 NG           100                       200.116              +0.116
200                 PN           200                       200.116              +0.116
350                 PG           50                        349.792              −0.208
350                 NG           100                       349.792              −0.208
350                 PN           200                       349.792              −0.208
4 Conclusion This paper introduces the principle and characteristics of VMD and the Teager energy operator. Combining the two, a traveling-wave detection method based on VMD-TEO is proposed. The detection method is applied to a two-terminal traveling-wave fault-location algorithm that is unaffected by the traveling-wave speed and the actual line length, from which the fault point is calculated. Experimental simulations with different transition resistances and fault types indicate that the VMD-TEO-based algorithm achieves higher fault-location accuracy in MMC-HVDC.
References

1. Chen, D.F., Fang, X.: Research on fault location algorithm of high voltage transmission line. Electrical Switching 53(1), 27–29 (2015)
2. Pan, M.Y.: Research on fault location of transmission line based on wavelet transform. Harbin Institute of Technology, Harbin (2019)
3. Liao, X.H., Zhao, X.J., Liang, H.N.: A fault location method for power cable based on Hilbert-Huang transform. Power System Protection and Control 45(3), 20–25 (2017)
4. Tang, G., Xu, Z., Xue, Y.L.: Design of a multi-terminal flexible DC transmission control system based on modular multilevel converters. High Voltage Technology 39(11), 2773–2782 (2013)
5. Zhou, X., Wang, L.T.: Traveling wave fault location method of high voltage transmission line based on VMD-DE. Electrical Transmission 36(12), 85–87 (2017)
6. Xie, Z.D., Fu, S.: Fault location method of mine high voltage power cable based on VMD-TEO. Journal of Heilongjiang University of Science and Technology 31(05), 1–6 (2021)
7. Ai, X.Y., Liu, H., Tan, C.: Research on fault location of transmission line based on EEMD and TEO. Journal of Hubei University of Technology 35(01), 45–51 (2020)
8. Gao, Y.F., Zhu, Y.L., Yan, H.Y.: Research on lightning fault location of high voltage transmission line based on VMD and TEO. Journal of Electrical Engineering Technology 31(1), 25–26 (2016)
9. Gao, Y.F., Zhu, Y.L., Yan, H.Y.: A new double-ended traveling wave fault location method for transmission lines. Power System Protection and Control 44(08), 8–13 (2016)
10. Wang, S.S., Zhou, X.X., Tang, G.F.: Mathematical model of a modular multilevel voltage source converter. Chinese Journal of Electrical Engineering 31(24), 1–8 (2011)
11. Zhang, J.P., Zhao, C.Y., Sun, H.F.: Simulation of MMC-HVDC control strategy based on improved topology. Chinese Journal of Electrical Engineering 35(5), 1032–1038 (2015)
12. Wang, L.: Research on fault location of HVDC cable hybrid line based on VMD and TEO. Hubei University of Technology, Wuhan (2019)
13. Chen, C., Ge, Z.C., Xiang, Y.H.: Research on a fault location method for middle end of high voltage transmission line. Electrical Switch 52(01), 55–58 (2014)
Design of Switching Power Supply for Micro Arc Oxidation Process Dan Lei, Xuan Zheng, Shulong Xiong, and Tan Ding
Abstract As a new type of metal-processing technology, micro-arc oxidation is widely used in aviation, aerospace and other fields. The process places specific, demanding requirements on the output levels of the power supply. The switching power supply designed in this paper converts 220 VAC into multiple DC outputs through full-bridge rectification, input filtering, flyback high-frequency conversion, and output rectification and filtering; one output is a closed-loop 12 VDC, and the other four are open-loop 22, 24, 26 and 28 VDC. The design uses DC-DC conversion technology with the SPP06N80C3 power device and the mainstream UC3845 controller, achieving high efficiency, high output-voltage precision and small ripple, and can reliably power the drive, heat-dissipation and other core devices of the micro-arc oxidation process. Keywords Full-bridge rectification · Flyback high-frequency conversion · UC3845 · SPP06N80C3
1 Introduction Micro-arc oxidation (MAO), also known as plasma electrolytic oxidation (PEO), developed from anodic oxidation technology. The process relies on the matched adjustment of electrolyte and electrical parameters: under the instantaneous high temperature and high pressure generated by arc discharge, a modified ceramic coating, consisting mainly of matrix metal oxide supplemented with electrolyte components, is grown on the surface of aluminum, magnesium, titanium and other metals and alloys. Its anti-corrosion and wear-resistance properties are significantly better than those of traditional anodic oxidation coatings [1], so its application to marine vessels and aeronautical components has attracted wide attention. To guarantee the process effect, the power supply is very important. D. Lei · X. Zheng (B) · S. Xiong · T. Ding Wuchang Shouyi University, Wuhan 430064, China e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 Y. S. Shmaliy and A. Nayyar (eds.), 7th International Conference on Computing, Control and Industrial Engineering (CCIE 2023), Lecture Notes in Electrical Engineering 1047, https://doi.org/10.1007/978-981-99-2730-2_5
The power supply in the micro-arc oxidation process consists of two parts: the input power supply and the process power supply. The supply for the driving and cooling devices is the core of the process power supply and requires high efficiency, high precision and strong stability. Aiming at these goals, this paper designs a switching power supply for the micro-arc oxidation process using modern power-electronics technology.
2 Overall Scheme Design In the micro-arc oxidation process, the power-supply requirements of the driving and cooling devices are shown in Table 1. Based on these requirements, the overall scheme of the switching power supply is shown in Fig. 1. 220 VAC is converted into DC by the input circuit, a high-frequency signal is then obtained by flyback conversion, and finally the required direct current is obtained through output rectification and filtering. To make the 12 VDC output sufficiently stable, real-time voltage detection and feedback are carried out at the output, and the control circuit applies PWM to the switching transistor in the DC-DC conversion circuit, achieving closed-loop regulation. Table 1 Power supply specifications
Serial number   Project name                          Parameter
1               Input voltage                         220 VAC
2               Efficiency                            87% (max)
3               Closed-loop output voltage            12 VDC
4               Open-loop output voltage              22, 24, 26, 28 VDC
5               Rated output current                  100 mA
6               Rated power                           6 W
7               Closed-loop output voltage accuracy   ≤2%
8               Closed-loop output voltage ripple     100 mVp-p
9               Protection function                   Overcurrent, short circuit
10              Operating temperature                 Room temperature
Fig. 1 Switch power supply plan as a whole
3 Main Circuit Design 3.1 Input Circuit Design Rectification and filtering are the main parts of the input circuit. The design uses a full-bridge rectifier and a filter combining an electrolytic capacitor with a chip capacitor to obtain sufficiently clean direct current. The electrolytic capacitor suits low- and intermediate-frequency filtering, while the chip capacitor is small, withstands high voltage, and has a very low ESR (equivalent series resistance) near its high-frequency resonance point, making it suitable for medium- and high-frequency filtering.
3.2 Flyback Conversion Circuit Design The flyback conversion circuit is shown in Fig. 2. It has a simple structure, provides electrical isolation between input and output, regulates over a wide voltage range, and readily supports multiple outputs. When the switching transistor TR1 turns on, the current on the primary side of the transformer rises while the output diode on the secondary side is cut off; the transformer stores energy and the load is supplied by the output capacitor (see Fig. 3). When TR1 turns off, the induced voltage on the primary side reverses, the output diode conducts, and the energy in the transformer is delivered through the diode to the load while also charging the capacitor. The MOSFET SPP06N80C3 is selected as the switching transistor: the price is moderate, the drain-source voltage Vdss can reach 900 V, the continuous drain current Id can reach 6 A, the power dissipation Pd can reach 83 W, the on-resistance is 900 mΩ, and the threshold voltage is 3.9 V, which meets the requirements well. The parameter calculation and fabrication of the high-frequency transformer are critical in flyback converter design. Taking the closed-loop output as an example, we set power P = 1.2 W, switching duty ratio D = 0.3, switching frequency
Fig. 2 Flyback conversion circuit
Fig. 3 Output circuit
f_s = 100 kHz, and efficiency eff = 0.9. Let I_PK be the peak current flowing through the inductance, L_m the primary-side excitation inductance, L_eq the secondary-side inductance, I_1 the primary-side current, I_2 the secondary-side current, and n = I_2/I_1. The inductive energy stored in one period T is:

$$W = \frac{1}{2} I_{PK}^2 L_m \qquad (1)$$
According to Formula (1), the power in one cycle can be written as:

$$P_1 = \frac{1}{2} I_{PK}^2 L_m f_s \qquad (2)$$
In the actual circuit there are losses in power transfer. With the assumed efficiency eff, the actual power can be calculated as:

$$P = \frac{1}{2} I_{PK}^2 L_m f_s \cdot eff \qquad (3)$$
And because:

$$\frac{di}{dt} = \frac{V_{in}}{L_m} \qquad (4)$$
where V_in is the voltage across the inductor, the peak current of the primary-side inductance can be written as:

$$I_{PK} = \frac{V_{in} T_{on}}{L_m} \qquad (5)$$
where T_on is the on-time of the switching transistor in one cycle T. Combining Eqs. (3) and (5), the primary-side excitation inductance is obtained as:

$$L_m = \frac{V_{in}^2 T_{on}^2 f_s \cdot eff}{2P} \qquad (6)$$
Meanwhile, according to the inductance conversion between the primary and secondary sides of the transformer:

$$\frac{L_{eq} I_2^2}{2} = \frac{L_m I_1^2}{2} \qquad (7)$$
Then the secondary-side inductance is obtained:

$$L_{eq} = \frac{L_m}{n^2} \qquad (8)$$
According to the above calculation, the turns ratio of the secondary side to the primary side is about 1:10 for the closed loop. Raising the corresponding turns ratio appropriately yields the 22–28 V open-loop levels. The transformer winding must be properly insulated.
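The design numbers above can be checked with a short calculation. This is a sketch under one assumption of our own: the DC input V_in is taken as the peak of the rectified 220 VAC mains (≈311 V), which the paper does not state explicitly.

```python
import math

# Closed-loop design targets from the text
P, D, fs, eff = 1.2, 0.3, 100e3, 0.9
# Assumed DC input: peak of rectified 220 VAC (our assumption, not from the paper)
Vin = 220 * math.sqrt(2)          # ~311 V
Ton = D / fs                      # switch on-time per cycle: 3 us

Lm = Vin ** 2 * Ton ** 2 * fs * eff / (2 * P)   # Eq. (6): excitation inductance
Ipk = Vin * Ton / Lm                            # Eq. (5): peak primary current
n = 10                                          # current/turns ratio from the text
Leq = Lm / n ** 2                               # Eq. (8): secondary-side inductance

# Consistency check against Eq. (3): the stored energy per cycle times fs and
# efficiency reproduces the 1.2 W target.
assert abs(0.5 * Ipk ** 2 * Lm * fs * eff - P) < 1e-9
```

With these assumptions, L_m works out to roughly 33 mH, I_PK to roughly 29 mA, and L_eq to about 1/100 of L_m, consistent with the 1:10 turns ratio quoted above.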
3.3 Output Circuit Design Taking closed-loop as an example, the output circuit is shown in Fig. 3. Schottky diode realizes high frequency rectification, capacitor realizes output filtering, and inductor plays the role of continuous flow. In order to reduce the temperature rise of inductor, three groups of parallel mode is used in the design. Other four open loop output circuit design idea is the same, only need to fine-tune the parameters can be.
4 Control Circuit Design

4.1 Control Chip Selection and Peripheral Circuit Design

The main control chip is the UC3845, a high-performance fixed-frequency current-mode controller designed for DC-DC applications that offers a cost-effective solution with few external components [2]. The UC3845 features a trimmed oscillator, precise duty-ratio control [3], a temperature-compensated reference, a high-gain error amplifier, and overcurrent and short-circuit protection. Its current-sensing comparator and high-current totem-pole output make it ideal for driving a power MOSFET [4, 5]. In practical application, the oscillation frequency and duty ratio can be adjusted via the resistance RT between pin 4 and pin 8 and the capacitance CT between pin 4 and ground [6, 7]. The peripheral circuit design of the UC3845 is shown in Fig. 4.

Fig. 4 UC3845 peripheral circuit design

Fig. 5 Optocoupler isolation circuit
4.2 Optocoupler Isolation Circuit Design The optocoupler isolation circuit amplifies the control signal and ensures safe isolation between the main circuit and the control circuit. The EL817S-C is selected as the optocoupler; the specific circuit is shown in Fig. 5.
4.3 Implementation of Closed-Loop Control With the peripheral circuit of the main controller and the optocoupler isolation in place, the output voltage only needs to be sampled, processed, and fed back to pin 2 of the UC3845, where the internal error amplifier compares it with the reference to perform pulse-width modulation (PWM), ensuring stable, accurate 12 VDC.
5 Test and Analysis After PCB design (Fig. 6), board fabrication and soldering, the switching power supply shown in Fig. 7 is obtained. The test platform is loaded with four groups of 47 Ω resistors; the load can be adjusted by the number of groups connected in series.
5.1 Output Voltage Test and Accuracy Analysis The output voltage of the switching power supply under different loads is measured with a multimeter. Focusing on the closed-loop output, three circuit boards were measured and the data recorded in Table 2. From the formula output-voltage accuracy = (actual output voltage − rated voltage)/rated voltage, the minimum magnitude is 0.083% and the maximum 1.16%, both within 2%. Fig. 6 PCB design
Fig. 7 Physical picture of switching power supply
Table 2 Output voltages under different loads

Circuit board   Load A    Load B
1               12.01 V   11.97 V
2               11.94 V   11.86 V
3               12.04 V   12.00 V
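The accuracy figures quoted above follow directly from the Table 2 readings; a quick check:

```python
rated = 12.0
measured = [12.01, 11.97, 11.94, 11.86, 12.04, 12.00]   # Table 2 readings (V)

# output-voltage accuracy = (actual output voltage - rated voltage) / rated voltage
accuracy_pct = [(v - rated) / rated * 100 for v in measured]
worst = max(abs(a) for a in accuracy_pct)   # ~1.17 %, within the 2 % specification
```

The worst case comes from the 11.86 V reading and stays comfortably inside the ≤2% specification of Table 1.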
Fig. 8 Output current of closed loop
Fig. 9 Output current of open loop (one circuit)
5.2 Output Current Test and Power Analysis With load A, the closed-loop output current measured with a multimeter is 98.5 mA (Fig. 8), and the output current of one open-loop circuit is 47.21 mA (Fig. 9). From P = UI, the closed-loop output power is about 1.2 W; adding the open-loop circuits, there are five outputs in total, and the total output power is about 6 W.
5.3 Efficiency Analysis Using a multimeter, the primary-side current of the transformer is measured as 6.09 mA and the primary-side voltage as 219.06 V. Taking the closed loop as an example, the efficiency formula η = (V_o · I_o)/(V_i · I_i) gives a result of about 88.6%, where V_o is the output voltage, I_o the output current, V_i the primary-side voltage, and I_i the primary-side current.
Fig. 10 Output voltage ripple
5.4 Ripple Test Switching the oscilloscope probe to AC coupling and measuring the output voltage gives the waveform shown in Fig. 10, i.e. the output voltage ripple. According to the oscilloscope readout, the output voltage ripple is 98 mV.
5.5 Practical Use Test The switching power supply was installed on a motherboard of the micro-arc oxidation power supply for a practical test. After power-on, the measured driving waveform was normal, as shown in Fig. 11, and the power supply as a whole worked normally, as shown in Fig. 12. Fig. 11 Driving waveform
Fig. 12 Power supply working scenario
6 Conclusion Measurements and engineering operation show that the switching power supply works normally. With 220 VAC mains input, the output-voltage accuracy has been measured repeatedly to be within 2%, the output power is about 6 W, the output ripple is only 98 mV, and the efficiency reaches 88.6%, fully meeting the requirements of the micro-arc oxidation process.
References

1. Li, H.C., Wang, H.A., Ma, X.F.: Research status of micro-arc oxidation of aluminum alloy. Hot Working Technol. 50(22), 1–5+13 (2021)
2. Ge, X.H.: Design and simulation of single-ended flyback circuit based on UC3845. J. Anhui Electron. Inf. Tech. Coll. 17(6), 19–22+31 (2018)
3. Gu, W.K.: Design of single-end flyback switching power transformer. Commun. Power Technol. 38(1), 59–62 (2021)
4. Cao, Z.X.: Research and design of multi-output flyback switching power supply. Electron. Measur. Tech. 43(4), 11–15 (2020)
5. Yin, C.B.: Modeling of electromagnetic interference and analysis of suppression method for flyback switching power supply. Home Appliance Technol. (2), 78–82 (2021)
6. Zhang, X.F., Tan, W.J.: Brief introduction of type selection and design steps of flyback transformer for switching power supply. Electron. Test. (9), 113–114 (2017)
7. Xiong, P.: Design of isolated flyback switching power supply. Electron. World (23), 114–116 (2021)
Condition Monitoring System of Key Equipment for Beamline in HLS Based on Multi-source Information Fusion Qun Liu, Xiao Wang, Liuguo Chen, Bo He, and Gongfa Liu
Abstract Based on the existing beamline control system of Hefei Light Source (HLS), this paper introduces a beamline condition monitoring system. By monitoring the vacuum, vibration acceleration, air pressure, voltage, water flow, temperature, humidity and other signals of key beamline equipment such as shutters and vacuum valves at HLS, the system obtains multi-source information on the equipment status and saves it to files. Through comprehensive analysis, the system not only provides timely alarms for typical conventional faults but can also comprehensively judge the health status of the equipment. In particular, by analyzing the vibration acceleration signal of the opening and closing process of a valve or shutter, the long-term health trend of the key equipment can be predicted. With this system, the fault diagnosis ability and maintenance efficiency for key beamline equipment are improved, and the workload of operation and maintenance engineers is reduced. Keywords Monitoring System · Multiple-information · Beamline · Alarm · Fault Diagnosis · Key Equipment
1 Introduction Hefei Light Source (HLS) is a dedicated vacuum ultraviolet and soft X-ray synchrotron radiation light source. Synchrotron radiation is an excellent light source with high intensity, high brightness, continuous frequency spectrum, good directivity and polarization, pulsed time structure and a clean vacuum environment. It can be used in many basic and applied research fields such as physics, chemistry, materials science, life science, information science, mechanics, geology, medicine, pharmacy, agriculture, environmental protection, metrology, lithography and ultramicro processing.

Q. Liu (B) · X. Wang · L. Chen · B. He · G. Liu National Synchrotron Radiation Laboratory, University of Science and Technology of China, Hefei 230029, Anhui, China e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 Y. S. Shmaliy and A. Nayyar (eds.), 7th International Conference on Computing, Control and Industrial Engineering (CCIE 2023), Lecture Notes in Electrical Engineering 1047, https://doi.org/10.1007/978-981-99-2730-2_6

Fig. 1 The distribution of beamlines in HLS

In the synchrotron radiation device, the beamlines distributed along the outside of the electron storage ring are the "bridge" between the user experimental stations and the storage ring. HLS currently has 10 beamlines and experimental stations; their distribution is shown in Fig. 1. The beamline system is a complex system involving optics, vacuum, motion mechanisms, control and other disciplines. It is composed of optical shutters, vacuum valves, ion pumps, mirror boxes, vacuum gauges and other key equipment. The shutter is mainly used to block or absorb synchrotron radiation light and to protect downstream equipment and personnel. The vacuum valve is mainly used to isolate the vacuum of different vacuum pipelines. Since the vacuum system of the beamline is connected with the storage ring vacuum system, when a vacuum leak occurs somewhere, the vacuum valves at both ends of the leak source close automatically, so that a beamline station exposed to the atmosphere does not affect the normal operation of the storage ring and the other beamline stations; this effectively isolates the leaking section from the beamline station, the storage ring and the other sections of the beamline. Therefore, comprehensive monitoring of the operating status of these key pieces of equipment and rapid fault diagnosis are important guarantees for the safe and efficient operation of the beamline [1–3]. In the monitoring system of the biomacromolecule beamline at BSRF, the status of the cooling water was monitored and alarms were given in time [4]. The BL13W and BL17U beamlines of SSRF monitored the temperature of the beamline monochromator and carried out fault diagnosis of the equipment [5]. These systems, however, each monitor only a single data source.
In general, China still relatively lacks condition monitoring and fault analysis and diagnosis systems based on multi-source information fusion for the key equipment of beamlines. Once problems occur in key beamline equipment such as vacuum valves and shutters, maintainers have to go to the site immediately to check and solve them. Multi-source information fusion is a multi-level, multi-faceted processing process, including detection, correlation, combination and estimation of multi-source data, so as to improve the accuracy of state and fault judgment. By obtaining various sensing information of
Fig. 2 The overall architecture of system
the key equipment on the beamline and conducting comprehensive analysis, we can accurately identify common equipment faults and also predict the long-term health trend of the equipment, so as to improve its operation and maintenance [6–9].
2 Design Scheme 2.1 Overall Architecture The overall architecture of the system is shown in Fig. 2. It mainly includes three layers: the upper computer layer, the data acquisition control layer and the equipment layer. The equipment layer is the lowest layer and mainly consists of multi-channel sensors for vibration acceleration, voltage, air pressure, temperature and humidity, as well as vacuum gauges, installed at key beamline equipment such as shutters and vacuum valves. The data acquisition control layer in the middle is mainly the data acquisition control unit developed in-house, which includes a multi-channel signal input module, an MCU module, communication interfaces and a status indication module. The top layer is the monitoring and analysis software platform of the system.
2.2 Data Acquisition Control Unit The data acquisition control unit is the core part of the whole design, which mainly includes the input module of multi-source sensor signals, the micro control unit module (MCU), the interface communication and status indication module, etc.
Fig. 3 Air pressure input circuit
Fig. 4 AC220V voltage input detection circuit
Multi-Source Input Module. The beamline shutter and vacuum valve need 0.6 MPa air pressure and 220 V AC voltage for normal operation. The output signal of the air pressure sensor is a 4–20 mA current signal, so current-to-voltage conversion and amplification are required; the LMV358 chip is selected. The specific circuit design is shown in Fig. 3, and the AC 220 V voltage input signal detection circuit is shown in Fig. 4. In this design, the ADXL355 chip is used to detect the non-linear and non-stationary vibration signals generated during valve and shutter opening and closing, as shown in Fig. 5. The digital-output ADXL355 is a low-noise-density, low 0 g offset drift, low-power, 3-axis accelerometer with selectable measurement ranges. It offers industry-leading noise, minimal offset drift over temperature and long-term stability, enabling precision applications with minimal calibration [10]. The ±2 g range is selected for our needs. The SHT30 chip is selected to monitor temperature and humidity, as shown in Fig. 6. MCU Module. The MCU module is the core part of the data acquisition control unit. It controls the operation of each module and coordinates the work between the modules. The micro control unit (MCU) module uses an STM32 chip, as shown in Fig. 7. Fig. 5 Vibration acceleration signal input circuit
Fig. 6 Temperature and humidity input circuit
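Once the LMV358 stage has been digitized, the 4–20 mA pressure channel described above reduces to a linear scaling in software. A minimal sketch, assuming a hypothetical 0–1 MPa sensor span (the actual sensor range is not stated in the paper):

```python
# Linear 4-20 mA loop scaling for the air-pressure channel.
# The 0-1 MPa full-scale range is an assumption for illustration.

def current_to_pressure(i_ma, p_min=0.0, p_max=1.0):
    """Map a 4-20 mA loop current to pressure in MPa."""
    if not 4.0 <= i_ma <= 20.0:
        # a reading outside 4-20 mA indicates a sensor fault or open loop
        raise ValueError("loop current outside 4-20 mA")
    return p_min + (i_ma - 4.0) / 16.0 * (p_max - p_min)

# the 0.6 MPa working pressure corresponds to 13.6 mA on a 0-1 MPa sensor
print(current_to_pressure(13.6))
```

A useful side effect of the 4–20 mA convention is that a broken wire reads 0 mA and is immediately distinguishable from a genuine zero-pressure reading.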
Fig. 7 The MCU module uses STM32 chip
Interface Communication and Status Indication Modules. A variety of communication interfaces are designed to communicate with the upper computer, such as a wired Ethernet interface, a wireless network interface, and USB or RS232 serial communication. The communication interface circuits are shown in Figs. 8, 9 and 10. The indicator circuit of the voltage and gas pressure status is shown in Fig. 11.
Fig. 8 The wired Ethernet interface control chip chooses W5500 chip
Fig. 9 The chip of ESP8266 is selected for wireless internetwork interface
Fig. 10 Circuit diagram of USB and RS232 serial communication interfaces
Fig. 11 The indicator circuit of voltage and pressure status
Fig. 12 Main front panel
3 The Monitoring Software of the Host Computer The monitoring software of the host computer stores the comprehensive status information and alarm information of key equipment such as vacuum valves and shutters on the beamline in the form of files. The stored information includes the vibration waveform of the switching process of the key equipment, the voltage and air pressure of the working condition, the temperature and humidity of the working environment, and alarm records. The host computer monitoring software also preprocesses and analyzes the collected data and provides a visual interface for equipment information and fault status. Here we use the LabVIEW software of NI. The main front panel and block diagram of the LabVIEW monitoring software are shown in Figs. 12 and 13. As shown in Figs. 14 and 15, according to the parameters monitored by the system, we can calculate the switching time of the vacuum valve and shutter under different working conditions of air pressure and voltage.
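The switching-time calculation mentioned above can be sketched outside LabVIEW as a simple threshold crossing on the sampled vibration record: the switching duration is taken as the span between the first and last samples whose magnitude exceeds a noise threshold. The sample rate, threshold and signal below are illustrative only, not values from the paper.

```python
# Minimal sketch (not the LabVIEW implementation) of estimating the
# valve/shutter switching time from a vibration-acceleration record.

def switching_time(samples, fs_hz, threshold):
    """samples: acceleration values; fs_hz: sample rate; returns seconds."""
    active = [i for i, a in enumerate(samples) if abs(a) > threshold]
    if not active:
        return 0.0  # no motion detected above the noise floor
    return (active[-1] - active[0]) / fs_hz

sig = [0.01, 0.02, 0.9, 1.2, -1.1, 0.8, 0.03, 0.01]
print(switching_time(sig, fs_hz=1000.0, threshold=0.5))
```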
Fig. 13 Main Block diagram
Fig. 14 Switching time of vacuum valve and shutter under AC 220 V and different gas pressure
Fig. 15 Switching time of vacuum valve and shutter under 0.6 MPa gas pressure and different AC voltage
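The switching times tracked in Figs. 14 and 15 can feed a simple trend extrapolation of the kind the paper uses for health prediction. In this sketch a line is fitted to the opening time over accumulated cycles and extrapolated to a maintenance limit; the data points and the 1.5 s limit are invented for demonstration.

```python
# Illustrative health-trend sketch: least-squares line fit of opening time
# versus accumulated switching cycles, then extrapolation to a limit.

def linear_fit(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    slope = num / den
    return slope, my - slope * mx  # slope, intercept

cycles = [100, 200, 300, 400, 500]
open_s = [1.00, 1.05, 1.10, 1.15, 1.20]   # opening time drifting upward
k, b = linear_fit(cycles, open_s)
cycles_to_limit = (1.5 - b) / k            # cycles at which 1.5 s is reached
print(round(k, 6), round(cycles_to_limit))
```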
4 Conclusion The system comprehensively monitors and visually displays the vacuum degree, working conditions, valve switching status, opening and closing time, and alarm status of key beamline equipment such as vacuum gauges, vacuum valves and shutters. It can quickly give alarms for routine faults, such as abnormal vacuum degree, insufficient air pressure, insufficient voltage, and aging of the solenoid coils of the vacuum valves. At the same time, through feature extraction and analysis of the vibration acceleration signal during the opening or closing process of
the valve and shutter, the health trend of the online valves of the beamline can be predicted from the change of the valve opening and closing time and the cumulative number of opening and closing operations. Acknowledgements This research is supported by the Fundamental Research Funds for the Central Universities (WK2310000086).
References 1. Zhou, H., Pan, P., Yu, J.: Condition monitoring and fault diagnosis of power equipment. In: IOP Conference Series: Earth and Environmental Science, vol. 252 (2019) 2. Liu, Z.P., Zhang, L.: A review of failure modes, condition monitoring and fault diagnosis methods for large-scale wind turbine bearings. Measurement 149, 1–22 (2020) 3. Takahashi, S., Sano, M., Watanabe, A.: Prediction of vacuum deterioration caused by vacuum accident in the beamline. Vacuum 155, 325–335 (2018) 4. Guo, X., et al.: The centralized cooling-water interlock monitoring system of beam lines at BSRF. Nuclear Electron. Detect. Technol. 32(1), 407–410 (2012) 5. Sun, H., et al.: Analysis of beamline running state and forewarning by using MATLAB in SSRF. Nuclear Tech. 39(7), 21–25 (2016) 6. Zhao, Y., et al.: Git-based version control for beamline control system at the Shanghai Synchrotron Radiation Facility. In: 2018 5th International Conference on Systems and Informatics, ICSAI 2018, pp. 134–138 (2019) 7. Li, Y.X., et al.: Fault diagnosis of multi-source information fusion. Comput. Digit. Eng. 44(7), 1250–1254 (2016) 8. Zhou, Y.J., et al.: Fault diagnosis of refrigeration equipment based on data mining and information fusion. J. Vibr. Meas. Diagn. 41(2), 392–398 (2021) 9. Hajizadeh, N.R., Franke, D., Svergun, D.I.: Integrated beamline control and data acquisition for small-angle X-ray scattering at the P12 BioSAXS beamline at PETRA III storage ring DESY. J. Synchrotron Radiat. 25, 906–914 (2018) 10. ADI Official Website. https://www.analog.com/en/products/adxl355.html. Accessed 7 Dec 2022
A High Performance Control Method of PMSM Yan Xing and Zhihui Li
Abstract The permanent magnet synchronous motor has been widely used in industry. Direct torque control has become a popular topic in AC speed regulation systems because of its fast response and high robustness. Aiming at the problem of large flux and torque ripple in the traditional direct torque control of the permanent magnet synchronous motor, a control method based on sector optimization is proposed. This method designs six new resultant vectors based on the six original vectors, giving twelve voltage vectors in all. Under different conditions, the space voltage vectors are automatically switched through the switch table, so as to effectively decrease the flux and torque ripples. Using Matlab/Simulink, the speed, flux and torque responses of the permanent magnet synchronous motor under different control methods are simulated, which proves the effectiveness of the proposed control method in improving the stability of the direct torque control system and its dynamic response performance. Keywords Permanent magnet synchronous motors (PMSM) · Twelve voltage vectors control · Direct torque control (DTC) · Flux and torque ripples
1 Introduction Takahashi, Noguchi and Depenbrock proposed direct torque control (DTC) in the 1980s [1, 2]. DTC is formulated in the stator coordinate system and obtains the controller's switching signals by comparing the given flux linkage and torque with the actual flux linkage and torque. This method has the advantages of insensitivity to motor parameters, fast torque response, strong robustness and a simple control structure [3–5], and has therefore become a popular research topic both in China and abroad [6–8]. However, in the conventional DTC system, the switch table of voltage vectors has some disadvantages, such as large torque ripple and poor speed control performance [9–12]. In this paper a twelve-vector control method is proposed to improve the performance of the system. Firstly, the behaviour of the stator flux in the DTC system is studied in detail, and the reason for the large system ripple is analyzed. Secondly, the basic principle of the twelve-vector strategy is described, and the corresponding switch table of the twelve space voltage vectors and its implementation method are given. Theoretical analysis and simulation experiments show that the flux increment of the twelve-vector method is basically symmetrical at the section boundaries, which can effectively reduce the flux and torque ripple.

Y. Xing (B) Liaoning General Aviation Academy, Shenyang Aerospace University, Shenyang, China e-mail: [email protected] Y. Xing · Z. Li School of Electronic and Information Engineering, Shenyang Aerospace University, Shenyang, China © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 Y. S. Shmaliy and A. Nayyar (eds.), 7th International Conference on Computing, Control and Industrial Engineering (CCIE 2023), Lecture Notes in Electrical Engineering 1047, https://doi.org/10.1007/978-981-99-2730-2_7
2 Flux Performance Analysis of Traditional DTC Ignoring the voltage drop across the stator resistance, the relationship between the stator voltage and the stator flux can be written as

u_s = R_s i_s + dψ_s/dt ≈ dψ_s/dt  (1)
Assuming that T is the sampling period and the flux vector ψ_s rotates counterclockwise in sector θ_1, vectors U_2 and U_3 can make |ψ_s| increase and decrease, respectively. Denote the angle between the stator flux vector and U_2 as θ_uψ1, the angle between the stator flux vector and U_3 as θ_uψ2, and the voltage space vector amplitude as |u_s|. Then

|∆ψ_s|↑ = |u_s|T cos θ_uψ1 ∈ (0, 0.866|u_s|T)  (2)

|∆ψ_s|↓ = |u_s|T cos θ_uψ2 ∈ (−0.866|u_s|T, 0)  (3)
According to Eqs. (2) and (3), the change curve of the flux linkage amplitude in sector θ_1 can be drawn as in Fig. 1. When the voltage drop across the stator resistance is considered, the relationship between the stator voltage and the stator flux becomes

dψ_s/dt = u_s − R_s i_s = E_s  (4)
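Eqs. (2) and (3) can be checked numerically: as the flux vector sweeps the 60° sector, the angle between the flux vector and the applied vector varies from 30° to 90°, so the normalized increment cos θ spans (0, 0.866). A small sketch (illustrative only):

```python
# Numeric check of the bounds in Eqs. (2)-(3): the per-period flux
# increment normalized by |u_s|T is cos(theta), theta in (30, 90) degrees.
import math

def increment_ratio(theta_deg):
    """|delta psi| / (|u_s| T) for a given flux-to-voltage-vector angle."""
    return math.cos(math.radians(theta_deg))

ratios = [increment_ratio(t) for t in (30, 60, 90)]  # boundary, middle, boundary
print([round(r, 3) for r in ratios])
```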
Fig. 1 The changing curve of stator flux amplitude at traditional DTC: (a) the curve of the increase in amplitude, (b) the curve of the decrease in amplitude

Fig. 2 Effect of voltage vector on stator flux amplitude
In summary, the influence of the stator voltage vector on the stator flux amplitude when the stator resistance is considered can be obtained as shown in Fig. 2, where α is the angle between the stator flux vector and the sector boundary line, and γ is the angle between the stator voltage vector and the stator back-EMF vector.
3 Problems of the Switch Table in Traditional DTC In the traditional DTC system, the space voltage vector U_3 is selected to reduce the flux linkage and increase the torque when ψ_s is in sector θ_1. In fact, as shown in Fig. 2, the influence of U_3 on ψ_s1 is to increase both the flux linkage and the torque when α is less than γ. Thus, it is difficult to select a voltage vector that reliably reduces the flux. In addition, it can be seen from Fig. 1 that the increase and decrease of the flux amplitude within a sampling period differ significantly. Therefore, a sector is divided into three parts for analysis. The variation range of the stator flux amplitude caused by the voltage vectors in one sector is shown in Table 1.
Table 1 Variation range of flux in sector θ_i

θ              Increase range of flux amplitude    Decrease range of flux amplitude
(−30°, −10°)   (0, 0.342|u_s|T)                    (−0.866|u_s|T, −0.643|u_s|T)
(−10°, 10°)    (0.342|u_s|T, 0.643|u_s|T)          (−0.643|u_s|T, −0.342|u_s|T)
(10°, 30°)     (0.643|u_s|T, 0.866|u_s|T)          (−0.342|u_s|T, 0)
Fig. 3 Twelve sectors subdivision graph of stator flux space
4 DTC Based on Twelve Vectors and Its Performance 4.1 DTC Based on Twelve Vectors According to the previous analysis, it can be seen that the effect of space voltage vector on stator flux is unbalanced in the same sector, which will inevitably lead to uneven changes of stator flux vector trajectory in a sector, and affect the performance of flux and torque control. To solve these problems, a direct torque control algorithm based on twelve vectors is proposed in this paper. In this algorithm, six new voltage vectors are synthesized, where a new voltage vector is synthesized on the angular bisector of two adjacent basic voltage vectors. Twelve space voltage vectors are obtained by combining the new voltage vectors and original six basic voltage vectors. The twelve new space voltage vectors are distributed 30° apart in space, and their amplitudes are equal to each other and smaller than the amplitudes of the original six basic voltage vectors. And the original six sectors are subdivided into twelve sections, as shown in Fig. 3. According to the influence of voltage vector on flux and torque in each segment, the switch table of DTC based on twelve space voltage vectors can be shown in Table 2.
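The vector synthesis described above can be sketched numerically. Assuming the new vector is formed by applying two adjacent basic vectors (60° apart) for half a switching period each — a reading the paper does not spell out — the average vector lies on their angular bisector with amplitude cos 30° ≈ 0.866 of a basic vector, consistent with the statement that the new amplitudes are equal to each other and smaller than the originals.

```python
# Sketch of synthesizing a new vector on the angular bisector of two
# adjacent basic vectors, assuming a half-and-half duty split.
import cmath, math

U = 1.0  # basic vector amplitude, normalised
u1 = U * cmath.exp(1j * math.radians(0))
u2 = U * cmath.exp(1j * math.radians(60))
u_syn = (u1 + u2) / 2  # average applied vector over one switching period

print(round(abs(u_syn), 3), round(math.degrees(cmath.phase(u_syn)), 1))
```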
Table 2 Switch table of twelve space voltage vectors

ϕ   τ    θ1    θ2    θ3    θ4    θ5    θ6    θ7    θ8    θ9    θ10   θ11   θ12
1   1    U3    U4    U5    U6    U7    U8    U9    U10   U11   U12   U1    U2
1   0    U11   U12   U1    U2    U3    U4    U5    U6    U7    U8    U9    U10
0   1    U5    U6    U7    U8    U9    U10   U11   U12   U1    U2    U3    U4
0   0    U9    U10   U11   U12   U1    U2    U3    U4    U5    U6    U7    U8
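Because the vector number in the twelve-vector switch table advances with the sector by a fixed offset for each (ϕ, τ) combination, the whole table collapses to a modular lookup. A sketch based on our reconstruction of Table 2 (the offsets below are an assumption derived from that reconstruction):

```python
# Modular lookup equivalent to the twelve-vector switch table:
# offsets (in 30-degree steps) relative to the sector number, mod 12.
OFFSET = {(1, 1): 2, (1, 0): 10, (0, 1): 4, (0, 0): 8}

def select_vector(phi, tau, sector):
    """Return the voltage vector number (1..12) for sector 1..12."""
    return (sector - 1 + OFFSET[(phi, tau)]) % 12 + 1

# sector theta_1: flux up/torque up -> U3; flux up/torque down -> U11
print(select_vector(1, 1, 1), select_vector(1, 0, 1),
      select_vector(0, 1, 1), select_vector(0, 0, 1))
```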
Fig. 4 The changing curve of stator flux amplitude at DTC based on twelve vectors: (a) the curve of the increase in amplitude, (b) the curve of the decrease in amplitude
4.2 Performance Analysis of Stator Flux The curve of the flux amplitude within a section when the flux vector rotates counterclockwise is drawn according to Eqs. (5) and (6), as shown in Fig. 4.

|∆ψ_s|↑ = |u_s12|T cos θ_uψ1 ∈ (0.259|u_s12|T, 0.707|u_s12|T)  (5)

|∆ψ_s|↓ = |u_s12|T cos θ_uψ2 ∈ (−0.707|u_s12|T, −0.259|u_s12|T)  (6)
From Figs. 1 and 4 it can be seen that, within a sampling period, the flux change range of traditional DTC is (0, 0.866|u_s|T), while that of DTC based on twelve vectors is (0.259|u_s12|T, 0.707|u_s12|T); that is, the stator flux change range of DTC based on twelve vectors is smaller than that of traditional DTC. Besides, the increasing and decreasing ranges of the flux vector in each section are relatively balanced in DTC based on twelve vectors, so the flux trajectory is improved.
5 Experimental Results and Analysis In order to verify the validity of the proposed method, we simulated the PMSM system under both traditional DTC and DTC based on twelve vectors. The simulation conditions are set as follows: the reference speed is 1200 r/min, the reference stator flux is 0.2 Wb, and the load torque is 1 N·m. The simulation results are shown in Figs. 5 and 6.

Fig. 5 Simulation waveforms of traditional DTC: (a) motor speed, (b) stator flux linkage, (c) electromagnetic torque, (d) stator current

Fig. 6 Simulation waveforms of DTC based on twelve vectors: (a) motor speed, (b) stator flux linkage, (c) electromagnetic torque, (d) stator current

The simulation waveforms of the traditional DTC method and the twelve-vector DTC method under the same conditions, including the speed, stator flux, torque and stator current waveforms, are shown in Figs. 5 and 6, respectively. It can be seen from the figures that the speed under both control methods reaches the reference value of 1200 r/min within 0.2 s and then runs stably at that value, which shows that the DTC based on twelve vectors maintains the advantage of fast dynamic response. Because the twelve-vector control method divides the sectors more finely, the flux ripple of the twelve-vector control system is much smaller than that of traditional DTC, as shown in Fig. 5(b) and Fig. 6(b): the stator flux trajectory is closer to the ideal flux circle, and the ring width of the actual flux circle is narrowed. Due to the more accurate stator flux control, the stator flux ripple decreases and the electromagnetic torque ripple is reduced accordingly, as shown in Fig. 6(c). It can be seen from Fig. 6(d) that the three-phase stator current of the DTC based on twelve vectors is symmetrical. Compared with Fig. 5(d), the stator current in Fig. 6(d) has less harmonic content and lower ripple, which shows that the performance of the control system is improved. From the above comparison it can be seen that the proposed twelve-vector control method has the same response speed as traditional DTC, while the flux ripple, torque ripple and current harmonic content are significantly reduced. Fourier analysis of the stator current waveforms under the two control methods is carried out, and the results are shown in Fig. 7.
Fig. 7 Stator current output harmonics with different control strategies: (a) traditional DTC, fundamental (40 Hz) = 2.349, THD = 10.19%; (b) DTC based on twelve vectors, fundamental (40 Hz) = 2.226, THD = 7.90%
Table 3 The comparison of system performance with different strategies

Performance          Traditional DTC    DTC based on twelve vectors
Flux ripple (Wb)     0.21               0.12
Torque ripple (Nm)   0.45               0.23
THD (%)              10.19              7.90
To facilitate the analysis, the comparison of system performance with different strategies is shown in Table 3.
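For reference, the relative improvements implied by Table 3 can be computed directly:

```python
# Percentage reduction of each metric when moving from traditional DTC
# to the twelve-vector DTC, using the values in Table 3.
traditional = {"flux_ripple_Wb": 0.21, "torque_ripple_Nm": 0.45, "THD_pct": 10.19}
twelve = {"flux_ripple_Wb": 0.12, "torque_ripple_Nm": 0.23, "THD_pct": 7.90}

reduction = {k: round(100 * (traditional[k] - twelve[k]) / traditional[k], 1)
             for k in traditional}
print(reduction)
```

The flux and torque ripples are roughly halved, while the current THD drops by about a fifth.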
6 Conclusion According to the analysis in this paper, the flux linkage and torque ripple in the traditional DTC system are closely related to the characteristics of the applied voltage vector, and the selection and control of the voltage vector must be improved in order to obtain a high-performance, low-ripple PMSM direct torque control system. In this paper a novel DTC method based on twelve voltage vectors was proposed. The switch table and the variation range of the flux linkage amplitude of the twelve space voltage vectors are derived, and the performance of the twelve-vector DTC system of the PMSM is theoretically analyzed. The motor speed response and the torque and flux waveforms under the different control methods are simulated in Simulink. The simulation results show that the twelve-voltage-vector control method designed in this paper not only responds to the reference speed and torque quickly, but also reduces the flux and torque ripple, reflecting the fast tracking ability of the control system. The optimization effect is thus obvious and the system stability is improved by the proposed method.
Funding This work has been supported by the Liaoning Department Committee fund (NO. LJKZ0224).
References 1. Takahashi, I., Noguchi, T.: A new quick-response and high-efficiency control strategy of an induction motor. IEEE Trans. Ind. Appl. 22(5), 820–827 (1986) 2. Depenbrock, M.: Direct self-control (DSC) of inverter-fed induction machine. IEEE Trans. Power Electron. 3(4), 420–429 (1988) 3. Xing, Y., Wang, X., Liu, Y., et al.: A novel stator flux linkage observer of permanent magnet synchronous motor. J. Northeastern Univ. Nat. Sci. 34(6), 766–769 (2013) 4. Andreescu, G.D., Pitic, C.I., Blaabjerg, F., et al.: Combined flux observer with signal injection enhancement for wide speed range sensorless direct torque control of ipmsm drives. IEEE Trans. Energy Convers. 23(2), 393–402 (2008) 5. Wang, X.L., Luo, Z.P., Zhang, S.Q., et al.: Direct torque control of squirrel cage motor based on sector optimization. Micromotors 55(9), 80–88 (2022) 6. Kan, C., Chu, C., Hu, Y., et al.: Study on the asynchronous performance of changing-poles BDFM with the field-circuit coupled based on the time stepping finite element analysis. IEEE Trans. Electr. Electron. Eng. 14(9), 1389 (2019) 7. Zhang, W., Xu, A.D., Han, L.L.: Minimising torque ripple of SRM by applying DB-DTFC. IET Electr. Power Appl. 11(13), 1883–1890 (2019) 8. Wang, X., Xing, Y., Liu, Y., et al.: Research into speed sensorless direct torque control system of permanent magnet synchronous motor. J. Northeastern Univ. Nat. Sci. 33(5), 618–621 (2012) 9. Li, Y.D., Wang, S.T., Ba, W.L.: Novel PDTC design for induction motor driving system. Micromotors 53(9), 89–94 (2020) 10. Yan, W.C., Jiang, Z., Fan, H.S.: Sensorless enhanced direct flux control for interior permanent magnet synchronous motors. Micromotors 53(3), 72–77 (2020) 11. Cheng, Q.M., Chen, L., Cheng, Y.M., Sun, W.S., Li, T.: Direct torque control of three-level direct matrix converter-fed PMSM based on dynamic torque hysteresis. Proc. CSEE 39(5), 1488–1498 (2019) 12. 
Niu, F., Wang, B., Babel, A.S., et al.: Comparative evaluation of direct torque control strategies for permanent magnet synchronous machines. IEEE Trans. Power Electron. 31(2), 1408–1424 (2016)
A Dynamic Association Analysis Model Based on Maintenance Sequence for Railway Traction Power Supply System Kaiyi Qian, Xiaoguang Wei, Jianzhong Wei, Wenhao Li, Xingqiang Chen, and Bo Li
Abstract The traction power supply system, as the power source of the electrified railway, has high reliability requirements. A strategy of planned maintenance combined with emergency maintenance is adopted to ensure its safe operation, and numerous information monitoring systems have been deployed to collect and analyze its operating data. Recently, researchers have begun to use association analysis to study system operating data. Existing methods treat the entire historical data set as input, so the association rules are an intermixed reflection of all past operating states, and the inevitable evolution of the system due to maintenance cannot be observed. This paper proposes an association analysis model that reflects system changes. Firstly, a partition method (PMSO) based on the maintenance sequence and the organization of the maintenance workshop is proposed to reasonably generate the transaction database. Secondly, the database is distributed by the sequence of overhauls, and the unit frequency is applied as the threshold to obtain the association rules. Thirdly, we introduce a novel decision-making method based on temporal proximity to guide maintenance. A case study based on real operating data of railway lines in south-west China is presented, which confirms that our dynamic association analysis model based on maintenance sequence (DAAMS) is able to distinguish mutations of the system and observe its evolution with maintenance. Based on the mining results, relation networks are built and maintenance suggestions are given according to the priority of the rules. Keywords Association Analysis · Traction Power Supply System · Maintenance Decision · Railway
K. Qian (B) · J. Wei · W. Li · X. Chen · B. Li China Railway Design Corporation, Tianjin 300308, China e-mail: [email protected] X. Wei Electrical Engineering Department, Southwest Jiaotong University, Chengdu 611756, China © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 Y. S. Shmaliy and A. Nayyar (eds.), 7th International Conference on Computing, Control and Industrial Engineering (CCIE 2023), Lecture Notes in Electrical Engineering 1047, https://doi.org/10.1007/978-981-99-2730-2_8
K. Qian et al.
1 Introduction With the development and popularization of big data over the past decades, traditional industries have begun to seek the changes brought about by data science [1–3]. Association analysis, one of the core big data technologies, has been widely used because it reveals the internal relationships of items and can help researchers understand the connections between the items of a complex system. The method was first proposed by Agrawal in 1993 [4]; it obtains association rules of the form "A → B" by mining frequent itemsets from massive historical data. In the past two decades, many improved algorithms have been proposed, which can be categorized into three basic models: breadth search in horizontal format [4], depth search in horizontal format [5] and vertical data format [6]. For the railway traction power supply system (RTPSS), the power source of the electrified railway, researchers have been trying to find interesting knowledge among abnormal operating records to guide the maintenance strategy using association analysis. Owing to the extensive deployment [7] of numerous information monitoring systems, such as the integrated automation system, the SCADA system, the 6C system and the auxiliary monitoring system, a database of operating records containing various equipment state parameters and maintenance records has been built gradually by all local railway administrations in recent years, which makes association analysis possible. Nevertheless, due to the complexity and high reliability of RTPSS [8], the abnormal operating data that can inspire the optimization of the maintenance strategy are extremely rare, let alone multiple abnormal records occurring on a certain device simultaneously, which makes it difficult to establish transaction records before association analysis. To address the sparsity of operating data, several improved algorithms have been proposed. Yigu et al.
[9] proposed multiple-level top-K high-utility fault-set mining (MTHU) to discover association rules among abnormal operating states that strongly influence the metro OCS (a subsystem of RTPSS) by adjusting the utility settings of different equipment parts. MTHU obtains targeted itemsets more easily by assigning them higher utility, but this involves subjective factors and is greatly affected by the professional knowledge of different users. Kaiyi et al. [10] proposed the multi-dimensional partition method (MDP), which uses the spatial and temporal information recorded with operating data to reduce the sparsity of the database. The scale of space and time can be flexibly selected according to the organizational structure, which may lead to the loss or confusion of inner connections within the same transaction record. To the best of the authors' knowledge, state-of-the-art association analysis for RTPSS always treats the entire historical dataset as input, so the resulting association rules are an intermixed reflection of all past operating states, and the inevitable dynamic changes of RTPSS due to maintenance cannot be observed. In addition, in actual operation, RTPSS adopts a strategy of planned maintenance combined with emergency maintenance: systematic troubleshooting is implemented every quarter and an overhaul every year. Under such
A Dynamic Association Analysis Model Based on Maintenance …
frequent maintenance, RTPSS maintains a high degree of reliability, but the consistency between data is affected. After a long period of time, owing to the replacement of equipment, there is barely any inner connection between data, which makes the resulting association rules inaccurate and, in turn, affects maintenance decisions. To address the issues mentioned above, the sequence of maintenance records in the operating database is taken into consideration to guide the generation of the transaction database and optimize the process of association analysis. In this paper, we propose a dynamic association analysis model based on maintenance sequence (DAAMS) for RTPSS to obtain the relationships between abnormal states and observe the evolution of RTPSS over time under the influence of maintenance. The model consists of three parts.
1) A partition method based on the maintenance sequence and the organization of maintenance workshops (PMSO) is proposed to reasonably partition the data and generate the transaction database.
2) The database is distributed into several sub-databases based on the sequence of overhauls, and the unit frequency is applied as the threshold to obtain association rules.
3) Temporal proximity is introduced as an indicator, together with unit frequency and confidence, to determine the priority of association rules in our proposed maintenance decision-making method.
2 Operating Data Preprocessing

The database of operating records contains numerous equipment state parameters and the maintenance sequence, collected by advanced information monitoring systems and manual input. The abnormal state data are first selected as the original data by fault diagnosis. Considering the complexity of RTPSS and the sparsity of the data, a multi-level hierarchical structure is applied, as shown in Fig. 1. All specific abnormal states are placed at the bottom of the hierarchical structure as specific items, and each upper-level item is integrated according to physical connection and system architecture.

Fig. 1 The multi-level hierarchical structure of abnormal states for RTPSS
To facilitate the mining work, it is necessary to code each item according to the hierarchical structure of RTPSS in Fig. 1. The hierarchical relationship between $a_*^{l-1}$ and $a_*^l$ can be represented by a tuple $\langle a_*^{l-1}, a_*^l \rangle$, where $l$ is the level of the item, and all sub-items of $a_*^{l-1}$ are noted as $S(a_*^{l-1})$. The encoding method is proposed based on the principle of minimum description length [11, 12], as follows. Let $a_*^0$ be the root node of the multi-level hierarchical structure, and let the code of $a_*^0$, noted $C(a_*^0)$, be null. The code of any item $a_*^l$ is defined by (1):

$C(a_*^l) = C(a_*^{l-1}) + p(a_*^l \mid a_*^{l-1}), \quad \langle a_*^{l-1}, a_*^l \rangle,\ l \ge 1$    (1)

where $p(a_*^l \mid a_*^{l-1})$ is the position of $a_*^l$ among the sub-items of $a_*^{l-1}$, and the addition symbol here represents the concatenation of strings rather than summation. With the proposed encoding method, the code of each level needs only 2 digits to encode a multi-level hierarchical structure containing more than 3000 specific abnormal states. In addition, since only specific abnormal states are recorded in the database, the frequency of integrated upper-level items needs to be defined: the value $V(a_*^{l-1})$ is determined by the frequency of all items in $S(a_*^{l-1})$, as shown in (2):

$V(a_*^{l-1}) = \sum_{a_*^l \in S(a_*^{l-1})} V(a_*^l), \quad l \ge 2$    (2)
3 Methodology

In this section, we perform a step-by-step analysis to develop the dynamic association analysis model based on maintenance sequence (DAAMS) for the railway traction power supply system.
3.1 Establishment of Transaction Database

The original database (noted OD) stores all selected abnormal state data; each record contains the code of a specific item $a_i$, the location $l_i$ and the detection time $t_i$. Before constructing a transaction database suitable for association analysis, we must ensure that the criterion of data partition is reasonable: the abnormal state data in each transaction record should share clear relevance in time and space. In actual operation, as the RTPSS is arranged along the railway line, multiple maintenance workshops are set up for sectional maintenance. In the spatial aspect, owing to potential technical differences between maintenance teams, only the data within the jurisdiction of one maintenance workshop can
be considered relevant. In terms of time, two individual data points share a logical connection only if they occurred in the same maintenance interval. Hence, we propose a partition method (PMSO) based on the maintenance sequence and the organization of maintenance workshops. Each selected abnormal state datum can be represented by a tuple $\langle (l_i, t_i), a_i \rangle$ when scanning the original database OD. Meanwhile, the organization of maintenance workshops can be perceived as finite intervals on the space axis, $L_{MO} = \{(l_{n-1}, l_n) \mid n \ge 1\}$. The maintenance sequence is the combination of overhaul, systematic maintenance and emergency maintenance, defined as (3):

$T_{MS} = \{t_m \mid m \ge 1,\ t_m \in T_O \cup T_S \cup T_E\}$    (3)
where $T_O$ is the sequence of overhauls, $T_S$ the sequence of systematic maintenance and $T_E$ the sequence of emergency maintenance; the elements of these three sequences are noted $t_o$, $t_s$ and $t_e$, respectively. As illustrated in Fig. 2, all selected abnormal state data can be placed in a rectangular coordinate system consisting of a time axis and a space axis. It can be observed that only a few data points fall on the same time–space coordinate point. If each coordinate point were directly regarded as a transaction record without clustering, most records would contain only one item, which would lose much information and make association analysis difficult.

In our proposed method PMSO, the maintenance sequence $T_{MS}$ and the organization of maintenance workshops $L_{MO}$ are invoked for reasonable clustering. The coordinate plane is divided into $D$ areas, determined as $A_d = \{(l_d, t_d) \mid l_{n-1} < l_d \le l_n,\ t_{m-1} < t_d \le t_m\}$, and all data in one area constitute a transaction record. The transaction database T is generated from the tuples $\langle (l_i, t_i), a_i \rangle$ of the original database OD. The transaction record $T_d$ is determined as (4), and Fig. 3 shows the specific process of generating a transaction record in one area.

$T_d = \bigcup_{(l_i, t_i) \in A_d} \{C(a_i) : V(a_i)\}$    (4)

Fig. 2 Schematic diagram of PMSO (N: number of records)
Fig. 3 Process of generating transaction record
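Under these definitions, PMSO amounts to bucketing each (location, time, item) record by its spatial workshop interval and its temporal maintenance interval. The following minimal sketch illustrates the idea; the sorted boundary lists and the tuple record layout are assumptions for illustration, not the paper's data format.

```python
from bisect import bisect_left
from collections import defaultdict

def build_transactions(records, workshop_bounds, maintenance_times):
    """PMSO sketch: cluster (location, time, item-code) records into areas
    A_d bounded by workshop sections l_{n-1} < l <= l_n and consecutive
    maintenance actions t_{m-1} < t <= t_m; every area yields one
    transaction record (cf. Eq. 4)."""
    transactions = defaultdict(list)
    for loc, ts, code in records:
        n = bisect_left(workshop_bounds, loc)   # spatial interval index
        m = bisect_left(maintenance_times, ts)  # temporal interval index
        transactions[(n, m)].append(code)
    return dict(transactions)

# Two records in the same workshop section and maintenance interval merge
# into one transaction; a record from another section stays separate.
tdb = build_transactions(
    records=[(5, 10, "0301"), (6, 12, "0302"), (25, 10, "0501")],
    workshop_bounds=[0, 20, 40],
    maintenance_times=[0, 15, 30],
)
```

This directly mirrors Fig. 2: each area of the time–space plane becomes one transaction, so co-located, co-temporal abnormal states end up in the same record.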
3.2 Distributed Database and Unit Frequency

As mentioned in the introduction, existing association analysis methods for RTPSS always invoke the entire database as input without considering the influence of frequent maintenance. In actual operation, an overhaul is carried out every year, which corrects any abnormal state in the RTPSS and renews the system's condition. The traditional mining process intermixes the operating states of different overhaul periods and cannot reflect the dynamic changes of RTPSS to guide the maintenance strategy. In this section, a distributed database based on the maintenance sequence is introduced to address this issue, and the unit frequency is proposed as the support measure, matching the method PMSO. The sequence of overhauls $T_O = \{t_o \mid o \ge 1\}$ is a set of finite time points. The transaction database TD can be divided into several sub-databases $TD_i$ according to the intervals $I_T = \{(t_{o-1}, t_o) \mid o \ge 1\}$ determined by $T_O$. For each transaction record $T_d$, the projection of $A_d$ on the time axis is examined, as shown in (5), to distribute the database:

$T_d \in TD_i,\ \text{if}\ proj(A_d) \in (t_{i-1}, t_i)$    (5)
As illustrated in Fig. 4, these sub-databases collect all transaction records of the same overhaul period, avoiding the false associations caused by data from different periods. In addition, a new indicator named unit frequency is proposed to replace the support. Considering the different sizes of the divided areas, the new indicator normalizes over time and distance, so that the frequencies of abnormal states in different transactions can be compared with each other. With this idea of unitization, the unit frequency of any item $a_i$ in one transaction record is determined by (6), and the unit frequency of item $a_i$ in the database is defined by (7):

Fig. 4 Sub-databases determined by the sequence of overhauls
$F_u(a_i)_d = \dfrac{V(a_i)_d}{S(A_d)} = \dfrac{V(a_i)_d}{(l_n - l_{n-1})(t_m - t_{m-1})}$    (6)

$F_u(a_i) = \dfrac{1}{D}\sum_{d=1}^{D} F_u(a_i)_d = \dfrac{1}{D}\sum_{d=1}^{D} \dfrac{V(a_i)_d}{S(A_d)}$    (7)

where $S(\cdot)$ is the area function and $F_u(a_i)_d$ represents the unit frequency of item $a_i$ in transaction record $T_d$. With the unit frequency of item $a_i$, the unit frequency of an itemset $X = a_i \cap a_j$ can be obtained by (8):

$F_u(X) = \dfrac{1}{D}\sum_{d=1}^{D} F_u(a_i \cap a_j)_d = \dfrac{1}{D}\sum_{d=1}^{D} \dfrac{V(a_i \cap a_j)_d}{S(A_d)}$    (8)
where $V(a_i \cap a_j)_d$ is the value of the itemset in one transaction record. To maintain the downward-closure property in the mining model, the following operation is defined, as shown in (9):

$V(a_i \cap a_j)_d = \min\{V(a_i)_d,\ V(a_j)_d\}$    (9)
With this operation, the value of an itemset is no greater than that of any of its items, meeting the requirement of the Apriori algorithm framework. The confidence between two itemsets can be obtained from their unit frequencies, as shown in (10):

$Confidence(X_1 \to X_2) = \dfrac{F_u(X_1 \cap X_2)}{F_u(X_1)}$    (10)
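The unit-frequency and confidence computations above can be sketched in a few lines. The list-based record layout and the function names below are assumptions for illustration.

```python
def unit_frequency(values, areas):
    """F_u(a_i) = (1/D) * sum_d V(a_i)_d / S(A_d)   (cf. Eq. 7).
    values[d] holds V(a_i) in record T_d; areas[d] holds
    S(A_d) = (l_n - l_{n-1}) * (t_m - t_{m-1})."""
    D = len(areas)
    return sum(v / s for v, s in zip(values, areas)) / D

def itemset_unit_frequency(values_i, values_j, areas):
    """F_u(a_i ∩ a_j) with the pair value taken as the minimum of the two
    item values (cf. Eqs. 8-9), which preserves downward closure."""
    D = len(areas)
    return sum(min(vi, vj) / s
               for vi, vj, s in zip(values_i, values_j, areas)) / D

def confidence(fu_joint, fu_antecedent):
    """Confidence(X1 -> X2) = F_u(X1 ∩ X2) / F_u(X1)   (cf. Eq. 10)."""
    return fu_joint / fu_antecedent
```

Because the pair value is capped by the smaller item value, an itemset's unit frequency can never exceed that of any of its items, which is exactly the property the Apriori framework relies on for pruning.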
3.3 Maintenance Decision-Making Method

Since the database has been distributed by the sequence of overhauls, association rules of different periods appear in the mining result. Traditional decision-making methods cannot assign priority to such rules, so we propose a novel decision-making method based on temporal proximity, unit frequency and confidence. In general, recent association rules are more noteworthy than earlier ones, i.e., they have higher priority. By using DAAMS, association rules of the form R: A → B help build relation networks of abnormal states, and the networks of different periods reflect the evolution of RTPSS over time under the influence of maintenance. Moreover, to characterize the importance of an association rule and define the maintenance priority, an indicator named temporal proximity is introduced. The temporal proximity of an association rule is defined as:
$pro_t(R: A \to B) = \sum_{TD_i \in TD} pt_i$    (11)

$pt_i = \begin{cases} i, & R \in TD_i \\ 0, & R \notin TD_i \end{cases}$    (12)
where $pt_i$ is the temporal proximity contribution of each sub-database. All association rules of different periods are first sorted in descending order of temporal proximity. If two rules have the same temporal proximity, the unit frequency and confidence are used as auxiliary indicators to determine the priority.
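The ranking just described reduces to a lexicographic sort on (temporal proximity, unit frequency, confidence). A minimal sketch, assuming the per-period mining results are given as sets of rules ordered from earliest to most recent, and that the unit-frequency and confidence values are available as lookup tables:

```python
def temporal_proximity(rule, mined_rules_per_period):
    """pro_t(R) = sum of pt_i, with pt_i = i when R appears in the mining
    result of sub-database TD_i and 0 otherwise (cf. Eqs. 11-12).
    Periods are ordered from earliest (i = 1) to most recent."""
    return sum(i for i, rules in enumerate(mined_rules_per_period, start=1)
               if rule in rules)

def rank_rules(rules, mined_rules_per_period, fu, conf):
    """Sort by temporal proximity, then unit frequency, then confidence,
    all descending. fu and conf are illustrative per-rule tables."""
    key = lambda r: (temporal_proximity(r, mined_rules_per_period),
                     fu[r], conf[r])
    return sorted(rules, key=key, reverse=True)
```

A rule found in several recent periods accumulates a large $pro_t$ and rises to the top, which matches the intended behavior that recent, recurring rules deserve earlier attention.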
4 Case Study

In this section, a case study is conducted. The operating data are collected from the railway lines of an entire bureau in southwest China from 2016 to 2018. The abnormal state data were first selected, and the partition method PMSO was employed to construct a proper transaction database. According to the time span of the data and the sequence of overhauls, the transaction database is divided into three sub-databases. The mining work was implemented on each sub-database; the unit-frequency threshold was adjusted so that each sub-database yields its nine most important association rules. The relation networks [13, 14] of abnormal states obtained from the three sub-databases are shown in Fig. 5. All nodes of itemsets and the corresponding abnormal states involved are presented in Table 1. A relation network intuitively displays the directive property of association rules. In Fig. 5a, we can observe that node C (Height difference of contact wire within span) is pointed to by most association rules. In the other two networks, however, association rules mainly point to nodes F and Q (Fig. 5b) and nodes J, K and Q (Fig. 5c). Node J (Bracket for top tube) and node K (Bracket for strut)
Fig. 5 Relation networks of the most important 9 association rules in panels (a)–(c) (red solid arrows indicate association rules; green dotted arrows indicate hierarchy itemsets)
Table 1 Nodes of itemsets and the corresponding abnormal states of RTPSS

A: Contact wire height & Plant invasion boundary
B: Contact wire height
C: Height difference of contact wire within span
D: Dropper and bridle wire & Support device
E: Dropper and bridle wire
F: Support device
G: Bracket for top tube & Beta-split pin of bracket for strut
H: Bracket for top tube & Registration device
I: Bracket for top tube & Steady arm
J: Bracket for top tube
K: Bracket for strut
L: Beta-split pin of bracket for strut & Registration device
M: Beta-split pin of bracket for strut
N: Registration device & Dropper and bridle wire
O: Registration device & Support device
P: Registration device & Signboard
Q: Registration device
R: Steady arm & Bracket for strut
S: Steady arm
T: Pole plate & Height difference of contact wire within span
U: Pole plate & External environment
V: Pole plate
W: Installation status of pole plate
X: External environment & Height difference of contact wire within span
Y: External environment & Signboard
Z: External environment
AA: Plant invasion boundary & Height difference of contact wire within span
AB: Plant invasion boundary
are sub-items of node F (Support device). Therefore, the relation networks of TD2 and TD3 are relatively similar, while that of TD1 is quite different. Compared with existing methods, the proposed dynamic association analysis model can distinguish the mutation of the system after a periodic overhaul and observe its evolution with maintenance. For example, the most frequent abnormal state changes from the contact wire to the support and registration device after the first overhaul. As maintenance continues, the combinations and sub-items of abnormal states of the signboard gradually decrease. In the network of the first overhaul period, nodes V and W are sub-items of the signboard, while nodes U and T are combinations involving the signboard. Notably, node W is a specific abnormal state, which means this abnormal state occurred so frequently in the first period that the third-level node could be obtained as a frequent itemset. In the network of the second period, only node V and the combinations involving the signboard (nodes P and Y) are contained. Likewise, no signboard-related node appears in the third network. It can be inferred that the installation status of the signboard is gradually improved in the
process of maintenance. Such gradual changes of the system cannot be obtained through traditional methods, which is the advantage of our proposed method.

To guide maintenance and help decision making, the priority of association rules is determined by implementing our proposed method based on temporal proximity, unit frequency and confidence. Because the threshold settings of the three sub-databases are different, the unit frequency is displayed as the ratio to the set threshold. The 15 association rules of the highest priority are shown in Table 2.

Table 2 Top 15 priority of association rules

Rule     pro_t   F_u ratio   Confidence   Priority
M → Q    3       141.43%     99.51%       1
M → J    3       137.98%     97.09%       2
G → Q    3       137.29%     99.50%       3
L → J    3       137.29%     97.07%       4
M → H    3       137.29%     96.60%       5
S → K    3       107.62%     83.20%       6
I → K    3       100.38%     99.66%       7
Z → Q    3       100.38%     97.65%       8
R → J    3       100.38%     93.27%       9
E → F    2       138.86%     97.20%       10
V → F    2       127.87%     87.07%       11
N → F    2       116.88%     100.00%      12
D → Q    2       116.88%     84.17%       13
E → Q    2       116.88%     81.82%       14
E → O    2       141.43%     99.51%       15

Because of the introduction of temporal proximity, association rules closer to the current time are more easily ranked first in priority, which meets the expectation of actual operation. If an association rule appears in the mining results of multiple sub-databases, its priority advances accordingly; unfortunately, no association rule in this case study meets that condition. When determining the priority of association rules, we adhere to the following principles. If two association rules have the same pro_t, the priority is determined by the unit frequency. When both indicators are the same, confidence is used to determine the priority. For instance, rules G → Q and L → J share the same pro_t and F_u ratio, but the confidence of the former is larger, indicating that node Q is more likely to become abnormal when node G occurs; hence G → Q has a higher priority.

With the relation networks and the priority of association rules, maintenance decision-making can easily be realized by exploring the subsequent abnormal states and tracking the previous abnormal states. Once an abnormal state has been detected, the corresponding node is targeted in the relation networks. The subsequent nodes
need to be checked in priority order, and preventive maintenance shall be carried out as appropriate. Then all previous nodes go through the same process. The examination of subsequent and previous nodes ensures that, in the relation networks, the abnormal state will not be transmitted and diffused backward, and the possible source can be eliminated and blocked. Besides, nodes that consist of upper-level items in the hierarchical structure also need to have their subsequent and previous nodes checked, because they contain the abnormal states of lower-level nodes. For instance, if the abnormal state of the steady arm has been detected, node S will be found in the networks. The examination shall then be implemented in the following order: S → K (subsequent node) → Q (upper-level node) → M (previous node, priority 1) → G (previous node, priority 3) → Z (previous node, priority 8).
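The checking order just described can be sketched as a small traversal helper. The adjacency and priority mappings below are illustrative placeholders built from the steady-arm example, not the paper's data structures.

```python
def examination_order(node, successors, predecessors, parent, priority):
    """Sketch of the decision-making order: subsequent nodes first (sorted
    by rule priority), then the upper-level node, then previous nodes,
    again by priority. All mappings are illustrative."""
    order = sorted(successors.get(node, []), key=lambda n: priority[n])
    if node in parent:
        order.append(parent[node])
    order += sorted(predecessors.get(node, []), key=lambda n: priority[n])
    return order

# Steady-arm example from the text: S -> K, upper-level node Q, and
# previous nodes M, G, Z with rule priorities 1, 3 and 8.
order = examination_order(
    "S",
    successors={"S": ["K"]},
    predecessors={"S": ["Z", "M", "G"]},
    parent={"S": "Q"},
    priority={"K": 6, "M": 1, "G": 3, "Z": 8},
)
```

Running the helper on the example reproduces the order S → K → Q → M → G → Z given in the text.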
5 Conclusion

Long-term maintenance has a cumulative effect on the railway traction power supply system, resulting in inevitable evolutions of the system. In this paper, to discover these gradual changes and guide maintenance decision-making, the dynamic association analysis model DAAMS is proposed with the partition method PMSO and a distributed database. Temporal proximity is introduced to guide maintenance decisions. DAAMS is then applied to real operating data collected from railway lines in southwest China. Based on the mining results, relation networks are constructed to help distinguish the mutation of the system and observe its evolution with maintenance. Finally, an example shows how to use the novel decision-making method, and maintenance suggestions are given based on the analysis.

Acknowledgements This work was funded by grants from the Key Projects of China State Railway Group Co., Ltd (No. N2022G008).
References

1. Xu, Z., Tang, N., Xu, C., et al.: Data science: connotation, methods, technologies, and development. Data Sci. Manage. 1(1), 32–37 (2021)
2. Martinez, I., Viles, E., Olaizola, I.G.: Data science methodologies: current challenges and future approaches. Big Data Res. 24, 100183 (2021)
3. Milić, S.D., Đurović, Ž.M., Stojanović, D.: Data science and machine learning in the IIoT concepts of power plants. Int. J. Electr. Power Energy Syst. 145, 108711 (2023)
4. Agrawal, R., Srikant, R.: Fast algorithms for mining association rules. In: 20th International Conference on Very Large Data Bases, pp. 487–499. Morgan Kaufmann, Santiago, Chile (1994)
5. Han, J., Pei, J., Yin, Y., et al.: Mining frequent patterns without candidate generation: a frequent-pattern tree approach. Data Min. Knowl. Disc. 8(1), 53–87 (2004)
6. Zaki, M.J., Gouda, K.: Fast vertical mining using diffsets. In: 9th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 326–335. ACM, Washington, D.C., USA (2003)
7. Wang, T.: The intelligent Beijing-Zhangjiakou high-speed railway. Engineering 7(12), 1665–1672 (2021)
8. Chen, L., Chen, M., Chen, Y., et al.: Modelling and control of a novel AT-fed co-phase traction power supply system for electrified railway. Int. J. Electr. Power Energy Syst. 125, 106405 (2021)
9. Liu, Y., Gao, S., Yu, L.: A novel fault prevention model for metro overhead contact system. IEEE Access 7, 91850–91859 (2019)
10. Qian, K., Yu, L., Gao, S.: Fault tree construction model based on association analysis for railway overhead contact system. Int. J. Comput. Intell. Syst. 14(1), 96–105 (2020)
11. Davis, R.A., Hancock, S.A., Yao, Y.: On consistency of minimum description length model selection for piecewise autoregressions. J. Econ. 194(2), 360–368 (2016)
12. Pandey, H.M., Chaudhary, A., Mehrotra, D., et al.: Maintaining regularity and generalization in data using the minimum description length principle and genetic algorithm: case of grammatical inference. Swarm Evol. Comput. 33, 11–23 (2016)
13. Qian, K., Gao, S., Yu, L.: Marginal frequent itemset mining for fault prevention of railway overhead contact system. ISA Trans. 126, 276–287 (2022)
14. Wei, X., Zhao, J., Huang, T., et al.: A novel cascading faults graph based transmission network vulnerability assessment method. IEEE Trans. Power Syst. 33(3), 2995–3000 (2018)
Efficiency Optimization Control of PMSM Based on a Novel LDW_PSO Over Wide Speed Range

Chong Zhou and Kun Mao
Abstract To increase the efficiency of a permanent magnet synchronous motor (PMSM) under flexible operating modes, this article presents a novel linear decreasing inertia weight particle swarm optimization (LDWPSO) to search for the optimal efficiency point. The equivalent circuit model and a loss analysis are provided to determine the initial value and boundary of the particle swarm, with the inertia weight adjusted according to the speed error. The global and local search capabilities of PSO are thus balanced, which effectively improves the accuracy and speed of the optimization process. A simulation was carried out to compare the performance of a search controller based on LDWPSO with that of a traditional search controller based on the gradient descent algorithm. The results show that the LDWPSO-based strategy performs better over a wide speed range, with higher efficiency and a faster rate of convergence.

Keywords Permanent magnet synchronous motor (PMSM) · Efficiency optimization · Linear decreasing inertia weight particle swarm optimization (LDWPSO) · Wide speed range
1 Introduction

PMSM has become the most competitive motor system for electrical drive systems [1]. Owing to the limited energy supply, such as a power battery, efficiency optimization control of PMSM is a key technology [2]. Generally, the efficiency optimization of PMSM can be classified into three approaches: the minimum stator current strategy (MTPA), loss minimization current control (LMC), and the search method (SC) [3]. However, MTPA only considers the copper loss and

C. Zhou (B) · K. Mao School of Instrumentation and Optoelectronic Engineering, BeiHang University, Beijing 100191, China e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 Y. S. Shmaliy and A. Nayyar (eds.), 7th International Conference on Computing, Control and Industrial Engineering (CCIE 2023), Lecture Notes in Electrical Engineering 1047, https://doi.org/10.1007/978-981-99-2730-2_9
does not consider the core loss. LMC accounts for the core, copper and stray losses of the system [4], but the calculation of the loss model is relatively complicated [5]. The traditional search method requires the system to reach a relatively stable state of constant speed; otherwise the system may produce measurement fluctuations and errors [6]. Moreover, the algorithm used in the search process has a great impact on the dynamic and static performance of the control system. PSO has the advantages of few parameters and fast convergence, but it is not sensitive to changes in working conditions and is prone to premature or slow convergence when the inertia weight coefficient is improperly selected [7]. In this paper, the optimal-efficiency current point based on the loss model is analyzed. Considering the influence of environmental factors such as temperature on system parameters, the error of the actual optimal efficiency point relative to the theoretical value is analyzed. An improved particle swarm search method is used to find the optimal efficiency points online, and the weight coefficient of the particle swarm optimization algorithm is adjusted in real time using the speed difference, which prevents the algorithm from falling into local optima while ensuring the convergence speed. In the simulation, the proposed scheme is compared with the traditional search method based on the gradient descent algorithm (GDA), and its performance is verified over a wide speed range.
2 Optimal Efficiency Angle Online Search Control

2.1 The Copper Loss and Core Loss Analysis

The total loss of PMSM includes mechanical loss, stray loss and electrical loss. Among them, the copper and iron losses in the electrical losses are mainly considered. Figure 1 shows the copper loss and iron loss models of PMSM considering the AC axial current under steady state. The total controllable electrical loss, including the copper loss and the core loss, can be expressed as follows:

$P_{loss} = P_{Cu} + P_{Fe} = (AD + R_s I_q^2)\tan^2\gamma + BD\tan\gamma + CD + R_s I_q^2$

$A = \omega_e^2 L_d^2 I_q^2 (R_c^2 + \omega_e^2 L_q^2)$

$B = 2\omega_e^3 R_c L_d L_q (L_d - L_q) I_q^2 + 2\omega_e^2 \psi L_d (\omega_e^2 L_q^2 + R_c^2) I_q$

$C = \omega_e^2 L_q^2 (\omega_e^2 L_d^2 + R_c^2) I_q^2 + 2\omega_e^3 R_c \psi L_q (L_d - L_q) I_q + \omega_e^2 \psi^2 (\omega_e^2 L_q^2 + R_c^2)$

$D = \dfrac{R_c}{(R_c^2 + \omega_e^2 L_d L_q)^2}$    (1)
Fig. 1 a The equivalent circuit models for PMSM in the rotating-coordinate system; b copper loss and core loss with different d-axis current
To obtain the condition of minimum electrical loss, the partial derivative with respect to $\tan\gamma$ is set to zero:

$\dfrac{\partial P_{loss}}{\partial(\tan\gamma)} = 2(AD + R_s I_q^2)\tan\gamma + BD = 0$    (2)
Thus, the d-axis current with the minimum electrical loss can be obtained:

$I_d^* = I_q \tan\gamma^* = -\dfrac{\omega_e^2 \psi L R_c}{\omega_e^2 L^2 R_c + \omega_e^2 L^2 R_s + R_c^2 R_s}$    (3)
2.2 An Offline Look-up Table of Optimized Current Based on the Loss Model

Considering the error of the copper loss resistance and iron loss resistance, assume the actual values of the equivalent copper loss resistance and equivalent iron loss resistance are $R_s^*$ and $R_c^*$, while the reference values used in the calculation are:

$R_s' = R_s^* + \Delta R_s = R_s^* + \alpha\Delta T \in [R_{s|min}, R_{s|max}]$    (4)

$R_c' = R_c^* + \Delta R_c = R_c^* + \beta\Delta T \in [R_{c|min}, R_{c|max}]$    (5)
The resistance temperature coefficient of the Cu-Al conductor is α = 0.004/°C, and the flux heat coefficient of the equivalent iron loss resistance is β = -0.0012/°C [7].
Table 1 Range of the Id at minimum loss [Id-min, Id-max] (A)

Speed (krpm)   Iq = 10 A   Iq = 20 A   Iq = 30 A   Iq = 40 A   Iq = 50 A   Iq = 60 A
20             [−5, −2]    [−5, −2]    [−6, −2]    [−6, −2]    [−6, −3]    [−6, −3]
40             [−5, −2]    [−6, −3]    [−7, −3]    [−7, −3]    [−7, −3]    [−8, −3]
60             [−6, −3]    [−7, −3]    [−8, −3]    [−9, −4]    [−10, −4]   [−11, −4]
80             [−8, −5]    [−9, −5]    [−10, −5]   [−11, −6]   [−13, −6]   [−14, −6]
100            [−10, −7]   [−12, −7]   [−15, −8]   [−17, −8]   [−18, −8]   [−19, −8]

Table 1 is made offline according to Eq. (3). In practice, the range of Id at minimum loss can be obtained by looking up the table according to the speed and current:

$I_d^* \in [I_{d|min}, I_{d|max}]$    (6)
2.3 Online Search for Optimal Efficiency Angle with Modified PSO

In traditional PSO, a large inertia weight is conducive to global search but leads to insufficient local convergence, whereas a small inertia weight favors local search but easily falls into local optima. At constant speed, when the torque current changes, it can be seen from Fig. 2a that the optimal efficiency angle increases with the torque current. Under constant current, when the rotational speed changes, it can be seen from Fig. 2b that the optimal efficiency angle decreases as the speed increases, while the overall trend of motor efficiency is that the higher the speed, the higher the efficiency.
Fig. 2 a Efficiency under different q-axis current and theta; b efficiency under different speed and theta
To solve the problem that the traditional particle swarm optimization algorithm can hardly satisfy the different working conditions of PMSM at the same time, and to keep the particles balancing global and local search under changing working conditions, the idea of linearly decreasing inertia weight is applied to the traditional particle swarm optimization algorithm, with the speed difference used as the reference for the weight decrease. In the adopted particle swarm search method with speed-difference-based linearly decreasing weight, the ith particle updates its velocity and position with the following formulas:

$v_i^d = \phi \times w^d \times v_i^{d-1} + c_1 r_1 (pbest_i^d - x_i^d) + c_2 r_2 (gbest^d - x_i^d)$    (7)

$x_i^{d+1} = x_i^d + v_i^d$    (8)

where $x_i^d$ and $v_i^d$ represent the position and velocity of the ith particle in the dth iteration, $pbest_i^d$ and $gbest^d$ correspond to the individual and population optimal solutions found in previous iterations, $r_1$ and $r_2$ are random numbers in [0, 1], and $\phi$ is the constriction factor:

$\phi = \dfrac{2}{\left|2 - C - \sqrt{C^2 - 4C}\right|}$    (9)

In addition, to better balance the global and local search capabilities of the algorithm, the inertia weight is designed as (10):

$w^d = w_{min} + (w_{max} - w_{min}) \times \dfrac{|\omega_{ref} - \omega_{back}|}{\omega_{rated}}$    (10)
In general, $w_{min} = 0.4$ and $w_{max} = 0.9$; $\omega_{ref}$ and $\omega_{back}$ represent the target speed and real-time speed, and $\omega_{rated}$ is the rated speed. In the variable-speed stage, the inertia weight is increased to avoid prematurity; in the steady-speed stage, the inertia weight is rapidly reduced to obtain a faster local convergence rate. The implementation process of the particle swarm search method based on the linearly decreasing weight of the speed difference is as follows:
1) According to the current speed and torque current, the approximate value and error range of the optimal loss angle are obtained by the look-up table, and the position (offset angle) and velocity of each particle are initialized on this basis.
2) The simplified fitness function (efficiency) is used to calculate the initial fitness value of each particle; the current value is taken as the individual optimum, and the initial group optimum is obtained.
3) Update each particle's position and velocity according to the linear inertia weight coefficient and learning factors.
C. Zhou and K. Mao
4) Compute the new fitness value (efficiency).
5) Update the individual best and the population best by the principle of better efficiency.
6) If the number of iterations reaches the maximum or the population best meets the preset condition, terminate the search; otherwise return to step 3.
7) If the error between the target speed and the current speed exceeds the preset threshold, look up the table again with the target speed and torque at the speed change, obtain the positions (offset angles) and velocities of the newly initialized particles, and return to step 2.
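Steps 1–7 above can be sketched in Python. This is a minimal, generic sketch, not the authors' implementation: the fitness function, the table-lookup initialisation (replaced by a fixed list plus jitter) and all parameter values are illustrative assumptions.

```python
import random

def ldw_pso(fitness, init_pos, w_ref, w_back, w_rated,
            n_iter=50, c1=2.05, c2=2.05, w_min=0.4, w_max=0.9, phi=0.7298):
    """Particle swarm search with a linearly decreasing weight driven by the
    speed difference, Eqs. (7), (8) and (10)."""
    # Step 1: initialise particles near the table-lookup values
    pos = [p + random.uniform(-0.1, 0.1) for p in init_pos]
    vel = [0.0] * len(pos)
    # Step 2: initial fitness; current values are the individual bests
    pbest = pos[:]
    pbest_fit = [fitness(p) for p in pos]
    g = max(range(len(pos)), key=lambda i: pbest_fit[i])
    gbest, gbest_fit = pbest[g], pbest_fit[g]
    # Eq. (10): inertia weight from the (fixed, in this sketch) speed error
    w = w_min + (w_max - w_min) * abs(w_ref - w_back) / w_rated
    for _ in range(n_iter):
        for i in range(len(pos)):
            # Step 3 / Eqs. (7)-(8): update velocity and position
            r1, r2 = random.random(), random.random()
            vel[i] = (phi * w * vel[i]
                      + c1 * r1 * (pbest[i] - pos[i])
                      + c2 * r2 * (gbest - pos[i]))
            pos[i] += vel[i]
            # Steps 4-5: new fitness; keep the better efficiency
            f = fitness(pos[i])
            if f > pbest_fit[i]:
                pbest[i], pbest_fit[i] = pos[i], f
                if f > gbest_fit:
                    gbest, gbest_fit = pos[i], f
    # Step 6: terminate after the iteration budget
    return gbest, gbest_fit

# Toy run: maximise a hypothetical efficiency curve peaking at offset angle 3
best_angle, best_eff = ldw_pso(lambda x: -(x - 3.0) ** 2,
                               [0.0, 1.0, 2.0, 4.0, 5.0],
                               w_ref=95.0, w_back=100.0, w_rated=100.0)
```

Step 7 (re-initialisation from the lookup table on a speed change) amounts to calling `ldw_pso` again with a new `init_pos`.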
3 Simulation Result

The search control scheme based on LDWPSO, which searches for the optimal-efficiency point, was simulated. The simulation models compare a control system using gradient descent search control with one using LDWPSO search control. A load step of 40 N·m is applied to the motor at a given speed of 10,000 RPM once steady speed is reached, in order to evaluate the response of the two strategies to the torque change. The three-phase stator current and the quadrature-axis current are also compared. The results show that the gradient descent search method tracks the torque step within 0.04 s and the LDWPSO search control within 0.03 s; both respond quickly with small overshoot. However, after the load is applied, the gradient descent search method draws about Iq = 47 A, Id = −10 A, with a stator current of about 48 A, whereas the LDWPSO search method draws about Iq = 41 A, Id = −15 A, with a stator current of about 44 A, improving the PMSM efficiency by 9.1%, as shown in Figs. 3, 4 and 5. It can be concluded that, compared with the gradient descent method, the LDWPSO-based search method needs less stator current to output the same torque at the same speed, so it is more efficient and closer to the optimal-efficiency point.
Fig. 3 Load step response of search control with GDA (left) and LDWPSO (right)
Efficiency Optimization Control of PMSM Based on a Novel LDW_ …
Fig. 4 Stator current of search control with GDA (left) and LDWPSO (right)
Fig. 5 d-axis and q-axis current of search control with GDA (left) and LDWPSO (right)
4 Conclusion

In this paper, an improved particle swarm optimization with a linearly decreasing inertia weight is applied to the efficiency-optimizing control of PMSM. The proposed scheme switches appropriately between the global search mode and the local search mode, which helps improve the accuracy of the optimization. Additionally, the loss model and parameter errors of the PMSM are analyzed to bound the region of the optimal-efficiency point. Simulation results show that the efficiency of the PMSM is improved by 9.1% by LDWPSO compared with the gradient descent search control.
HSKP-CF Algorithm Based on Target Tracking for Mobile Following Robot Yuecong Zhu, Xiaomin Chu, Yu Wang, Yunshan Xu, and Kewei Chen
Abstract Aiming at the human target tracking problem of mobile robots in real scenes, a Correlation Filtering algorithm that integrates Human Skeleton Key Point information (HSKP-CF) is proposed. Building on the strong performance of the HOG and CN feature operators under target deformation and illumination change, these two features are fused in the correlation filtering algorithm to improve the tracker in both situations. HSKP-CF adds a filtering mechanism based on human-body key point information and combines it with the correlation filtering algorithm, so that the model is sensitive to the key points of the human body while still attending to the features selected by the Kernelized Correlation Filter (KCF) tracker. Finally, a total of 6670 frames of video image sequences collected on the mobile robot platform are used for algorithm verification and comparative analysis. The outdoor tracking accuracy of HSKP-CF exceeds the traditional KCF by 46.4%, reaching 93.6%, which shows the effectiveness of the method. Follow-up research needs to reduce the computation of the model to lay a foundation for deployment on mobile devices.

Keywords Mobile robot · Target tracking · Correlation filtering · Human skeleton · Key points
Y. Zhu · X. Chu · K. Chen (B) Ningbo University, Ningbo 315000, China e-mail: [email protected] Y. Wang China Academy of Safety Science and Technology, Beijing 100020, China Y. Xu Ningbo Trust Technology Co. Ltd., Ningbo 315000, China © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 Y. S. Shmaliy and A. Nayyar (eds.), 7th International Conference on Computing, Control and Industrial Engineering (CCIE 2023), Lecture Notes in Electrical Engineering 1047, https://doi.org/10.1007/978-981-99-2730-2_10
Y. Zhu et al.
1 Introduction

An adaptive and practical target tracking algorithm is the core of the human tracking task. The dynamic human body recognition and tracking technology of robots is not only of great significance in scientific research, but also of great application value in medical rehabilitation, military robots, intelligent driving and other social fields. At present, the implementation schemes of following robots mainly include laser-radar-based following robots [1–3], radio-frequency-based following robots [4–6] and vision-based following robots [7–9]. Each method has its advantages and disadvantages. A laser-radar-based following robot has a long detection range and high positioning accuracy, but its cost is high and few target features can be established, so it is difficult to identify and lock onto the target. An RF-based following robot is cheap, but it requires users to wear active devices such as bracelets, its detection range is small, and it must cooperate with other sensors to perceive the environment. Given the task requirements of a tracking robot, its sensors should provide abundant feature information to recognize the followed user, provide depth information in the field of view to detect passable areas and obstacles, and remain low-cost and portable. Based on these factors, a vision-based target tracking method is chosen. As the core algorithm of the mobile following robot, visual target tracking aims to identify and track the target in subsequent video frames given the information provided by the first frame of the real-time input video, finally ensuring the smooth completion of the task. In the early stage of target tracking, optical flow methods [10], filtering methods [11] and kernel-based methods [12] emerged one after another.
However, the further development of target tracking was limited by high computational complexity and low accuracy. The emergence of the correlation filtering method broke this bottleneck [13]. To achieve long-term, stable tracking of a specific human target in complex scenes on a mobile robot, the kernelized correlation filtering method [14], one of the fastest algorithms, is selected for optimization. Based on human key point information, a human body recognition and tracking framework for mobile robots is proposed and applied to tracking a specific human target in complex scenes; the model is evaluated on video data collected by the mobile robot platform, and the algorithm is compared with baselines to verify its reliability.
Fig. 1 The proposed person tracking and identification framework
2 Person Following System

2.1 Tracking and Identification Framework

The framework of the designed human body tracking system is shown in Fig. 1. First, the camera on the mobile robot platform collects plane and distance information of the human body. Then the collected target information is recognized: a key point information extraction method based on the OpenPose [15] framework identifies the human body, and the data is passed to a Kalman filter to process and fuse each key point. The HSKP-CF algorithm then extracts the features and performs online updating and computation of the model. The results are transmitted to the control system of the robot platform, finally achieving the human-target following function.
2.2 Research on Kernel Correlation Filtering Algorithm

The kernelized correlation filtering target tracking algorithm adopts the tracking-by-detection framework, learning and updating the template during tracking. The algorithm has a very impressive performance in terms of tracking quality and tracking speed. It mainly uses a circulant matrix to collect samples and the fast Fourier transform to accelerate computation. First, select the target to be tracked in frame t (usually the first frame), sample near its position, extract HOG features and train a ridge regressor; this regressor can compute response values at image pixel locations. Second, in frame t + 1, sample near the target position of the previous frame, use the trained regressor to perform correlation operations at the sampling positions, obtain the sample set to be detected by cyclic shifting, and record the response value of each sampling point. Then judge whether the
video frame is the last frame; if so, end the algorithm, otherwise continue to update the model and return the online training result to the regressor. Finally, the position of the maximum value in the response map is used as the offset of the target, and the new target position is obtained from the offset; the new position area is then used to train and update the filter for detection and tracking of subsequent frames. For the ridge regressor in the KCF algorithm, the linear regression function is f(z) = wᵀz; given samples x_i and corresponding labels y_i, the goal is to minimize the regularized squared error to find the parameter w, that is

min_w Σ_i (f(x_i) − y_i)² + α‖w‖²   (1)
Here, α is a regularization parameter whose function is to prevent overfitting, similar to such parameters in deep learning methods. The general solution of formula (1) is

w = (XᵀX + αI)⁻¹ Xᵀy   (2)
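Eq. (2) can be checked numerically on synthetic data. The sketch below is illustrative (the data are random, not motor or tracker samples): it verifies that the closed-form w is indeed a minimiser of the regularized objective of Eq. (1).

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 3))   # rows are samples x_i (synthetic)
y = rng.standard_normal(20)        # labels y_i
alpha = 0.1                        # regularization parameter

# Eq. (2): w = (X^T X + alpha*I)^{-1} X^T y
w = np.linalg.solve(X.T @ X + alpha * np.eye(3), X.T @ y)

def ridge_loss(v):
    """Objective of Eq. (1): sum_i (f(x_i) - y_i)^2 + alpha*||v||^2."""
    return np.sum((X @ v - y) ** 2) + alpha * np.dot(v, v)
```

Because the objective is strictly convex, perturbing w in any direction must increase `ridge_loss`.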
Among them, each row of matrix X corresponds to a vector x_i, each value of vector y corresponds to a y_i, and I is an identity matrix. In formula (2), X is not composed directly of the samples row by row, but by constructing the samples into a circulant matrix, X = C(x); the purpose is to increase the negative samples (each cyclically shifted copy of x can be regarded as a negative sample). Therefore, the operations involving the circulant matrix X in formula (2) can be simplified by Fourier diagonalization and converted to

XᵀX = F diag(x̂* ⊙ x̂) Fᵀ   (3)

where x̂ denotes the DFT of x, x̂* its complex conjugate, and ⊙ element-wise multiplication.
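The diagonalization of Eq. (3) can also be verified numerically: for a circulant matrix built from a base sample x, the eigenvalues of XᵀX are exactly |x̂|², the squared DFT magnitudes of x. A small sketch on a toy vector (not tracker data):

```python
import numpy as np

x = np.array([1.0, 2.0, 0.5, -1.0])   # toy base sample
n = len(x)
# X = C(x): every row is a cyclic shift of x (the implicit KCF sample set)
X = np.stack([np.roll(x, i) for i in range(n)])

# By Eq. (3), X^T X is diagonalised by the DFT, so its eigenvalues
# must equal |x_hat|^2, the squared magnitudes of the DFT of x
eig = np.sort(np.linalg.eigvalsh(X.T @ X))
spec = np.sort(np.abs(np.fft.fft(x)) ** 2)
```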
At the same time, the KCF algorithm introduces a kernel method to handle linearly inseparable samples, reconstructing the function as f(z) = Σ_i α_i k(z, x_i) to obtain a closed-form solution; transformed into the Fourier domain, the resulting filter model is

β̂ = ŷ / (k̂^{x₁x₁} + α)   (4)
Among them, k^{x₁x₁} is the autocorrelation kernel of the training sample; it is a Gaussian kernel with expression

k^{x₁x₁} = exp(−(‖x₁‖² + ‖x₁‖² − 2x₁ᵀx₁) / σ²)   (5)

Here, σ is the Gaussian bandwidth and x₁ is the initial training sample.
The closed-form solution of the filter is obtained by extending the one-dimensional sample to the two-dimensional case:

Â = Ŷ / (K̂^{XX} + α)   (6)
Here, Â is the two-dimensional extension of the filter, Ŷ is the two-dimensional extension of the sample label, and K̂^{XX} is the two-dimensional extension of the autocorrelation kernel of the training sample. Finally, the target position is detected and the filter parameters are updated online. For each frame, the features of the target region are extracted and processed into a set of samples to be detected; the kernel between the samples to be detected and the training samples is computed, the response with the filter is calculated, and the responses of all samples to be detected are obtained. The position of the largest response value is taken as the target offset, and the latest target position follows from the offset. The response of a sample is computed as

R(Z) = F⁻¹(K̂^{XZ} ⊙ Â)   (7)

Here, K̂^{XZ} is the cross-correlation kernel of the training sample and the sample to be tested, F⁻¹ denotes the inverse Fourier transform, and ⊙ denotes element-wise multiplication.
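Eqs. (4), (5) and (7) can be combined into a minimal one-dimensional sketch of the KCF train/detect cycle. The signals and parameter values below are illustrative; the FFT-based kernel evaluation computes k for every cyclic shift at once.

```python
import numpy as np

def gaussian_kernel(x1, x2, sigma=0.5):
    """Eq. (5) for x1 against every cyclic shift of x2, computed in one
    pass via the FFT (circular cross-correlation gives the 2*x^T*x terms)."""
    c = np.fft.ifft(np.conj(np.fft.fft(x1)) * np.fft.fft(x2)).real
    return np.exp(-(np.dot(x1, x1) + np.dot(x2, x2) - 2.0 * c) / sigma ** 2)

def train(x, y, alpha=1e-4, sigma=0.5):
    """Eq. (4): filter coefficients in the Fourier domain."""
    kf = np.fft.fft(gaussian_kernel(x, x, sigma))
    return np.fft.fft(y) / (kf + alpha)

def detect(beta_f, x, z, sigma=0.5):
    """Eq. (7): response over all cyclic shifts of the search sample z."""
    kzf = np.fft.fft(gaussian_kernel(x, z, sigma))
    return np.fft.ifft(kzf * beta_f).real

# Toy run: train on x with a label peaked at zero shift, then detect the
# same pattern shifted by 3; the response should peak at index 3
x = np.array([0.0, 0.2, 1.0, 0.2, 0.0, -0.1, 0.0, 0.1])
y = np.zeros(8)
y[0] = 1.0
beta_f = train(x, y)
resp = detect(beta_f, x, np.roll(x, 3))
shift = int(np.argmax(resp))
```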
3 Kernel Correlation Filtering Algorithm for Fusion of Human Skeleton Key Points

Based on the KCF target tracking algorithm, the skeleton information is fused in and a filtering mechanism for the skeleton key points is added, which updates both the key point positions and the filter.
3.1 KCF Tracker Based on HOG and CN Feature Fusion

The traditional KCF target tracking algorithm trains the filter with a single feature, but a single feature has severe limitations in complex scenes. The FHOG feature, an improved version of HOG, can weaken the influence of illumination on the tracking result, but it adapts poorly to complex conditions such as fast motion and occlusion. The CN feature transforms the RGB space into
a multi-dimensional color feature space and normalizes it to reduce the dimension; it is robust to fast motion and occlusion of the target. Therefore, the FHOG and CN features of the KCF tracker are fused, and the resulting HC feature is used to train and update the filter and appearance model, so as to improve the robustness of the algorithm and let the KCF-based baseline adapt to more scenes of human target tracking. In this study, KCF trackers based on the FHOG, CN and HC features are evaluated by One-Pass Evaluation (OPE) on the OTB100 dataset, and the HC feature shows the best performance. The detailed experimental comparison is described in Sect. 4.
3.2 The Fusion of Human Skeleton Key Points and KCF Tracker

In this study, the human key point information is fused with the KCF tracker as auxiliary information, named HSKP-CF; its running flow is shown in Fig. 2.

Fig. 2 Algorithm flow chart of this research

First, determine whether the input video frame stream is the first frame. If it is the first frame, initialize the key point information and the target tracking frame,
and initialize the state of the KCF tracker; the KCF tracker is then used to predict the position of the tracking box and the key points of the human body. Next, compare the last frame with the current prediction box and determine whether the predicted position of the tracking box is stable. The tracking-box prediction loss function is defined as the smooth-L1 loss

loss(x, y) = (1/n) Σ_{i=1}^{n} l_i,  where  l_i = 0.5(y_i − f(x_i))²  if |y_i − f(x_i)| < 1,  and  l_i = |y_i − f(x_i)| − 0.5  otherwise   (8)
Here, y_i is the true labelled tracking-box position and f(x_i) is the predicted box position; the loss value is computed from the Euclidean distance between the center points of the boxes. The loss threshold is set to 0.4: when the loss value is less than 0.4, the predicted tracking box is judged stable, otherwise unstable. If the tracking box is judged stable, the tracking box and human key point information continue to be updated, and the state of the HSKP-CF tracker is optimized and updated online. If the position prediction is not stable, the key points are used to judge the stability of the predicted position; when that judgment is stable, the key point positions are updated and used to update the tracking-box position. The new key point positions are predicted from the key points' optical-flow information, and the tracking-box position is updated by the cyclic-shift method.
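The loss of Eq. (8) is the standard smooth-L1 loss; a direct sketch (the function name is illustrative):

```python
import numpy as np

def tracking_box_loss(y_true, y_pred):
    """Smooth-L1 loss of Eq. (8) between labelled and predicted
    tracking-box positions (e.g. centre-point coordinates)."""
    d = np.abs(np.asarray(y_true, dtype=float) - np.asarray(y_pred, dtype=float))
    # quadratic inside |d| < 1, linear outside; the two branches meet at d = 1
    per_elem = np.where(d < 1.0, 0.5 * d ** 2, d - 0.5)
    return float(per_elem.mean())
```

The quadratic branch keeps gradients small near the optimum, while the linear branch limits the influence of large prediction errors.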
4 Experiment and Evaluation Results

4.1 Data Preparation

The open dataset OTB100 and a self-built dataset are used in this study. OTB100 is a dedicated target-tracking dataset; it consists of 25% gray-scale data and 75% color data and covers 11 attributes such as illumination change, occlusion, motion blur and fast motion. Self-built data acquisition was carried out with the mobile robot platform in two typical environments, indoor and outdoor. The self-built test video totals 6 min, including 4 min of outdoor video and 2 min of indoor video. Only one tracking target is calibrated in each video; the videos are then split into frames, yielding a total of 6770 frames of image data. For each image, the target tracking frame is labeled with a GT-Box, and the coordinates of the four vertices of the tracking frame are taken from the label file for subsequent experimental evaluation.
4.2 Lab Environment

The laptop used in this experiment has the following configuration: operating system Windows 11; CPU Intel Core i9-12900HK with 32 GB of memory; GPU NVIDIA A2000 with 12 GB of memory. Python 3.9 is used for algorithm implementation and data evaluation. The industrial computer on the mobile robot platform is an NVIDIA Jetson Nano running Ubuntu 18.04.
4.3 Experimental Results

Comparative Experiment of Feature Fusion. In this study, the fast-motion and occlusion attributes of the OTB100 dataset are selected for the feature fusion experiment, as shown in Fig. 3; Figs. 3(a) and (b) show the performance of the three feature extraction methods under fast motion and occlusion, respectively. The horizontal coordinate is the overlap-rate threshold, where the overlap rate is computed as the intersection of the predicted b-box and the ground truth divided by their union. The vertical coordinate is the tracking success rate: for each overlap-rate threshold, the ratio of successful frames to total frames in the sequence gives the success rate at that threshold, and connecting these points forms the success plot. It is found that the KCF tracker based on the HC feature performs better than those based on the other two features when the target moves fast or is occluded. Although feature fusion increases the complexity of the algorithm, the multichannel computation can be simplified because of the property of the circulant matrix in the Fourier
Fig. 3 KCF algorithm with different feature extraction methods on OTB100 dataset
domain, so the performance and robustness of the improved algorithm are guaranteed while it still runs at high speed.

Algorithm Precision Comparison. In this research, a representative outdoor complex-environment test video is first selected for data analysis; a total of 3000 image frames are intercepted, as shown in Fig. 4. Figure 4(a) analyzes the IoU value during tracking. IoU relates the labeled target box to the predicted target box: it reflects the quality of the prediction through the IoU of the prediction box and the ground-truth box, and equals the intersection of the two regions divided by their union. The higher the value, the higher the box overlap; the maximum is 1 and the minimum is 0, and a value of 0 means the target is lost. Generally, IoU > 0.5 is considered a good prediction. Figure 4(b) records the distance between the center point of the labeled target box and that of the tracked box; the smaller, the better. For the frame size used in the test video, a value below 600 is best, and the larger the picture, the larger the tolerance threshold; when the value exceeds 600, the tracked target can be considered lost. From the indicators in Fig. 4 it can be seen that the traditional KCF tracker does not perform well in complex outdoor environments: the target is lost for long stretches, and the predicted tracking box stays within the ideal threshold of the label box only for short periods. The HSKP-CF algorithm, in contrast, performs excellently: the center-distance index of the tracking box always stays within a reasonable threshold, and the target is almost never lost in the outdoor environment, showing strong robustness.
Meanwhile, the two algorithms are applied to the indoor and outdoor test videos, and the results are shown in Table 1.
Fig. 4 Comparison of tracking metrics
Table 1 Comparison of tracking accuracy

Algorithm   Scene     Accuracy (%)
KCF         Indoor    87.6
KCF         Outdoor   47.2
HSKP-CF     Indoor    96.5
HSKP-CF     Outdoor   93.6
For the accuracy calculation of the two algorithms, this article defines the formula as

P_total = N_valid / N_total   (9)
Here, N_valid is the number of frames within the valid range (IoU ∈ [0.5, 1.0]) and N_total is the total number of frames in the test video. As Table 1 shows, the indoor tracking accuracy of the HSKP-CF tracker is 8.9% higher than that of the traditional KCF algorithm, while in the complex outdoor environment the tracking accuracy of HSKP-CF is 46.4% higher. Although the overall accuracy of HSKP-CF is above that of KCF everywhere, its advantage is most obvious in the complex environment.
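The IoU computation and the accuracy of Eq. (9) can be sketched as follows; the box format (x1, y1, x2, y2) and the function names are illustrative assumptions:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def tracking_accuracy(gt_boxes, pred_boxes, thr=0.5):
    """Eq. (9): P_total = N_valid / N_total, a frame being valid
    when its IoU lies in [thr, 1.0]."""
    valid = sum(1 for g, p in zip(gt_boxes, pred_boxes) if iou(g, p) >= thr)
    return valid / len(gt_boxes)
```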
5 Conclusion

In this paper, a correlation filtering algorithm that integrates Human Skeleton Key Point information (HSKP-CF) is proposed. First, the HOG and CN features in the correlation filtering algorithm are combined to improve the tracker under difficult conditions. Second, the human-body key point information is fused with the CF tracker: when the CF tracker loses the target, the HSKP-CF tracker can capture the human posture and combine multiple points to lock the target, while the CF tracker uses the key points to update the features of the tracked target to prevent loss; the two form a complementary filtering mechanism. Real-scene datasets of indoor and outdoor environments are used for the experiments in this paper. The results show that the total accuracy of HSKP-CF far exceeds the traditional KCF algorithm in both indoor and outdoor environments; it performs particularly well outdoors, with a tracking accuracy as high as 93.6%, demonstrating the high accuracy and stability of the method in real scenes. The human target tracking system in this paper operates well in complex outdoor environments, enabling mobile following robots to provide following services in various places. However, considering that the walking speed and appearance
of the target may change significantly during following, the algorithm still needs real-time improvements to overcome these challenges. Future work will focus on improving the real-time performance and accuracy of the algorithm and on deploying it stably on embedded devices to cope with complex, changeable environments.
References

1. Yong, H., Kailiang, Z., Xu, L.: Boundary detection for corn combine harvester's auto-follow row system based on laser radar. In: ASABE 2018 Annual International Meeting (2018)
2. Jung, E.J., Lee, J.H., Yi, B.J., et al.: Development of a laser-range-finder-based human tracking and control algorithm for a marathoner service robot. IEEE/ASME Trans. Mechatron. 19(6), 1963–1976 (2014)
3. Kasai, Y., Hiroi, Y., Miyawaki, K., et al.: Development of a mobile robot that plays tag with touch-and-away behavior using a laser range finder. Appl. Sci. 11(16), 7522 (2021)
4. Wu, C., Tao, B., Wu, H., et al.: A UHF RFID-based dynamic object following method for a mobile robot using phase difference information. IEEE Trans. Instrum. Meas. 70, 1–11 (2021)
5. González, J., Blanco, J.L., Galindo, C., et al.: Mobile robot localization based on ultra-wideband ranging: a particle filter approach. Robot. Auton. Syst. 57(5), 496–507 (2009)
6. Pradeep, B.V., Rahul, E., Bhavani, R.R.: Follow me robot using bluetooth-based position estimation. In: 2017 International Conference on Advances in Computing, Communications and Informatics (ICACCI), pp. 584–589. IEEE (2017)
7. Tsai, T.H., Yao, C.H.: A robust tracking algorithm for a human-following mobile robot. IET Image Proc. 15, 786–796 (2021)
8. Pang, L., Zhang, L., Yu, Y., et al.: A human-following approach using binocular camera. In: 2017 IEEE International Conference on Mechatronics and Automation (ICMA), pp. 1487–1492. IEEE (2017)
9. Mingyi, Z., Xilong, L.: Vision-based target-following guider for mobile robot. IEEE Trans. Industr. Electron. (2019)
10. Kalal, Z., Mikolajczyk, K., Matas, J.: Forward-backward error: automatic detection of tracking failures. In: 2010 20th International Conference on Pattern Recognition, pp. 2756–2759 (2010). https://doi.org/10.1109/ICPR.2010.675
11. Dardanelli, A., Corbetta, S., Boniolo, I., Savaresi, S.M., Bittanti, S.: Model-based Kalman filtering approaches for frequency tracking. IFAC Proc., 37–42 (2010)
12. Kung, S.: Kernel Methods and Machine Learning (2014)
13. Zhang, Y., Wang, T., Liu, K., Zhang, B., Chen, L., et al.: Recent advances of single-object tracking methods: a brief survey. Neurocomputing 455, 1–11 (2021)
14. Henriques, J.F., Caseiro, R., Martins, P., et al.: High-speed tracking with kernelized correlation filters. IEEE Trans. Pattern Anal. Mach. Intell. 37(3), 583–596 (2014)
15. Zhe, C., Tomas, S., Shih-En, W., Yaser, S.: Realtime multi-person 2D pose estimation using part affinity fields. In: Computer Vision and Pattern Recognition (CVPR) (2017)
Application of Fuzzy Fractional-Order PID Control for Single-Degree-of-Freedom Maglev Systems Tengfei Liu, Qilong Jiang, Yu Luo, and Tong Liu
Abstract For the problems of unsmooth track and large load perturbation that may lead to system instability, this paper establishes a magnetic levitation control model based on fuzzy fractional-order PID (FFOPID) control, applies it successfully to a magnetic levitation system for physical levitation, and verifies the superiority of FFOPID control. The FFOPID approach extends the integral and differential terms of the traditional integer-order PID into the fractional domain, improving flexibility and the stability domain, and then tunes the fractional-order PID (FOPID) parameters k_p, k_i, k_d, λ and μ online by fuzzy inference to improve the dynamic performance of the single-point maglev system. Simulink simulations and experiments show that the overshoot of FFOPID control under a +5 mm shock disturbance is 39.4%, while that of PID control is 47.7%; FFOPID control therefore has better anti-disturbance ability than traditional PID control and can still maintain a small regulation error under a large load disturbance. The average dynamic response time of FFOPID control is 0.58 s, versus 2.28 s for PID control, so FFOPID control adapts better to an unsmooth track and has better dynamic performance.

Keywords Fuzzy fractional order PID · Magnetic levitation system · Parameter adjustment · Simulink
1 Introduction

Magnetic levitation is successfully used in maglev trains due to the advantages of low noise and no frictional resistance [1]. In order to ensure the stability of maglev trains under different operating conditions, the design and optimization of levitation control algorithms have been a hot research topic for scholars at home and

T. Liu (B) · Q. Jiang · Y. Luo · T. Liu
Southwest Jiaotong University, Chengdu 611756, China
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
Y. S. Shmaliy and A. Nayyar (eds.), 7th International Conference on Computing, Control and Industrial Engineering (CCIE 2023), Lecture Notes in Electrical Engineering 1047, https://doi.org/10.1007/978-981-99-2730-2_11
T. Liu et al.
abroad. Although traditional PID control can ensure the stability of the closed-loop magnetic levitation system, the dynamic quality of the system is quite sensitive to changes in the PID gains, so fast response and low overshoot cannot both be achieved optimally at the same time. In practical maglev applications, the control algorithm is expected to offer a large and flexible stability interval, but traditional PID cannot satisfy maglev requirements such as disturbance rejection and stability well. In recent years, scholars have proposed new algorithms for maglev control systems, such as fuzzy PID control, active disturbance rejection control and sliding mode control, which have achieved good results to some extent [2–4]. However, fuzzy control is limited by the choice of variables and control rules and can show poor stability [2]; active disturbance rejection control offers good stability and anti-interference ability [3], but its parameters are difficult to tune; and sliding mode control, despite requiring no online identification and responding fast, uses a discontinuous controller that is prone to chattering [4]. In recent years, with the development of fractional-order theory, fractional-order PID control algorithms have been widely used in magnetic bearings, automation control, computer measurement and other fields, achieving good control effects on practical problems [5–7].
Zhang Bangchu applied the fractional-order PID controller to flight control and proved through experiments that fractional-order PID control is robust and little affected by external disturbances [7]; Xue Dingyu showed that fractional-order calculus models complex, variable systems more accurately than integer-order calculus and demonstrated the superior performance of fractional-order control with examples [8]; Cao Yu applied fractional-order PID to a heavy-duty maglev platform and achieved good dynamic performance and disturbance rejection in simulation [9]. Although the fractional-order PID controller enlarges the regulation range of the maglev system, the introduction of λ and μ also makes parameter tuning harder. In this paper, the FFOPID control algorithm is applied to the single-point magnetic levitation system to solve the instability caused by unsmooth track and large load perturbation and to realize stable control of the levitation system. The successful application of the FFOPID control algorithm improves both the dynamic performance of the levitation train under different working conditions and the adaptivity of the parameters. Fuzzy fractional-order PID combines the advantages of the fuzzy algorithm and the fractional-order PID algorithm, giving a larger stability domain, faster dynamic response and higher stability than the traditional PID control algorithm. Simulations and experiments with load disturbance and track unevenness prove that FFOPID control gives the maglev system strong anti-disturbance ability and fast dynamic response.
Application of Fuzzy Fractional-Order PID Control …
2 Mathematical Model of Magnetic Levitation Control

For the purpose of analysis, a single-solenoid mechanical structure (Fig. 1) is used as an example. In Fig. 1, h(t) is the distance between the track and the ground, δ(t) is the air gap between the pole and the track, z(t) is the distance between the pole and the ground, F(i, δ) is the electromagnetic force, mg is the weight of the electromagnet, f_d is the external disturbance, U(t) is the loop voltage, and i(t) is the loop current. The mathematical model of the maglev system can be described by three equations covering electromagnetism, kinematics and the circuit:

F(i, δ) = μ0 N² A i²(t) / (4 δ²(t))
m δ̈(t) = mg − F(i, δ) + f_d
U = R i(t) + (μ0 N² A)/(2 δ(t)) · di/dt − (μ0 N² A i(t))/(2 δ²(t)) · dδ/dt    (1)
In the above equations, μ0 is the air permeability, A is the magnetic pole area, and R is the coil resistance. With the voltage as the system input and [δ, δ̇, i] chosen as the state vector, and neglecting external disturbances, the open-loop transfer function of the single-solenoid suspension system is obtained as [10]:

Δδ(s)/ΔU(s) = (−K_i/(m L0)) / (s³ + (R/L0) s² + K_δ R/(m L0))    (2)

where K_δ = −μ0 N² A i0²/(2δ0³) is the air-gap stiffness factor, K_i = μ0 N² A i0/(2δ0²) is the current stiffness factor, and L0 = μ0 N² A/(2δ0) is the coil inductance at the balance position. The characteristic equation s³ + (R/L0) s² + K_δ R/(m L0) = 0 is missing its first-order term, so by the Routh stability criterion the open-loop system is unstable, and feedback must be introduced to form a stable closed loop.

Fig. 1 Single electromagnet suspension structure
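The open-loop instability noted above can be illustrated numerically. The sketch below uses assumed parameter values (not taken from the paper) to compute K_δ, K_i and L0 from the equilibrium force balance and to check that the characteristic polynomial of Eq. (2) has a right-half-plane root:

```python
import numpy as np

# Illustrative single-electromagnet maglev parameters (assumed values).
mu0 = 4e-7 * np.pi        # air permeability, H/m
N, A = 450, 0.025         # coil turns and pole area, m^2 (assumed)
m, g, R = 500.0, 9.8, 1.2 # levitated mass (kg), gravity, coil resistance (ohm)
delta0 = 0.01             # equilibrium air gap, m

# Equilibrium current from F(i0, delta0) = m*g, i.e. mu0*N^2*A*i0^2/(4*delta0^2) = m*g
i0 = 2 * delta0 * np.sqrt(m * g / (mu0 * N**2 * A))

K_delta = -mu0 * N**2 * A * i0**2 / (2 * delta0**3)  # air-gap stiffness factor
K_i = mu0 * N**2 * A * i0 / (2 * delta0**2)          # current stiffness factor
L0 = mu0 * N**2 * A / (2 * delta0)                   # inductance at equilibrium

# Characteristic polynomial s^3 + (R/L0) s^2 + K_delta*R/(m*L0): the s^1 term
# is missing and the constant term is negative, so a positive real root exists.
roots = np.roots([1.0, R / L0, 0.0, K_delta * R / (m * L0)])
print(roots)
```

Since K_δ < 0, the constant coefficient is negative and the polynomial is guaranteed one positive real root, confirming the open-loop instability regardless of the particular parameter values chosen.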
T. Liu et al.
3 Design of FFOPID Controller

3.1 FOPID Controller

Fractional-order calculus is an extension of integer-order calculus, and the definition given by Grünwald–Letnikov (GL) is the most widely used:

aD_t^α f(t) = lim_{h→0} h^(−α) Σ_{k=0}^{[(t−a)/h]} (−1)^k [Γ(α+1)/(Γ(k+1) Γ(α−k+1))] f(t − kh)    (3)

In this equation, a and t are the lower and upper bounds respectively, α is the order of the fractional calculus, h is the calculation step, Γ(·) is the Euler Gamma function, and aD_t^α is the fractional-order calculus operator, defined as:

aD_t^α = d^α/dt^α for R(α) > 0;  1 for R(α) = 0;  ∫_a^t (dτ)^(−α) for R(α) < 0    (4)
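The GL definition in Eq. (3) lends itself directly to numerical evaluation. The sketch below (the test function, order and step size are chosen for illustration only) evaluates the truncated GL sum with the standard coefficient recurrence and compares it with the known half-derivative of t²:

```python
import math

def gl_fractional_derivative(f, alpha, t, h=1e-4, a=0.0):
    """Truncated Grunwald-Letnikov approximation of aD_t^alpha f with step h."""
    n = int((t - a) / h)
    w = 1.0          # w_0 = (-1)^0 * binom(alpha, 0) = 1
    acc = f(t)
    for k in range(1, n + 1):
        w *= 1.0 - (alpha + 1.0) / k   # recurrence for (-1)^k * binom(alpha, k)
        acc += w * f(t - k * h)
    return acc / h**alpha

# Known closed form: D^0.5 of t^2 equals 2 t^1.5 / Gamma(2.5)
approx = gl_fractional_derivative(lambda x: x * x, 0.5, 1.0)
exact = 2.0 / math.gamma(2.5)
print(approx, exact)
```

The recurrence w_k = w_{k−1}(1 − (α+1)/k) generates the signed generalized binomial coefficients without evaluating Gamma functions for each term, which keeps the sum numerically stable for small h.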
Y. Long et al.
5 Steady-State Accuracy and Flexible Accessory Vibration Analysis

The stability criterion of the system is given above, but the vibration of the flexible attachment was ignored there: it appears only as a disturbance term in the steady-state stage and mainly affects the steady-state accuracy of the system, so it needs to be analyzed. Simplifying Eq. (10) yields

V̇3 = (Δψ)^T f(q̈, q̇) − ψ^T Ω ψ ≤ λ_M(Δ) λ_M(f(q̈, q̇)) ||ψ|| − λ_m(Ω) ||ψ||²    (14)

where Δ = diag(l2, 1, 1), λ_m(Ω) is the smallest singular value of the matrix Ω, and λ_M(Δ) = max(l2, 1). Furthermore,

f(q̈, q̇) = M12 q̈ + D22 q̇,  λ_M(f(q̈, q̇)) ≤ λ_M(M12) ||q̈|| + λ_M(D22) ||q̇||    (15)
where q̈ and q̇ are bounded by the upper bounds of the second and first derivatives of the flexible attachment vibration. Therefore, when ||ψ|| ≥ λ_M(Δ) λ_M(f(q̈, q̇)) / λ_m(Ω), we must have V̇3 ≤ 0.

Having established the steady-state accuracy of the system, the vibration of the flexible attachment must be analyzed to ensure that the excitation introduced by the controller does not make it diverge. Simplifying the flexible-attachment part of the dynamic model gives

M22 q̈ + D22 q̇ + K q = −G2 − M21 θ̈ − D11 θ̇    (16)

The actual excitation source of the system consists of the three terms on the right side of the equation, and the system is a typical Lagrange system. Let

y = −G2 − M21 θ̈ − D11 θ̇,  X = [q^T  q̇^T]^T    (17)

Using Eq. (17), the flexible subsystem can be written in state-space form as

Ẋ = A X + U,  A = [[0, I2], [−M22^(−1) K, −M22^(−1) D22]],  U = [0; M22^(−1) y]    (18)
The Lyapunov function is selected as

V_q = (1/2) X^T P X    (19)
Robust Variable Structure PID Controller for Two-Joint Flexible …
where P is a positive-definite matrix satisfying the Lyapunov equation

A^T P + P A = −Q    (20)

for some positive-definite Q. Ignoring the excitation source term U in Eq. (18) and differentiating Eq. (19) yields

V̇q = −(1/2) X^T Q X ≤ 0    (21)
It follows that the poles of the system lie in the left half-plane and the unforced system is stable, so the response of the flexible subsystem converges toward the excitation source, i.e., it tracks the excitation signal. Since the three excitation sources in Eq. (16) are bounded, the vibration of the flexible attachment is also bounded and does not diverge.
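The Lyapunov argument of Eqs. (18)–(21) can be checked numerically on a toy model. The modal mass, damping and stiffness matrices below are assumed values for illustration, not the paper's identified parameters:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Toy two-mode flexible-attachment model (assumed numerical values); the
# structure matches Eq. (18): A = [[0, I], [-M22^-1 K, -M22^-1 D22]].
M22 = np.array([[2.0, 0.1], [0.1, 1.5]])    # modal mass (assumed)
D22 = np.array([[0.4, 0.0], [0.0, 0.3]])    # modal damping (assumed)
K   = np.array([[50.0, 0.0], [0.0, 80.0]])  # modal stiffness (assumed)

Minv = np.linalg.inv(M22)
A = np.block([[np.zeros((2, 2)), np.eye(2)],
              [-Minv @ K,        -Minv @ D22]])

# All poles in the open left half-plane: the unforced flexible dynamics decay.
print(np.linalg.eigvals(A))

# Solve A^T P + P A = -Q for Q = I (Eq. 20); P should be positive definite.
Q = np.eye(4)
P = solve_continuous_lyapunov(A.T, -Q)
print(np.linalg.eigvalsh((P + P.T) / 2))   # all eigenvalues of P positive
```

Because M22, D22 and K are all positive definite, the unforced second-order system is asymptotically stable, so A is Hurwitz and the Lyapunov equation has a unique positive-definite solution P, exactly as Eqs. (19)–(21) require.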
6 Simulation

In this paper, the multi-joint flexible robotic arm of the research group is used for physical simulation and verification; the arm is shown in Fig. 1 below. During the verification, the remaining joints are locked and only the front two joints move. A classical PID controller is used for comparison to illustrate the effectiveness of the controller proposed in this paper:

u = −k_p (θ − θ_d) − k_d (θ̇ − θ̇_d) − k_I ∫ (θ − θ_d) dt,  k_d = 2, k_p = 0.5, k_I = 0.05    (22)
Simulation is performed for the following reference signals (case 1: step input; case 2: ramp input; case 3: sinusoidal input).

Fig. 1 A figure of flexible manipulator
Fig. 2 A figure of error angular velocity under step signal
case 1: θ_d = [π/2, π/3]^T
case 2: θ_d = [0.05t, −0.03t]^T
case 3: θ_d = [0.05 sin 0.1t, −0.03 sin 0.05t]^T    (23)
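As a simple illustration of the comparison law in Eq. (22), the sketch below applies those gains to a toy unit-inertia joint (θ̈ = u, an assumption for illustration; the paper's full two-joint flexible dynamics are not reproduced here) under the case 1 step reference:

```python
import math

# Toy single-joint plant theta_ddot = u (unit inertia, assumed) driven by the
# classical PID comparison law of Eq. (22) with gains kp=0.5, kd=2, kI=0.05.
kp, kd, kI = 0.5, 2.0, 0.05
theta_d = math.pi / 2            # case 1: constant step reference
dt, steps = 0.01, 6000           # 60 s of forward-Euler simulation

theta = omega = integral = 0.0
for _ in range(steps):
    e = theta - theta_d
    integral += e * dt
    u = -kp * e - kd * omega - kI * integral   # Eq. (22), theta_d constant
    omega += u * dt
    theta += omega * dt

print(abs(theta - theta_d))   # residual tracking error after 60 s
```

With these gains the closed-loop characteristic polynomial s³ + 2s² + 0.5s + 0.05 is Routh-stable but has a slow dominant mode (real part about −0.14), which is consistent with the long convergence time reported below for the classical PID comparison.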
The physical simulation results of the control group are shown in Figs. 2, 3, 4, 5, 6 and 7 below. From these results it can be found that the classical PID controller achieves convergence of the system, but the convergence time is long: whatever the input signal, convergence requires about 1000

Fig. 3 A figure of error angle under step signal
Fig. 4 A figure of error angular velocity under slope signal
Fig. 5 A figure of error angle under slope signal
Fig. 6 A figure of error angular velocity under sine signal
Fig. 7 A figure of error angle under sine signal
control cycles (about 1 s). Furthermore, the system has steady-state errors: the tracking errors for the three reference input signals are about 0.01 rad, 0.01 rad and 0.02 rad (0.5°, 0.5°, 1°). Next, the simulation results of the control algorithm proposed in this paper are presented. The simulation parameters are set as follows:

k_d = 2, k_p = 0.5, k_I = 0.05, α = 0.2, β = 0.02    (24)
The simulation results for the three input signals are shown in Figs. 8, 9, 10, 11, 12 and 13. Based on these results: firstly, the variable structure PID control method proposed in this paper also achieves effective tracking for the three classical input signals, and the convergence time is reduced by more than 80% compared with the classical PID controller. Secondly, the steady-state tracking errors for the three input signals are about 0.005 rad, 0.005 rad and 0.009 rad (0.2°, 0.2°, 0.4°), so the steady-state accuracy is also improved over the classical PID controller.

Fig. 8 A figure of error angular velocity under step signal
Fig. 9 A figure of error angle under step signal
Fig. 10 A figure of error angular velocity under slope signal
Fig. 11 A figure of error angle under slope signal
Fig. 12 A figure of error angular velocity under sine signal
Fig. 13 A figure of error angle under sine signal
Table 1 Comparison between classical PID controller and variable structure PID controller

                       Classic PID                      Variable structure PID
                       Steps   Steady-state accuracy    Steps   Steady-state accuracy
Step signal            1000    0.5°                     150     0.2°
Ramp signal            1000    0.5°                     180     0.4°
Sinusoidal signal      1000    1.0°                     180     0.4°
In summary, the comparison between the classic PID controller and the variable structure PID controller for the three reference inputs is shown in Table 1. Under the same conditions, the convergence rate of the variable structure PID control is much higher than that of the classical PID control, and its steady-state accuracy is also better, which proves the effectiveness and superiority of the control method proposed in this paper.
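The "more than 80%" convergence-time claim can be checked directly from the step counts in Table 1:

```python
# Convergence-step data taken from Table 1.
classic = {"step": 1000, "ramp": 1000, "sine": 1000}
variable = {"step": 150, "ramp": 180, "sine": 180}

# Percentage reduction in convergence steps for each reference signal.
reduction = {k: (classic[k] - variable[k]) / classic[k] * 100 for k in classic}
print(reduction)   # 85% for the step input, 82% for ramp and sine
```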
7 Conclusion

Based on the classical sliding-mode PID controller, this paper proposes a variable structure PID controller that retains the simple structure, strong robustness and clear physical meaning of the classical PID control method while improving the convergence rate of the system through a constant-rate sliding surface, variable structure control parameters and an integral separation algorithm, thereby overcoming the slow convergence of the classical PID controller. In addition, the design and proof of the controller do not require precise system model parameters, so the controller is also robust to model uncertainty. It is worth noting that the methods used in this study all consider the most extreme cases, which means that the system performance is not fully exploited; this is what the authors will focus on in subsequent studies.
Methodology for Forecasting the Electricity Losses for Train Traction Sergey Ushakov , Mikhail Nikiforov , and Aleksandr Komyakov
Abstract The value of electricity losses in the traction network is one of the main indicators of railway transport energy efficiency. When forming the budget of railway companies that own the energy infrastructure of the traction power supply system, it becomes necessary to predict electricity losses for the medium and long term, considering the expected production, technological, climatic and other factors. In this paper the concept of predicting the value of electricity losses for train traction is proposed, and the factors influencing the technological and commercial components of losses are established. The most significant factors are the amount of transportation work, the average mass of freight and passenger trains, the average load on the axle of a freight car, the average train speed, the empty run of freight cars, the electric locomotives idling without a locomotive crew, the amount of regenerative energy, the traction network parameters, and the expected introduction of energy-saving technologies. It is proposed to determine the coefficients of factors' influence on the value of losses on the basis of simulation data with the subsequent formation of second-order regression models. A method for determining a reasonable level of electricity losses has been developed, considering the base and accounting period conditions. The proposed method has been verified on an existing railway section: the actual level of electricity losses amounted to 13.2% with a forecasted value of 13.3%. This indicates the high efficiency of the proposed method for predicting electricity losses.

Keywords Railway transport · Forecasting · Electric energy losses · Electric rolling stock
S. Ushakov · M. Nikiforov Research Institute for Energy Saving in Railway Transport, Omsk State Transport University, Omsk 644046, Russia A. Komyakov (B) Department of Theoretical Electrical Engineering, Omsk State Transport University, Omsk 644046, Russia e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 Y. S. Shmaliy and A. Nayyar (eds.), 7th International Conference on Computing, Control and Industrial Engineering (CCIE 2023), Lecture Notes in Electrical Engineering 1047, https://doi.org/10.1007/978-981-99-2730-2_15
S. Ushakov et al.
1 Introduction

One of the criteria for evaluating the efficiency of electricity use for train traction in Russia is the level of electricity losses in the traction power supply system. The losses level is defined as the difference between the volume of electricity supplied to the contact network through traction substations and the volume consumed by all consumers from the contact network according to the corresponding consumer metering devices. On DC sections, losses in the rectifier units of traction substations are also included. According to data from open sources, train traction losses amount to 8.6% of electricity consumption, or 3.87 billion kWh per year, and in recent years this indicator has trended upward. Electricity losses include technological and commercial components [1]. The technological losses level depends on the technical characteristics of the traction power supply system and its operating modes. The commercial component is influenced by the state of the electricity metering system at traction substations and on electric rolling stock, methodological errors, human and other factors. Given that electricity costs for train traction make up about 10% of the total costs of the transportation process, preparing the budget for coming periods requires assessing the forecast level of electricity losses, considering the expected production, technological, climatic and other factors. Studies by Russian [2–5] and foreign [6–10] scientists are devoted to assessing electricity losses in the traction power supply system and developing measures to reduce them. However, the problem of predicting losses has not been studied in depth before.
2 Materials and Methods Forecasting the electrical losses value consists in determining the expected losses value level for the forecast period, considering the dynamics of its change in the forecast period and the prevailing seasonal patterns, as well as considering the planned values of operational indicators, changes in the traction power supply system parameters and plans for the introduction of energy-saving technical means and technologies that affect the change in losses. Determination of a reasonable level of losses consists in adjusting the basic forecast value, considering the actual values of influencing factors, implementing plans for the current and major repairs of the traction power supply system and the track, implementing investment projects for the reconstruction, modernization, electrification of existing and new sections of railways, and also considering the introduction of energy-saving technical means and technologies.
When predicting the level of electricity losses for train traction, the following principles are used:

1. Forecasting and determination of a reasonable level of losses is carried out for a month, a quarter and a year, separately for the DC and AC polygons of the corresponding railway.
2. The change in the parameters of the traction power supply system of the accounting area in the forecast period is determined on the basis of plans for the current and major repairs of the traction power supply system and the track, and investment projects for the reconstruction, modernization and electrification of existing and new railway sections.
3. Loss forecasting is carried out in two stages:

1) Determination of the basic forecast losses level, based on the results of forecasting: the electricity supplied to the contact network from traction substations within the boundaries of the accounting area; the electricity consumption according to the metering devices installed on rolling stock circulating within the accounting area boundaries whose depots are located inside the accounting area; the electricity consumption according to the metering devices installed on rolling stock circulating within the accounting area boundaries whose depots are located outside the accounting area; and the electricity consumption by stationary consumers from the contact network within the boundaries of the accounting area. At the same stage the basic values of the influencing factors corresponding to the basic predicted losses level are determined. The basic values of the influencing factors and the predicted losses level are determined by the method of time series extrapolation, which extends the trends established in the past into the future period in accordance with the procedure for forecasting the operational indicators based on time trends.

2) Determination of the predicted losses level by adjusting its basic predicted level, considering the planned values of operational indicators, changes in the traction power supply system parameters and plans for the introduction of energy-saving technical means and technologies that affect the losses level. It should be taken into account that the corrective value of the losses level, accounting for the difference between the planned values of operational indicators for the forecast period and the base ones, is determined using the coefficients of influence of the relevant factors on the losses level, obtained from a regression analysis of a sample formed from the results of simulation modeling of the traction power supply system, as well as a regression analysis of statistical reporting data on losses levels and influencing factors for previous periods of time.
The corrective value of the losses level, considering the change in the parameters of the traction power supply system in the forecast period, is determined using well-known formulas for calculating electrical energy losses in traction power supply system devices. The corrective value of the losses level, considering the introduction of energy-saving technical means and technologies in the traction power supply system in the forecast period, is taken in accordance with the feasibility study for their implementation.

The planned operational indicators include: the value of transportation work made by the rolling stock circulating within the accounting area boundaries; the average weight of freight and passenger trains; the average load on the axle of a freight car; the average train technical speed in freight and passenger traffic; the sectional train speed coefficient in freight traffic; the share of empty run of freight cars; the electric locomotives' idling time without a locomotive crew; the electric rolling stock mileage when following the reserve; the specific share of regenerative energy; and the share of electric rolling stock work in freight traffic.

Forecasting the losses level is carried out on the condition that, from the beginning of the forecast period, the following has passed: when forecasting for a month, at least 15 days; for a quarter, at least two months; for the current year, at least six months. The calculation of the forecasted losses level for the year, in order to determine the budgetary parameters of electricity consumption for train traction for the planned year, is carried out on the basis of annual data for the last 3–5 years.

The absolute value of the forecasted losses level within the boundaries of the accounting railway area, kWh, is determined by the formula:

W^forecasted = W^basic + W^corr.plan    (1)

where W^basic is the absolute value of the basic forecasted losses level within the boundaries of the accounting railway area, kWh, and W^corr.plan is the calculated corrective losses value, considering the difference of the planned operational indicators and traction power supply system parameters in the forecast period from their basic forecasted values, kWh. The absolute value of the basic forecasted losses level within the boundaries of the accounting railway area, kWh, is determined by the formula:

W^basic = W_TS^basic − W_RS^basic + Σ_i W_RSia^basic − Σ_j W_RSaj^basic − W_stat^basic    (2)
where W_TS^basic is the basic forecasted volume of electricity supplied from traction substations to the contact network within the boundaries of the accounting railway area, kWh (on DC sections, here and below, this volume also includes losses in the rectifier units of traction substations); W_RS^basic is the basic forecasted volume of electricity consumed by electric rolling stock within the boundaries of the work areas of locomotive crews belonging to depots located within the boundaries of the accounting railway area, kWh; W_stat^basic is the basic forecasted volume of electricity consumption by stationary consumers from the contact network within the boundaries of the accounting area, kWh; W_RSia^basic is the basic forecasted volume of electricity consumed by electric rolling stock within the boundaries of the work areas of locomotive crews belonging to depots located outside the boundaries of the accounting railway area but circulating inside this area, kWh; W_RSaj^basic is the basic forecasted volume of electricity consumed by electric rolling stock within the boundaries of the work areas of locomotive crews belonging to depots located within the boundaries of the accounting railway area but circulating outside this area, kWh.

The forecast model used to determine the basic forecast values specified above is a time trend that reflects the dynamics of the corresponding operating indicator over units of time. If the duration of the forecast period does not exceed one month, the unit of time is a day; in other cases, a month. The time trend is constructed from the initial data on the change of the forecasted indicator over the base period. The previous calendar period similar to the forecast one is chosen as the base period. To increase the reliability of forecasting, the base period should include several past periods of the same type as the forecast one (for example, when forecasting for January, the base period includes data for January of several previous years). The recommended number of same-type periods is three to five. The data for the year preceding the forecast period must be included in the base period.
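The base-period trend extrapolation described above can be sketched as a first-order fit over several same-type periods. The January loss levels below are invented for illustration:

```python
import numpy as np

# Hypothetical loss levels (%) observed in January of four past years,
# used to extrapolate a basic forecast for the next January.
years = np.array([2019, 2020, 2021, 2022])
losses = np.array([12.1, 12.4, 12.9, 13.1])   # observed loss level, %

slope, intercept = np.polyfit(years, losses, 1)  # first-order time trend
forecast_2023 = slope * 2023 + intercept
print(round(forecast_2023, 2))   # -> 13.5
```

In practice the paper recommends three to five same-type periods and always including the year immediately preceding the forecast period, which is what the four-point sample above imitates.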
The calculated corrective value of the losses level, considering the difference of the planned operational indicators and traction power supply system parameters in the forecast period from the basic forecast values, kWh, is determined by the formula:

W^corr.plan = W_oper.ind^corr.plan + W_TPSS^corr.plan − W_ES^corr.plan    (3)

where W_oper.ind^corr.plan is the corrective value of the losses level, considering the difference of the planned operational indicators in the forecast period from the basic forecast values, kWh; W_TPSS^corr.plan is the corrective value of the losses level, considering the difference of the planned parameters of the traction power supply system in the forecast period from the basic forecast parameters corresponding to its normal operation, kWh; W_ES^corr.plan is the corrective value of losses, considering the reduction of electricity losses for traction due to the introduction of energy-saving technical means and technologies in the traction power supply system in the forecast period, kWh. The corrective value of the losses level, considering the difference of the planned operational indicators in the forecast period from the basic forecast values, kWh, is determined by the formula:
W_oper.ind^corr.plan = A_total^plan · Σ_j [ β2j · ((X_j^plan)² − (X_j^basic)²) + β1j · (X_j^plan − X_j^basic) ]    (4)
indicator on the specific value of losses, kWh per 10,000 gross ton-kilometers; Atotal – the planned total volume of train work in the forecast period within the boundaries of the work areas of the locomotive crews belonging to the depots located within the boundaries of the accounting railway area, 10,000 gross ton-kilometers. Some coefficients of factors influence on the losses value are determined by the constructing regression dependencies based on the use of simulation modeling of power consumption for train traction under the conditions of changing influencing factors on conditional sections of typical track profiles. The coefficients of factors influence that cannot be modeled, as well as factors that affect not only the technical, but also the commercial component of losses, are determined on the basis of reporting data using methods of mathematical statistics. Corrective value of the losses level, considering the difference between the planned parameters of the traction power supply system in the forecast period from the basic forecast parameters corresponding to the normal operation of the traction power supply system, kWh, is determined by the formula: corr.plan
W_TPSS^corr.plan = Σ_k (W_TPSSk^plan − W_TPSSk^basic)    (5)
where W_TPSSk^plan is the absolute value of technical losses in the traction power supply system with the parameters corresponding to the plan for carrying out repair work on the k-th railway section, kWh; W_TPSSk^basic is the absolute value of technical electricity losses corresponding to the normal (standard) operating parameters of the traction power supply system at the k-th section of the planned work, in the period of the previous calendar year similar to the forecast one, kWh. The values W_TPSSk^plan and W_TPSSk^basic are determined by the same formulas, using respectively the normal (regular) parameters of traction power supply system operation and the parameters of the planned current maintenance and repair work. The corrective value of losses, considering the reduction of electricity losses for traction due to the introduction of energy-saving technical means and technologies in the traction power supply system in the forecast period, kWh, is determined by the formula:

W_ES^corr.plan = Σ_i W_ESi^plan    (6)
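Once the influence coefficients β1j and β2j are known, the operational-indicator correction of Eq. (4) reduces to simple arithmetic. All indicator values and coefficients below are invented for illustration:

```python
# Sketch of the operational-indicator correction of Eq. (4).
A_plan = 250.0   # planned train work, 10^4 gross ton-km (assumed)

# (basic forecast value, planned value, beta1, beta2) per indicator j;
# the indicators and coefficient magnitudes are invented examples.
indicators = [
    (4000.0, 4100.0, 0.002, 1e-6),   # e.g. average train mass, t
    (21.0,   22.0,   0.15,  0.001),  # e.g. axle load of a freight car, t
]

W_corr = A_plan * sum(
    b2 * (x_plan**2 - x_basic**2) + b1 * (x_plan - x_basic)
    for x_basic, x_plan, b1, b2 in indicators
)
print(W_corr)   # corrective losses value, kWh
```

Note that each indicator contributes both a linear and a quadratic term, reflecting the second-order regression models mentioned in the abstract.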
where W_ESi^plan is the reduction of electricity losses from the introduction of the i-th energy-saving technical means or technology (or i-th group of them), taken in accordance with the feasibility study, kWh.

The relative value of the forecasted losses level within the boundaries of the work areas of locomotive crews, %, is determined by the formula:

δ^forecasted = W^forecasted / W_TS^forecasted · 100    (7)

where W_TS^forecasted is the forecasted volume of electricity supplied from traction substations to the contact network within the boundaries of the accounting railway area, kWh. After the expiration of each subsequent unit of time of the forecast period, the forecast level of losses can be adjusted according to the prevailing influencing factors. To do this, the forecasting procedure is performed again, considering the longer elapsed part of the forecast period.
3 Results and Discussion

The above algorithm for predicting the electricity losses level for train traction takes into account the planned indicators of the transportation process and their mutual influence. However, after the end of the period for which the forecast was made, the actual level of losses may differ significantly from the forecast value. In this case, in order to identify unaccounted operational factors and assess their influence on the losses level, the absolute value of the reasonable losses level under the actual conditions is calculated and compared with the actual level of losses in the forecast period. The absolute value of the reasonable losses level, kWh, is determined by the formula:

W^reasonable = W^basic + W^corr.actual    (8)

where W^basic is the absolute value of the basic forecasted losses level within the boundaries of the accounting railway area, kWh; W^corr.actual is the calculated corrective value of the losses level, considering the difference of the actual values of operational indicators in the forecast period from the basic forecast values, the difference of the actual parameters of the traction power supply system from the basic ones corresponding to its normal operation, and the reduction of electricity losses due to the energy-saving technical means and technologies actually introduced in the traction power supply system in the forecast period, kWh.
The corrective value W^corr.actual, kWh, is determined by the formula:

W^corr.actual = W_oper.ind^corr.actual + W_TPSS^corr.actual − W_ES^corr.actual    (9)

where W_oper.ind^corr.actual is the corrective value of the losses level, considering the difference of the actual values of operational indicators in the forecast period from the basic forecast values, kWh; W_TPSS^corr.actual is the corrective value of the losses level, considering the difference of the actual parameters of the traction power supply system from the basic ones corresponding to its normal operation, kWh; W_ES^corr.actual is the corrective value of the losses level, considering the reduction of electricity losses due to the energy-saving technical means and technologies actually introduced in the traction power supply system in the forecast period, kWh. The corrective value of the losses level, considering the difference of the actual values of operational indicators in the forecast period from the basic forecast values, kWh, is determined by the formula:
W_oper.ind^corr.actual = A_total^actual · Σ_j [ β_2j · ((X_j^actual)² − (X_j^basic)²) + β_1j · (X_j^actual − X_j^basic) ],   (10)

where X_j^actual – the actual value of the j-th indicator in the forecast period; X_j^basic – the basic forecast value of the j-th indicator; A_total^actual – actual total volume of train work in the forecast period within the boundaries of the work areas of the locomotive crews belonging to the depots located within the boundaries of the accounting railway area, 10,000 gross ton-kilometers.

The corrective value of the losses level, considering the deviation of the actual parameters of the traction power supply system from the basic ones corresponding to its normal operation, kWh, is determined by the formula:

W_TPSS^corr.actual = Σ_k ( W_TPSS,k^actual − W_TPSS,k^basic ),   (11)

where W_TPSS,k^actual – the absolute value of technical losses in the traction power supply system with the actual parameters at the onset of the forecast period, kWh.

The corrective value of the losses level, considering the reduction of electricity losses in the traction power supply system due to energy-saving technical means and technologies actually introduced in the forecast period, kWh, is determined by the formula:
Methodology for Forecasting the Electricity Losses for Train Traction

W_ES^corr.actual = Σ_i W_ES,i^actual,   (12)

where W_ES,i^actual – reduction of electricity losses from the i-th energy-saving technical means or technology (i-th group of technical means or technologies) actually introduced in the forecast period, taken in accordance with the feasibility study, kWh.

The relative value of the reasonable level of losses, %, is determined by the formula:
δ^reasonable = ( W^reasonable / W_TPSS^actual ) · 100,   (13)
where W_TPSS^actual – the actual volume of electricity supplied from traction substations to the contact network within the boundaries of the accounting railway area, kWh.

Consider the results of forecasting the level of electricity losses for a real section of a DC railway with a forecasting horizon of three months, two of which have already passed. The unit of time is a month; the same-type data set is a quarter. The base period includes three same-type periods: the first quarters of the previous three years. Based on the forecasting results, the expected level of electricity losses for train traction was 13.4%, while the actual value was 13.2%. After recalculation using the data on the actual values of operational indicators, the changes in the parameters of the traction power supply system, and the energy-saving measures implemented at the site, the reasonable level of electricity losses for train traction was 13.3%. A reasonable value close to the actual value indicates that almost all influencing factors were taken into account. A more significant difference between the relative value of the actual losses level and the relative value of the reasonable losses level indicates the presence of unaccounted factors; if this difference exceeds 1.0%, the influence of such factors is significant, and additional studies are required to identify its causes.
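The correction chain of Eqs. (9)–(13) can be sketched in Python. All function names and numeric values are illustrative, and the final step assumes, in line with the definitions of W_basic and W_corr.fact above, that the reasonable losses level is the basic forecast plus the net correction; this is a sketch, not the authors' implementation:

```python
def corr_oper_ind(a_total_actual, beta1, beta2, x_actual, x_basic):
    """Eq. (10): correction for deviations of the operational indicators X_j."""
    return a_total_actual * sum(
        b2 * (xa ** 2 - xb ** 2) + b1 * (xa - xb)
        for b1, b2, xa, xb in zip(beta1, beta2, x_actual, x_basic)
    )

def corr_tpss(w_tpss_actual, w_tpss_basic):
    """Eq. (11): correction for actual vs. basic TPSS parameters, summed over k."""
    return sum(a - b for a, b in zip(w_tpss_actual, w_tpss_basic))

def corr_es(w_es_actual):
    """Eq. (12): total reduction from energy-saving measures, summed over i."""
    return sum(w_es_actual)

def corr_total(w_oper, w_tpss, w_es):
    """Eq. (9): net corrective value of the losses level, kWh."""
    return w_oper + w_tpss - w_es

def reasonable_losses_percent(w_basic, w_corr, w_supplied):
    """Eq. (13), assuming W_reasonable = W_basic + W_corr.fact (illustrative)."""
    return (w_basic + w_corr) / w_supplied * 100.0
```

For instance, a basic forecast of 130 kWh, a net correction of 3 kWh, and 1000 kWh supplied give a reasonable level of 13.3%, the same kind of figure compared against the actual level in the worked example above.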
4 Conclusion Given the significant importance of electricity losses for train traction, the task of forecasting their level for future periods is relevant. To solve this problem, a concept is proposed for calculating electricity losses in the traction power supply system, considering the parameters and modes of operation of its equipment, energy and operational characteristics of the transportation process. A list of the most significant factors affecting the losses level was compiled and the coefficients of their influence were determined using simulation modeling and methods of mathematical statistics based on reporting data. The necessity of comparing the actual electricity losses level
with the reasonable level of losses calculated from the actual indicators is shown, in order to identify new, previously unaccounted factors affecting the level of losses.
References
1. Cheremisin, V.T., Nikiforov, M.M., Ushakov, S.Y.: Assessment of train traction electric energy losses. In: International Multi-Conference on Industrial Engineering and Modern Technologies, FarEastCon, Vladivostok, pp. 8602528 (2018)
2. Frantasov, D., Kudryashova, Y.: Information and measurement system for electric power losses accounting in railway transport. Transp. Res. Procedia 54, 552–558 (2021)
3. Zakaryukin, V., Kryukov, A., Cherepanov, A.: Intelligent traction power supply system. Adv. Intell. Syst. Comput. 692, 91–99 (2018)
4. Nezevak, V.: Limitation of power equipment traction substations overload due to the use of electricity storage systems. Transp. Res. Procedia 68, 318–324 (2023)
5. Cheremisin, V., Nikiforov, M., Kashtanov, A., Ushakov, S.: Intelligent automated system for the monitoring of railway areas with a low transport process energy efficiency. Adv. Intell. Syst. Comput. 692, 83–90 (2018)
6. Arboleya, P., Mayet, C., Mohamed, B., Aguado, J.A., De la Torre, S.: A review of railway feeding infrastructures: Mathematical models for planning and operation. eTransportation 5, 100063 (2020)
7. Soler-Nicolau, M., Mera, J.M., López, J., Cano-Moreno, J.D.: Improving power supply design for high speed lines and 2 × 25 systems using a genetic algorithm. Int. J. Electr. Power Energy Syst. 99, 309–322 (2018)
8. Das, A., McFarlane, A.: Non-linear dynamics of electric power losses, electricity consumption, and GDP in Jamaica. Energy Economics 84, 104530 (2019)
9. Pu, H., Cai, L., Song, T., Schonfeld, P., Hu, J.: Minimizing costs and carbon emissions in railway alignment optimization: a bi-objective model. Transp. Res. Part D: Transp. Environ. 116, 103615 (2023)
10. Arabahmadi, M., Banejad, M., Dastfan, A.: Hybrid compensation method for traction power quality compensators in electrified railway power supply system. Global Energy Interconnection 4(2), 158–168 (2021)
Study on a Denoising Algorithm of Improved Wavelet Threshold Yunhai Hou and Yu Ren
Abstract When a partial discharge signal is monitored online in real time, the signal collected by the receiver is frequently disturbed by surrounding clutter. Since it was proposed last century, the wavelet threshold denoising technique, with its excellent noise reduction performance, has gained widespread adoption in scientific research, high-end medical equipment, industrial manufacturing, and other fields as industrial informatization has advanced. The threshold functions proposed earliest are the hard threshold function and the soft threshold function. However, owing to limitations inherent in their formulations, these two threshold functions cannot achieve the ideal result on practical problems, and the reconstructed signal restores the original poorly. In view of this poor noise reduction effect, a novel wavelet threshold denoising formula is proposed in the present study. Matlab is used to run the simulation experiments, and several threshold algorithms are applied to eliminate the noise from the partial discharge data. Two indicators are employed to assess the effectiveness of noise reduction: signal-to-noise ratio (SNR) and mean square error (MSE). A comparison of the experimental findings reveals that the novel threshold function surpasses the conventional approaches.

Keywords Wavelet threshold denoising · Threshold · Threshold function · Signal-to-noise ratio · Mean square error
Y. Hou (B) · Y. Ren School of Electrical and Electronic Engineering, Changchun University, Jilin, China e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 Y. S. Shmaliy and A. Nayyar (eds.), 7th International Conference on Computing, Control and Industrial Engineering (CCIE 2023), Lecture Notes in Electrical Engineering 1047, https://doi.org/10.1007/978-981-99-2730-2_16
1 Preface Partial discharge in power equipment causes insulation degradation; severe discharge can break down the insulation layer, raising the temperature at the affected location and creating a fire hazard. In this context, it is necessary to test the insulation performance of power equipment, and real-time monitoring of partial discharge is a popular insulation diagnosis method [1]. Partial discharge detection methods mainly include pulse current, UHF, ultrasonic, and photometric methods [2]. Power equipment is usually located in environments with strong interference signals, which makes it challenging to precisely distinguish the original signal within the collected signal during partial discharge online monitoring. Interference signals usually contain white noise, periodic noise, and impulse interference, with white noise being the main source of interference in the most common case. The main approaches for white noise elimination in partial discharge are the Kalman filter [3], empirical mode decomposition [4, 5], and wavelet threshold denoising [6, 7]. A partial discharge signal is a weak transient signal that is easily interfered with; conventional elimination techniques struggle to handle partial discharge signals accompanied by noise, and the denoised data rarely retains the characteristics of the original signal. As a signal processing tool developed in recent years, wavelet threshold denoising is applied widely across fields including signal processing, image processing, and fault detection. Among denoising methods, hard thresholding and soft thresholding were the first two proposed [8].
However, the mathematical formulations of the conventional soft and hard threshold functions contain defects. In this paper, building on the advantages of the two functions above, we propose an improved threshold function; noise reduction experiments and examination of their results confirm that the updated function offers a significant denoising improvement over the previous functions.
2 Wavelet Threshold Denoising Principle The wavelet threshold denoising algorithm is based on the following principle [9]: after a signal containing interference noise is wavelet decomposed, the energy of the signal to be extracted is concentrated in wavelet coefficients of greater amplitude, whereas the energy of the disturbing signal is concentrated in coefficients of lower amplitude. Hence, a threshold value can be used as the deciding factor for retaining or eliminating the decomposed wavelet coefficients.
Fig. 1 Wavelet threshold denoising algorithm flow
The specific procedure is to filter out wavelet coefficients smaller than the threshold as coefficients of the interference signal, and to treat the rest as coefficients of the effective signal. The retained effective coefficients are then inversely transformed, and the reconstructed signal is regarded as the denoised signal after the clutter has been filtered. The process of the wavelet threshold denoising algorithm is shown in Fig. 1.
2.1 Basic Steps of Wavelet Threshold Denoising (1) The characteristics of the noisy signal are analyzed, an appropriate wavelet basis function and number of decomposition layers are picked, and wavelet decomposition of the noisy signal is performed. This article chooses the suitable wavelet basis function by fixing the number of decomposition layers and the threshold in advance; the dbN, symN, and coifN wavelet bases, which possess excellent adaptability, are then employed to reduce the noise in the noisy traveling wave signal. The peak signal-to-noise ratio is used as the criterion for judging the performance of a wavelet basis function. In general, for complex signals, the number of decomposition layers ranges from 3 to 5. The signal-to-noise ratio improvement for different wavelet functions is shown in Fig. 2. (2) The decomposed wavelet coefficients are thresholded by selecting an appropriate threshold value and threshold function. The choice of threshold influences the denoising outcome: a threshold chosen too large will strip useful information from the reconstructed signal, while a threshold chosen too small will retain too much of the noise component. (3) Wavelet reconstruction. The wavelet coefficients processed by the threshold function are reconstructed by the inverse discrete wavelet transform.
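The three steps above (decompose, threshold, reconstruct) can be sketched end to end. To keep the example self-contained, it uses a hand-rolled orthonormal Haar transform and a plain hard-threshold rule instead of the dbN/symN bases and MATLAB toolbox used in the paper; signal length, level count, and threshold value are illustrative:

```python
import numpy as np

def haar_decompose(signal, levels):
    """Multi-level orthonormal Haar DWT: returns (approximation, [details])."""
    a = np.asarray(signal, dtype=float)
    details = []
    for _ in range(levels):
        a = a[: (len(a) // 2) * 2]            # drop a trailing odd sample
        approx = (a[0::2] + a[1::2]) / np.sqrt(2.0)
        detail = (a[0::2] - a[1::2]) / np.sqrt(2.0)
        details.append(detail)
        a = approx
    return a, details

def haar_reconstruct(approx, details):
    """Inverse of haar_decompose for even-length inputs."""
    a = np.asarray(approx, dtype=float)
    for d in reversed(details):
        up = np.empty(2 * len(d))
        up[0::2] = (a + d) / np.sqrt(2.0)     # restore even samples
        up[1::2] = (a - d) / np.sqrt(2.0)     # restore odd samples
        a = up
    return a

def denoise(signal, levels=3, threshold=0.5):
    """Decompose, hard-threshold every detail level, reconstruct."""
    approx, details = haar_decompose(signal, levels)
    details = [np.where(np.abs(d) >= threshold, d, 0.0) for d in details]
    return haar_reconstruct(approx, details)
```

With `threshold=0` the transform reconstructs the input exactly, which is a convenient sanity check before experimenting with real thresholds.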
158
Y. Hou and Y. Ren
Fig. 2 Improved amount of signal-to-noise ratio of different wavelet functions
In the wavelet transform, different wavelet basis functions have different filtering properties; choosing different bases for the same signal therefore yields different noise reduction effects [10]. Decomposing the signal many times requires a huge number of operations and is extremely complicated, which directly affects computational efficiency. SNR is approximately positively correlated with the number of decomposition layers up to the optimal number; beyond it, SNR no longer increases and even begins to decrease. The features of the primary signal and the interfering signal differ distinctly, and varying the number of decomposition layers highlights the distinction between the two. If the selected number of decomposition layers is too large, thresholding the wavelet coefficients in all layers not only adds unnecessary computation but also seriously loses the signal information to be extracted and retained, directly affecting the later signal reconstruction and resulting in signal distortion [11].
2.2 Threshold Functions The two primary threshold functions are the hard threshold function and the soft threshold function. Their mathematical formulas are given in Eqs. (1) and (2), and their graphs are shown in Figs. 3 and 4. Soft threshold function:
Fig. 3 Hard threshold function
Fig. 4 Soft threshold function
ω̂_{j,k} = { 0, if |ω_{j,k}| < λ;  sgn(ω_{j,k})·(|ω_{j,k}| − λ), if |ω_{j,k}| ≥ λ }   (1)

Hard threshold function:

ω̂_{j,k} = { 0, if |ω_{j,k}| < λ;  ω_{j,k}, if |ω_{j,k}| ≥ λ }   (2)
In the above threshold function equations, ω_{j,k} denotes the j-th coefficient at the k-th level of the wavelet decomposition, and ω̂_{j,k} denotes the wavelet coefficient of the reconstructed signal. The threshold value is chosen universally, with the generalized equation λ = σ√(2 ln N), where the noise standard deviation is σ = median(|ω_{j,k}|)/0.6745 and N is the number of signal points. The soft threshold function compares the absolute value of each decomposed wavelet coefficient with the established threshold: coefficients below the threshold are removed, while those greater than or equal to the threshold are reduced by the threshold and keep their original sign. The graph of the soft function shows that it maintains continuity at ω = λ, so no additional oscillation is generated after signal reconstruction; the disadvantage is that the reconstructed signal carries a fixed deviation, which can cause blurring and distortion after reconstruction [12]. The hard threshold function eliminates wavelet coefficients smaller than the specified value while retaining the remaining coefficients unchanged as reliable data. From its equation and graph it is readily observed that the hard threshold function is discontinuous at ω = λ, which may lead to unnecessary waveform oscillation after signal reconstruction and produce the pseudo-Gibbs phenomenon [13]. The characteristics of the soft and hard threshold functions are compared in Table 1.
Table 1 Comparison of characteristics of soft and hard threshold functions

| Characteristic  | Soft threshold function | Hard threshold function |
| Overall profile | Smooth                  | Coarse                  |
| Continuity      | Continuous              | Discontinuous           |
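The soft and hard rules of Eqs. (1) and (2), together with the universal threshold λ = σ√(2 ln N) quoted above, translate directly into code; this is a generic sketch, not the authors' implementation:

```python
import numpy as np

def universal_threshold(coeffs):
    """lambda = sigma * sqrt(2 ln N), with sigma = median(|w|) / 0.6745."""
    w = np.asarray(coeffs, dtype=float)
    sigma = np.median(np.abs(w)) / 0.6745
    return sigma * np.sqrt(2.0 * np.log(w.size))

def hard_threshold(coeffs, lam):
    """Eq. (2): zero below lambda, keep the coefficient unchanged otherwise."""
    w = np.asarray(coeffs, dtype=float)
    return np.where(np.abs(w) >= lam, w, 0.0)

def soft_threshold(coeffs, lam):
    """Eq. (1): zero below lambda, shrink toward zero by lambda otherwise."""
    w = np.asarray(coeffs, dtype=float)
    return np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)
```

The shrink-by-λ step is what makes the soft rule continuous at the threshold, and also what introduces the fixed deviation the text describes.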
3 An Improved Threshold Denoising Method The earlier sections give a comprehensive account of the traits of the two conventional threshold functions, which do not achieve the expected result on practical problems. Compared to the soft threshold function, the hard threshold function typically yields a lower MSE and superior denoising performance. The present study introduces an enhanced denoising technique that preserves the benefits of hard-threshold denoising while addressing its oscillation-related shortcomings. Its threshold function is shown in Eq. (3):

ω̂_{j,k} = { 0, if |ω_{j,k}| < λ_k;  ω_{j,k}, if |ω_{j,k}| ≥ λ_k }   (3)
where λ_k = λ/∛k is the reselected threshold for the k-th layer of the wavelet decomposition. The new threshold function has the following advantages. (1) A larger threshold is set for wavelet coefficients in the low-level space, where interfering signals are concentrated, and a smaller threshold for coefficients in the high-level space. This follows the scattering rule of interference-signal energy across the scales of the wavelet transform, and ensures that the interference is filtered out in the lower space while useful information in the higher space is retained. (2) The improved threshold function has no constant deviation between ω̂_{j,k} and ω_{j,k}, thus avoiding the inherent defect of the soft threshold function. (3) Compared with the hard threshold function, the layer-by-layer decreasing threshold reduces the jump amplitude of the wavelet coefficients, which improves the smoothness of the restored data.
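The improved rule of Eq. (3) differs from plain hard thresholding only in the level-dependent threshold λ_k = λ/∛k; a minimal sketch (function names are illustrative):

```python
import numpy as np

def level_threshold(lam, k):
    """lambda_k = lambda / k^(1/3): level 1 keeps the full threshold,
    deeper levels get progressively smaller ones."""
    return lam / k ** (1.0 / 3.0)

def improved_threshold(coeffs, lam, k):
    """Eq. (3): hard rule applied with the level-dependent threshold."""
    w = np.asarray(coeffs, dtype=float)
    return np.where(np.abs(w) >= level_threshold(lam, k), w, 0.0)
```

For λ = 8 and k = 8, λ_k = 8/2 = 4, so a coefficient of 3.9 at that level is discarded while one of 4.1 survives, illustrating the layer-by-layer decrease.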
4 Simulation Analysis PSCAD software is used to build a simulation model of cable partial discharge propagation. The width of the pulse signal is controlled by the opening and closing of a circuit breaker, and the transmission of the partial discharge signal in power cables of different lengths is simulated by changing the cable length between the partial discharge pulse generator and the signal detection sensor; the changes in the detected voltage waveform are then compared and analyzed. The cable
Fig. 5 Cable structure model
Fig. 6 Partial discharge signal
Fig. 7 Trigger Pulse signal
structure model is shown in Fig. 5, the trigger pulse signal is shown in Fig. 6, and the partial discharge signal is shown in Fig. 7. To validate the efficacy of the novel threshold function proposed in this study, signal denoising simulation experiments are conducted in MATLAB 2016. Gaussian white noise is selected as the interference signal and added to the original signal for the denoising experiments. The denoising effect is judged first by image comparison; second, two parameters, SNR and MSE, are introduced to quantify the improvement:

SNR = 10·lg( Σ_{i=1}^{N} s²(i) / Σ_{i=1}^{N} (s(i) − ŝ(i))² )   (4)

MSE = √( (1/N)·Σ_{i=1}^{N} (s(i) − ŝ(i))² )   (5)
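Equations (4) and (5) are straightforward to implement; note that the paper's "MSE" in Eq. (5) includes a square root (i.e., it is a root-mean-square deviation), which this sketch follows:

```python
import numpy as np

def snr_db(s, s_hat):
    """Eq. (4): 10*lg(signal energy / residual energy), in dB."""
    s, s_hat = np.asarray(s, dtype=float), np.asarray(s_hat, dtype=float)
    return 10.0 * np.log10(np.sum(s ** 2) / np.sum((s - s_hat) ** 2))

def mse(s, s_hat):
    """Eq. (5): square root of the mean squared deviation, as defined here."""
    s, s_hat = np.asarray(s, dtype=float), np.asarray(s_hat, dtype=float)
    return np.sqrt(np.mean((s - s_hat) ** 2))
```

A higher SNR and a lower MSE both indicate a reconstruction closer to the original signal, which is how Tables 2 and 3 below are read.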
s(i) denotes the initial signal and ŝ(i) the noise-reduced signal. SNR expresses the ratio of initial-signal energy to residual-noise energy; the higher the value, the higher the proportion of the signal to be extracted. MSE expresses the deviation between the denoised data and the original data; as the value decreases, the reconstructed signal preserves more detail of the primary signal with less distortion [14]. For the simulation experiments on the wavelet threshold denoising algorithm, the sym8 wavelet basis function is chosen and the number of decomposition layers is set to 5. (1) Gaussian white noise with SNR = 0.005 was added to the original signal; the experimental results are shown in Figs. 8, 9, 10, 11, 12 and 13, and the denoising effects of the three threshold functions are shown in Table 2. (2) Gaussian white noise with SNR = 0.0025 was added to the original signal; the experimental results are shown in Figs. 13, 14, 15, 16 and 17, and the denoising effects of the three threshold functions are shown in Table 3. By adding Gaussian white noise with SNR = 0.005 and SNR = 0.0025 to the primary signal respectively, the noisy data is denoised using the soft threshold function, the hard threshold function, and the improved wavelet threshold function. From the pictures shown in the experimental
Fig. 8 Partial discharge diagram without Gaussian noise, used to add white noise with SNR = 0.005
Fig. 9 Partial discharge diagram with Gaussian noise (SNR = 0.005)
Fig. 10 Hard threshold denoising (SNR = 0.005)
Fig. 11 Soft threshold denoising (SNR = 0.005)
Fig. 12 Improved threshold function denoising (SNR = 0.005)
results, it is apparent that the refined threshold function offers significant noise reduction benefits. Analysis of the quantitative comparison tables of the three denoising results of the partial discharge signal shows that, compared with the conventional threshold formulas, the new threshold function yields a higher SNR and the lowest MSE in the reconstructed signals after the denoising tests. This further supports that the improved threshold function has a better denoising effect.
Fig. 13 Partial discharge diagram without Gaussian noise, used to add white noise with SNR = 0.0025
Table 2 Quantitative comparison of three denoising results of partial discharge signals (SNR = 0.005)

|     | Noisy signal | Hard threshold function | Soft threshold function | Improved threshold function |
| SNR | 0.0051       | 10.8947                 | 7.5214                  | 15.0991                     |
| MSE | 0.0237       | 0.0068                  | 0.0100                  | 0.0042                      |
Fig. 14 Partial discharge diagram with Gaussian noise (SNR = 0.0025)
Fig. 15 Hard threshold denoising (SNR = 0.0025)
Fig. 16 Soft threshold denoising (SNR = 0.0025)
Fig. 17 Improved threshold function denoising (SNR = 0.0025)
Table 3 Quantitative comparison of three denoising results of partial discharge signals (SNR = 0.0025)

|     | Noisy signal | Hard threshold function | Soft threshold function | Improved threshold function |
| SNR | 0.0026       | 10.3951                 | 7.4480                  | 15.0246                     |
| MSE | 0.0237       | 0.0072                  | 0.0101                  | 0.0042                      |
5 Conclusion Based on the basic principle of wavelet threshold denoising, this paper studies the denoising effect from the perspective of the threshold function. A cable partial discharge model is established, Gaussian white noise is introduced as the interference signal, and the conventional threshold denoising methods are then applied to the noisy signal. A new threshold function is proposed to address the defects of the traditional threshold functions in the noise reduction process: it resolves the discontinuity at the threshold in the conventional hard threshold function and overcomes the drawback of a constant deviation between the reconstructed and initial signals. The experimental findings show that the utilization of the enhanced
wavelet thresholding function leads to a greater resemblance in waveform and a better suppression of noise, ultimately resulting in a more desirable outcome.
References
1. Zheng, Q., Luo, L., Song, H., et al.: A RSSI-AOA-based UHF partial discharge localization method using MUSIC algorithm. IEEE Trans. Instrum. Meas. 70, 1–9 (2021)
2. Fan, W., Guan, S., Fu, J., et al.: Comparison study of partial discharge detection methods for switchgears. In: 2016 International Conference on Condition Monitoring and Diagnosis (CMD), pp. 319–323. IEEE (2016)
3. Bai, Y., Wang, X., Jin, X., et al.: A neuron-based Kalman filter with nonlinear autoregressive model. Sensors 20(1), 299 (2020)
4. Luo, H.M., Song, W.Q., Xing, Y.R., et al.: An enhanced seismic weak signal processing method based on improved empirical mode decomposition. Adv. Geophys. 34(1), 167–173 (2019)
5. Chan, J.C., Ma, H., Saha, T.K., et al.: Self-adaptive partial discharge signal de-noising based on ensemble empirical mode decomposition and automatic morphological thresholding. IEEE Trans. Dielectr. Electr. Insul. 21(1), 294–303 (2014)
6. Wu, J.: Research on the threshold denoising method of wavelet analysis. Electron. Testing 480(03), 84–85 (2022)
7. Zhang, S.Q., Li, J.Z., Guo, R., et al.: Study on white noise suppression using complex wavelet threshold algorithm. Appl. Mech. Mater. 521, 347–351 (2014)
8. Lu, J., Hong, L., Ye, D., Zhang, Y.: A new wavelet threshold function and denoising application. Math. Probl. Eng. 2016, 1–8 (2016). https://doi.org/10.1155/2016/3195492
9. Zhao, R.M., Cui, H.: Improved threshold denoising method based on wavelet transform. In: 2015 7th International Conference on Modelling, Identification and Control (ICMIC), pp. 1–4. IEEE (2015)
10. Yu, H., Zhen, T.: Research on optimal wavelet base selection based on wavelet threshold denoising. Modern Electron. Technol. 44(17), 86–89 (2021)
11. Lu, R., Wang, Z.: Research on improved wavelet thresholding algorithm for denoising in dye concentration detection. Comput. Digital Eng. 50(01), 40–44+79 (2022)
12. Li, J., Cheng, C., Jiang, T., et al.: Wavelet denoising of partial discharge signals based on genetic adaptive threshold estimation. IEEE Trans. Dielectr. Electr. Insul. 19(2), 543–549 (2012)
13. You, F., Zhang, Y.: Research of an improved wavelet threshold denoising method for transformer partial discharge signal. J. Multimed. 8(1), 56 (2012)
14. Li, J., Jiang, T., Grzybowski, S., et al.: Scale dependent wavelet selection for de-noising of partial discharge detection. IEEE Trans. Dielectr. Electr. Insul. 17(6), 1705–1714 (2010)
Research and Practice of Protocol Conversion in Comprehensive Automation Transformation Wu Xia, Xianggui Tian, Kunyan Zhang, Ming Lei, Qing Ye, and Chunhong Wu
Abstract At present, the process layer, station control layer, and bay layer of substation automation systems mostly use equipment or devices from different manufacturers. Differences between the device models of different manufacturers result in incompatible communication protocols, so the devices cannot interconnect, high-speed transmission and sharing of information are difficult to realize, and device functions cannot be used normally. In the context of the comprehensive automation transformation of a substation, this paper analyzes the reasons for the failure of series compensation remote control at a 500 kV substation involving protocol converters from different manufacturers. The messages between the protocol converters of plant A and plant B are analyzed, the causes and possible mechanisms of the remote control failure are identified, and the analysis is verified through experiments that solve the remote control problem. This provides a reference for resolving communication between different manufacturers in the process of comprehensive substation automation transformation. If a function is abnormal between two different manufacturers' devices, it can be judged that there are differences between the models of the two protocol converters. The communication problem can be solved by capturing and analyzing the messages, finding the relationship between the abnormal points and the model file, and modifying the model of one protocol converter to match the other so that the two models are consistent.

Keywords Protocol conversion · Model · Remote control
W. Xia (B) · X. Tian · K. Zhang · M. Lei · Q. Ye · C. Wu China Southern Power Grid Extra High Voltage Transmission Co., Liuzhou, China e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 Y. S. Shmaliy and A. Nayyar (eds.), 7th International Conference on Computing, Control and Industrial Engineering (CCIE 2023), Lecture Notes in Electrical Engineering 1047, https://doi.org/10.1007/978-981-99-2730-2_17
1 Introduction With the continuous development of science and technology, new equipment and technology are constantly being applied, and the requirements for the safe operation of the power grid are rapidly increasing. The old equipment of the original substation cannot meet the operation and control requirements of the new power grid, nor the requirements of the modern power grid for the safe operation of substation automation. Equipment transformation, or comprehensive automation transformation, is the general trend for adapting to the continuous development of the grid. During the transformation, the gradual replacement of equipment means that the substation automation system simultaneously contains many subsystems from different manufacturers with different functions. Because their communication protocols are incompatible, they cannot be interconnected, and high-speed transmission and sharing of information are difficult to achieve. Protocol conversion is the key technology for realizing the interconnection of subsystems, and the protocol converter is the interface device that implements this function, allowing information to be transmitted and shared among devices from different manufacturers. At present, the development trend of information technology is to realize value-added communication services and applications through different types of communication structures and network platforms [1]. Many different types of monitoring systems are used to monitor the operation status of communication equipment in the power communication network.
Because the communication protocols are incompatible and the systems cannot be interconnected, it is difficult to realize the modern communication requirement of high-speed information transmission. In order to solve the access and compatibility of a large number of old and new operating equipment, the IEC 61850 standard is introduced to realize "regulation integration" in the long run [2, 3]. Protocol conversion is the key technology for interconnecting monitoring systems, and the central component of the system is the protocol converter [4]. A 500 kV substation uses the protocol converter of plant A to convert the CLP Purui 103 protocol into the IEC 61850 protocol, which is connected to the monitoring system and telecontrol device of plant A; the remote signaling, telemetry, and remote control functions all work correctly. The substation is now undergoing comprehensive automation transformation, and the telecontrol device of plant B has been installed to communicate with the district control center. The remote signaling and telemetry data of plant B's telecontrol device are sent to the district control center normally, but the remote control test fails. The remote control messages between the protocol converter of plant B and the protocol converter of plant A were analyzed on site. The network structure is shown in Fig. 1.
Research and Practice of Protocol Conversion in Comprehensive …
Fig. 1 The network structure
2 Cause Analysis The remote control messages exchanged between the protocol converter of plant B and the protocol converter of plant A were captured on site, as shown in Fig. 2: the remote control selection message from plant B's protocol converter is on the left, and that from plant A's telecontrol device is on the right. The address of plant B's protocol converter is 172.20.100.197, the address of plant A's telecontrol device is 172.20.1.4, and the address of plant A's protocol converter is 172.20.100.10. The data structures of the two remote control selection messages are largely the same, with the following differences: (1) the remote control execution time marks differ: the execution time mark from plant B's protocol converter is 1970-01-01, while that from plant A is the actual time, 2021-04-12; (2) the remote control counts differ: the count in plant B's message is 1 while that in plant A's is 0; this item does not affect the remote control result. Comparing with a successful remote control message of plant B's protocol converter at another site, as shown in Fig. 3 (left: the successful message from the other site; right: the failed message from the 500 kV substation), it is found that the remote control messages at this substation carry an extra UTC remote control execution time mark entry. This UTC entry appears because the IEC 61850 model provided by plant A's protocol converter contains an extra operTm field, so the entry is added after plant B's protocol converter imports the model, unlike at other sites. The IEC 61850 model provided by plant A's protocol converter is shown in Fig. 4.
W. Xia et al.
Fig. 2 Remote control messages of plant B's protocol converter (left) and plant A's telecontrol device (right)
Fig. 3 Comparison of a successful remote control message of plant B's protocol converter (left) and the failed remote control message on site (right)
Fig. 4 IEC 61850 model of the protocol converter of plant A
The above analysis yields the cause of the remote control failure from plant B's protocol converter: the IEC 61850 model of plant A's protocol converter contains an extra operTm field, which adds a UTC entry line to the remote control messages of plant B's protocol converter after it imports the model. The entry time sent by plant B's protocol converter is 1970-01-01; plant A's protocol converter does not recognize this time and replies with a negative remote control acknowledgment, causing the remote control to fail.
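The 1970-01-01 mark is simply the Unix/UTC epoch: a timestamp whose seconds field was never filled decodes to exactly this date. A minimal sketch (the helper name and the receiver-side check are hypothetical, not code from either converter):

```python
from datetime import datetime, timezone

# An all-zero timestamp decodes to the UTC epoch, which is why an unfilled
# operTm field appears on the wire as 1970-01-01.
UNSET_SECONDS = 0  # seconds-since-epoch value of a never-initialized timestamp

def oper_tm_is_unset(seconds_since_epoch: int) -> bool:
    """Treat epoch zero as 'operTm was never populated by the sender'."""
    return seconds_since_epoch == 0

t = datetime.fromtimestamp(UNSET_SECONDS, tz=timezone.utc)
print(t.date(), oper_tm_is_unset(UNSET_SECONDS))  # 1970-01-01 True
```

A receiver that applied such a check could have tolerated the unset field instead of rejecting the command outright.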
3 Solution Scheme 1: plant A's protocol converter modifies the IEC 61850 model and deletes the operTm field, i.e., removes the line <BDA name="operTm" bType="Timestamp"/> from the remote control data structure in the model file. After the modification, the protocol converters of plant B and plant A re-import the model and then send and parse messages according to the new model; the operTm entry line is no longer sent or recognized in the messages, which resolves the remote control failure. If plant A's protocol converter still recognizes the entry line after re-importing the model, further modification is needed. This scheme requires restarting the protocol converters and adapting plant A's telecontrol device to the modified model; if remote control then fails on either side, the model must be reverted to the original version. Scheme 2: without modifying plant A's IEC 61850 model, modify the IEC 61850 communication program of plant B's protocol converter so that the UTC time mark entry in the remote control message carries the actual time, matching plant A's protocol converter. Plant B's protocol converter must be upgraded on site; after the upgrade, the databases of plant B's protocol converter and telecontrol device are reconfigured. In this scheme, plant A's protocol converter is not modified, so there is no risk of board damage caused by power-off and restart.
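Scheme 1 amounts to deleting one BDA element from the SCL model file. A minimal sketch of that edit with Python's standard xml.etree (the DAType fragment below is illustrative, not plant A's actual model file):

```python
import xml.etree.ElementTree as ET

# Illustrative fragment of an IEC 61850 SCL model: a remote-control (Oper)
# data structure containing the optional operTm member that caused the failure.
MODEL = """<DAType id="Oper">
  <BDA name="ctlVal" bType="BOOLEAN"/>
  <BDA name="operTm" bType="Timestamp"/>
  <BDA name="origin" bType="Originator"/>
</DAType>"""

def drop_oper_tm(xml_text: str) -> str:
    """Remove every <BDA name="operTm"> element, as Scheme 1 prescribes."""
    root = ET.fromstring(xml_text)
    for parent in root.iter():
        for child in list(parent):  # copy: removing while iterating is unsafe
            if child.tag.endswith("BDA") and child.get("name") == "operTm":
                parent.remove(child)
    return ET.tostring(root, encoding="unicode")

cleaned = drop_oper_tm(MODEL)
print("operTm" in cleaned, "ctlVal" in cleaned)  # False True
```

A real SCL file carries the SCL namespace, so matching by tag suffix (rather than exact tag) keeps the sketch namespace-agnostic.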
Fig. 5 Modified network structure diagram
Scheme 3: the protocol converter of plant B communicates directly with the CLP Purui protocol converter over the existing 103 protocol network of plant A, and after the communication is established the data is uploaded to the new monitoring system. The modified network structure is shown in Fig. 5. This scheme requires two additional switches to connect plant B's protocol converter and the CLP Purui protocol converter. In addition, the databases of plant B's protocol converter, telecontrol device, and monitoring background must be configured.
4 Conclusion By comparing the failed remote control messages between the protocol converters of plant B and plant A with the successful remote control messages between plant A's telecontrol device and protocol converter, it is found that although both protocol converters use IEC 61850 models, the models differ: plant A's model contains an extra operTm field, so when plant B's protocol converter issues a remote control command, plant A's converter returns a negative acknowledgment and the remote control fails. Based on the differences between the failed and successful messages, this paper proposes three solutions to the remote control failure between plant A and plant B, providing a reference for solving such communication problems. Scheme 2 was adopted on site: the IEC 61850 communication program of plant B was modified so that the UTC time mark entry in the remote control message carries the actual time and matches plant A's protocol converter; remote control then succeeded, and the communication problem between plant A and plant B was solved. By analogy, if functional anomalies appear between protocol converters from two different manufacturers, it can be inferred that their models differ in some respect. By capturing and analyzing packets on site, the cause of the abnormal behavior can be located; the relationship between the anomaly and the model file can then be identified, and the model of one protocol converter modified to match the other. Once the two models are consistent, the functions between the protocol converters work normally.
References
1. Liu, J., Liu, B., Yang, X.L.: Discussion on integration method of protocol converter. J. Comput. Appl. 04, 32–34 (1999)
2. Zhang, Y.L.: New observation on regulation integration. State Grid News (005) (2011)
3. Ding, L.W., Dong, B., Li, P.C., et al.: A communication method between embedded configuration touch screen and non-standard communication protocol equipment. Electron. Sci. Technol. 32(10), 70–74 (2019)
4. Song, M.Z., Hou, S.Z., Zhao, J.L., et al.: Design and implementation of protocol converter for power communication monitoring network. J. Electric Power 04, 253–256 (2001)
Research on Interaction Potential of Electric Vehicles in Power Grid Dispatching and Operation Scenarios Peng Liu, Xingang Yang, Boyuan Cao, Zhenpo Wang, and Encheng Zhou
Abstract As the number of electric vehicles increases year by year, electric vehicles can be used as energy storage equipment to optimize the grid load curve. Studying methods to evaluate and quantify the interaction potential of electric vehicles as adjustable resources participating in grid dispatching and operation has far-reaching significance for the optimal operation of electric vehicles and the grid load. This paper is based on the historical operation data of all electric vehicles in Shanghai, collected by the Internet of Vehicles monitoring platform. First, electric vehicles are classified as adjustable resources in the interaction, and a corresponding indicator system is built on this classification. Second, evaluation and quantification methods are given for each class. Finally, an example is simulated based on the data. The simulation results show that electric vehicles in Shanghai, as interactive adjustable resources for grid operation, have the potential to optimize the grid operation curve through peak shaving and valley filling. Keywords Electric vehicle · Power grid load dispatching and operation · Vehicle-network interaction · Peak shaving and valley filling
P. Liu (B) · Z. Wang National Engineering Research Center of Electric Vehicles, Beijing Institute of Technology, Beijing 10081, China e-mail: [email protected] X. Yang · B. Cao State Grid Shanghai Electric Power Company Electric Power Research Institute, Shanghai 200437, China E. Zhou Beijing BITNEI Corp., Ltd., Beijing, China © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 Y. S. Shmaliy and A. Nayyar (eds.), 7th International Conference on Computing, Control and Industrial Engineering (CCIE 2023), Lecture Notes in Electrical Engineering 1047, https://doi.org/10.1007/978-981-99-2730-2_18
1 Introduction With the electrification of transportation, the number of electric vehicles has grown rapidly year by year; it is estimated to reach 11 million in 2025 and 30 million in 2030 [1]. To meet the charging demand of electric vehicles, the construction of charging stations and research on "vehicle-network" interaction technology are also under way. Because the charging behavior of electric vehicles is closely tied to people's work and rest habits, and given the characteristics of the grid load curve, a large number of connected charging loads will affect the load operation of the distribution network at the peak of power consumption [2]. Because electric vehicle users are dispersed, it is difficult for distribution network operators to dispatch them directly [3]. On the basis of meeting the electricity demand of electric vehicles, vehicle-grid interaction technology and orderly charging technology can optimize the load curve [4]. Therefore, studying grid interaction technology is important for quantifying and evaluating the interaction potential of electric vehicles as adjustable resources participating in grid dispatching and operation. At present, research on the interaction between charging stations and the power grid mainly focuses on optimizing the security of the charging load and improving the interaction economy between charging stations and the distribution network. On the economic side, literature [5] proposes a strategy for photovoltaic charging stations to participate in demand response, using rolling optimization to build real-time charging queues and reduce the stations' power purchase costs. In literature [6], charging stations obtain a lower electricity purchase price by participating in the day-ahead electricity trading market.
In literature [7], the distribution network operator cooperates with the charging station operator through an incentive mechanism to ensure the safe and economic operation of the distribution network. Literature [8] establishes a two-stage optimization model for orderly charging of charging stations, aiming to increase their economic benefits and reduce the peak-valley difference. Literature [9] regards charging stations as shiftable loads and guides them to participate in grid ancillary services through price incentives, improving the economy of the distribution network. Some scholars have also studied measures for the distribution network to cope with a high proportion of charging load. Literature [10] proposes a distribution network reconfiguration strategy to solve the power quality problems caused by a high proportion of charging load. Literature [11] guides charging stations to formulate day-ahead charging plans based on the marginal prices of distribution network nodes to avoid congestion. In literature [12, 13], distribution network operators control the charging power of charging stations through price signals to reduce the impact of the charging load on the grid. In literature [14], the distribution network operator optimizes the charging load curve by solving an optimal power flow problem and sends dispatching commands to the charging stations. However, the above work only considers the interaction potential
evaluation between the electric vehicle and the grid system from the charging point of view; the discharge behavior of electric vehicles is not considered, and the example simulations focus only on a single vehicle use type and a few specific operation areas, so the results do not generalize to a whole, non-clustered urban area. Based on big data analysis technology, this paper studies the potential of electric vehicles to participate in grid interaction, providing support for improving the economics of electric vehicle charging/discharging and of charging facility operation, and for improving the security of the grid load.
2 Research Framework of Electric Vehicle's Potential to Participate in Grid Interaction Based on Big Data Analysis Technology First, the classification types and the indicator system for electric vehicles as adjustable resources participating in grid operation and dispatching are built. Then the potential of vehicles participating in grid interaction is quantified, and the adjustable resources are classified according to the indicator system. Finally, the operation data of all electric vehicles in Shanghai are used for example simulation and evaluation of the algorithm.
2.1 Classification and Evaluation Index System of Adjustable Resources Classification of adjustable resources and its evaluation index system. Adjustable resources are the electric vehicles that participate in the charging/discharging regulation interaction with the power grid while still meeting their own daily travel power needs. According to the matching characteristics between electric vehicles and the grid regulation demand, and based on a long-life usage mode for the vehicle power battery, integrating the vehicle charging demand and the grid regulation demand, the types and the division method of adjustable resources are designed, and electric vehicles are divided into three categories: "non-regulated resources", "one-way regulated resources", and "two-way regulated resources", as shown in Fig. 1.
Fig. 1 Electric vehicle participation in grid operation mode
2.2 Adjustable Resource Characteristic Parameters The adjustable characteristics of electric vehicles are described from four aspects: time, space, energy, and willingness to respond to grid dispatching. The corresponding evaluation indicators are time flexibility, space flexibility, energy flexibility, and management flexibility. Time flexibility is the possibility that an electric vehicle can respond to the grid regulation demand at the time level: the higher the coincidence between the parking period and the grid dispatching period, the better the time flexibility. Spatial flexibility is the possibility of responding at the spatial level: the higher the proportion of parking positions equipped with charging/discharging facilities, the better the spatial flexibility of interaction with the grid. Energy flexibility is the possibility of responding at the energy level: the lower the proportion of daily average travel power consumption to the chargeable energy within the grid-connectable time that coincides with the dispatching period, the better the energy flexibility. Management flexibility is the possibility that the user responds at the level of willingness: the higher the proportion of charging/discharging energy participating in grid regulation interaction to the total charging energy, the higher the user's enthusiasm to respond, and thus the higher the management flexibility. 1) Time flexibility (Ts.c) is defined as follows: taking the natural day as the unit of investigation, the overlap between the average theoretical grid-connectable period of the electric vehicle (its parking and standing period) and the period of grid charging/discharging regulation demand is the flexible time (Ts.p). The higher the coincidence of the flexible time with the regulation time, i.e., the higher the time flexibility (Ts.c = Ts.p/Tre), the more sufficient the time conditions for the vehicle to participate in grid regulation interaction. Charge time flexibility and discharge time flexibility are defined with respect to the charge and discharge dispatching periods. Charge time flexibility is calculated as the proportion of parking time falling between 22:00 and 06:00 of the next day, based on the data of the last 31 days (on the first day of each month, the window rolls back to take 31 days of data). The calculation method of discharge time flexibility is as
follows: calculate the proportion of stationary time falling between 14:00 and 15:00 each day, based on the data of the last 30 days. 2) Spatial flexibility (Ps.c) is defined as follows: taking the natural day as the investigation unit, when the vehicle is within the flexible time and its stationary parking position actually allows plugging into the grid (a charging/discharging pile is available), the space of that parking position is called the flexible space (Ps). The higher the proportion of flexible-space positions among all stationary parking positions of the vehicle, i.e., the greater the spatial flexibility (Ps.c = QPs/Qp), the more sufficient the spatial conditions for the vehicle to participate in grid regulation interaction. Combined with the grid regulation and dispatching demand, charging space flexibility is calculated as follows (for vehicles reporting longitude and latitude): (1) from the vehicle's data for the last 30 days, select the parking segments falling into the interval 22:00-24:00 or 00:00-06:00 of the current day, and count the number of unique parking positions corresponding to the regulation period; (2) for each unique position, check whether the aggregated regulation-period parking segments at that position contain parking-charging segments; a unique position with one or more parking-charging segments is a flexible-space position; (3) the vehicle's current spatial flexibility coefficient is the number of unique flexible-space positions divided by the number of unique parking positions in the regulation period. Discharge space flexibility is calculated analogously (for vehicles reporting longitude and latitude): (1) from the vehicle's data for the last 30 days, select the parking segments falling into the 14:00-15:00 interval, and count the number of unique parking positions corresponding to the regulation period; (2) for each unique position, check whether the aggregated regulation-period parking segments at that position contain segments in which the SOC change is negative; a unique position with one or more such discharge segments is a flexible-space position; (3) the spatial flexibility coefficient under the current reverse-charging regulation demand is the number of unique flexible-space positions divided by the number of unique parking positions in the regulation period. 3) Energy flexibility (Ns.c) is defined as follows: taking the natural day as the unit of investigation, when the electric vehicle is stationary in the flexible space, the theoretical maximum flexible energy (Ns, chargeable energy) is obtained by multiplying the length of its flexible time by the average charging power available in the corresponding flexible space. The smaller the proportion of the vehicle's average daily travel power consumption to its maximum flexible energy, the better its energy flexibility
(Ns.c = Ns/Nd), and the higher the feasibility of the electric vehicle participating in grid regulation activities. First, calculate the vehicle's average daily flexible time over the last 30 days, i.e., divide the previously measured flexible time by 30 days; then calculate the average charging power of the vehicle's flexible space over the last 30 days, i.e., take the parking-charging segments in the flexible space obtained when measuring spatial flexibility, sum the average charging power of each segment, and divide by the total number of parking-charging segments to obtain the vehicle's current average flexible charging power; then multiply the current average daily flexible time by the current average flexible charging power to obtain the current average daily flexible charging energy; finally, divide the current average daily flexible charging energy by the vehicle's average daily driving power consumption over the last 30 days to obtain the vehicle's current energy flexibility. 4) Management flexibility (Ms.c) is defined as follows: taking the natural day as the investigation unit, the higher the proportion of the vehicle's current actual charging/discharging energy that belongs to regulation interaction, the stronger the vehicle's enthusiasm to participate in grid regulation interaction, i.e., the better its management flexibility (Ms.c = Ns/Nt). Charge management flexibility is calculated as follows: first, compute the vehicle's charging energy during regulation periods over the last 30 days, i.e., for each charging segment, measure the proportion of its duration that falls in the regulation period and multiply the segment's charging energy by that proportion to obtain the segment's flexible charging energy; accumulate the flexible charging energy over all charging segments to obtain the vehicle's current total flexible charge. Then calculate the vehicle's total power consumption over the last 30 days. Finally, divide the current total flexible charge by the total running power consumption over the last 30 days; the result is the vehicle's current management flexibility. Discharge management flexibility is calculated as follows: count the vehicle's reverse discharge events over the last 30 days, then divide the number of regulated discharge events by 30 to obtain the current discharge management flexibility.
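The four coefficients defined above can be sketched as follows. All record layouts, the 8-hour nightly dispatch window, and the numbers are illustrative assumptions, not the platform's actual schema:

```python
# Illustrative sketch of the four flexibility indicators for one vehicle.

def time_flexibility(parking_hours_in_window, regulation_hours):
    """Ts.c = Ts.p / Tre: parking time inside the dispatch window over the
    total regulation window length."""
    return parking_hours_in_window / regulation_hours

def spatial_flexibility(positions):
    """Ps.c = QPs / Qp: positions maps each unique parking position to
    whether a parking-charging segment was observed there."""
    return sum(positions.values()) / len(positions)

def energy_flexibility(daily_flex_hours, avg_flex_power_kw, daily_drive_kwh):
    """Ns.c = Ns / Nd: daily flexible charge over daily driving consumption."""
    ns = daily_flex_hours * avg_flex_power_kw  # daily flexible charge Ns (kWh)
    return ns / daily_drive_kwh

def management_flexibility(charge_segments, total_drive_kwh):
    """Ms.c: charge_segments holds (energy_kwh, fraction of the segment's
    duration inside the regulation period)."""
    flex_kwh = sum(e * frac for e, frac in charge_segments)
    return flex_kwh / total_drive_kwh

print(time_flexibility(6.0, 8.0))                          # 0.75
print(spatial_flexibility({"home": True, "work": False}))  # 0.5
print(energy_flexibility(6.0, 7.0, 14.0))                  # 3.0
print(management_flexibility([(20, 0.8), (10, 0.2)], 50))  # 0.36
```

Each function mirrors one ratio from the text; in practice the inputs would be aggregated from the 30/31-day segment data described above.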
3 Evaluation Method of Interactive Potential of Adjustable Resources Considering the physical characteristics of the vehicle power battery and its design life as a consumer product, combined with its current flexible characteristics as an adjustable resource, a dynamic evaluation model of interaction potential is built from the perspective of grid regulation demand, with the following characteristics. Vehicle characteristics - battery capacity: the national quality assurance standard for new energy vehicle batteries requires a warranty of no less than 8 years or 120,000 km for non-operating vehicles, and no less than 5 years and 500,000 km for operating vehicles. Converting these standards, the cycle life of the whole vehicle system is taken as 1500 cycles, with capacity decaying to 80% of the initial value at end of life. User characteristics - operation rules: mainly reflected in the capacity margin and energy flexibility. The capacity margin is defined as the ratio of the user's total power consumption in the warranty period to the battery's total available energy over its expected life, based on the vehicle's current mileage; energy flexibility is defined as the proportion of the vehicle's daily average travel power consumption to its maximum flexible energy.
3.1 Relevant Factors and Calculation Methods of Charging Regulation Potential

1) Factors related to the charging regulation potential. The base QB of the charging regulation potential is calculated as

QB = Q · Nmin   (1)

where Q is the nominal capacity of the battery (kWh) and Nmin is the cycle life of the whole vehicle system. The annual average power consumption QC is calculated as

QC = (Mavgn / 100) · Ec100per   (2)

where Mavgn is the average annual mileage (km) and Ec100per is the energy consumption per 100 km (kWh). The expected total power consumption QL over the whole life cycle is calculated as

QL = QC · Yqe   (3)

where Yqe is the expected number of years of use. The energy margin QR is calculated as

QR = QB − QL   (4)

and the capacity margin M is calculated as

M = QL / QB   (5)
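A worked instance of Eqs. (1)-(5) for a single vehicle; only the 1500-cycle life and the 8-year warranty come from the text above, the other parameters are illustrative assumptions:

```python
# Charging regulation potential of one vehicle per Eqs. (1)-(5).
Q = 60.0        # nominal battery capacity, kWh (assumed)
N_MIN = 1500    # whole-vehicle cycle life (capacity fade to 80% = end of life)
M_AVGN = 15000  # average annual mileage, km (assumed)
EC100 = 15.0    # energy consumption per 100 km, kWh (assumed)
Y_QE = 8        # expected years of use (non-operating vehicle warranty)

QB = Q * N_MIN              # (1) regulation potential base, kWh
QC = M_AVGN / 100 * EC100   # (2) annual power consumption, kWh
QL = QC * Y_QE              # (3) expected lifetime consumption, kWh
QR = QB - QL                # (4) energy margin, kWh
M = QL / QB                 # (5) capacity margin ratio

print(QB, QC, QL, QR, round(M, 3))  # 90000.0 2250.0 18000.0 72000.0 0.2
```

For this assumed vehicle the lifetime driving demand uses only 20% of the battery's cyclable energy, leaving a large margin for grid interaction.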
The MU modifies its reported energy value x_i^E according to the attack model

x_i^E′ = η1 · x_i^E,  under H0 with P > PA (η1 > 1)
x_i^E′ = η2 · x_i^E,  under H1 with P > PA (0 < η2 < 1)
x_i^E′ = x_i^E,  otherwise   (3)

In the formula, PA represents the attack threshold probability: when the attack probability P is greater than the attack threshold, an attack is launched. The MU determines the attack mode according to whether H0 or H1 holds: under H0 it applies the attack intensity factor η1 > 1 to increase m_i(H1); under H1 it applies 0 < η2 < 1 to increase m_i(H0).
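The attack model in Eq. (3) can be sketched as follows; the function and variable names are illustrative, and the energy values are made-up examples:

```python
import random

# SSDF attack sketch: when its attack probability exceeds the threshold P_A,
# the MU scales its reported energy value by eta1 > 1 under H0 (idle channel)
# or by 0 < eta2 < 1 under H1 (busy channel), flipping the evidence.
def mu_report(x_energy, channel_busy, p_attack, p_threshold,
              eta1=1.05, eta2=0.95, rng=random.random):
    launch = p_attack > p_threshold and rng() < p_attack
    if not launch:
        return x_energy  # behaves like a normal CU this sensing round
    return x_energy * (eta2 if channel_busy else eta1)

print(mu_report(10.0, channel_busy=False, p_attack=0.0, p_threshold=0.5))  # 10.0
```

With p_attack below the threshold the MU is indistinguishable from an honest CU, which is exactly why probabilistic SSDF attackers are hard to detect.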
3.2 Confidence of Pignistic Evidence Distance

The Pignistic probability function is based on expected utility theory. The Pignistic probability function [17] of evidence mi is defined as

BetP_mi(ω) = Σ_{A⊆Θ, ω∈A} (1/|A|) · mi(A) / (1 − mi(∅))   (4)

where |A| is the cardinality of set A. Equation (4) transforms the BPA function into the Pignistic probability function. The evidence distance between mi and mj is defined as

d(mi, mj) = (1/2) Σ_{ω∈A, A⊆Θ} |BetP_mi(ω) − BetP_mj(ω)|   (5)

X. Yang and C. Bi

It can be seen from Eq. (5) that when mi = mj, d(mi, mj) = 0, meaning there is no conflict between the pieces of evidence, while d(mi, mj) = 1 means complete conflict between them. The smaller the distance between mi and mj, the greater their similarity. Therefore, the similarity coefficient s(mi, mj) and the similarity coefficient matrix S between mi and mj are defined as

s(mi, mj) = 1 − d(mi, mj)   (6)

S = ⎡ 1          s(m1, m2)  ⋯  s(m1, mM) ⎤
    ⎢ s(m2, m1)  1          ⋯  s(m2, mM) ⎥
    ⎢ ⋮          ⋮          ⋱  ⋮         ⎥
    ⎣ s(mM, m1)  s(mM, m2)  ⋯  1         ⎦   (7)

The support degree is defined as

Sup(mi) = Σ_{j=1, j≠i}^{M} s(mi, mj)   (8)

According to the support degree, the evidence credibility of mi [16] is defined as

Rel(mi) = Sup(mi) / max_{j=1,2,…,M} Sup(mj)   (9)
Credibility indicates the degree to which one piece of evidence is supported by the other pieces of evidence. The evidence with the greatest support, max(Sup(mj)), is taken as absolutely reliable. A judgment threshold β is set to determine whether a user is an MU: when Rel(mi) × (M − 1) < β, the user is judged to be an MU and does not participate in the fusion process.
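Eqs. (4)-(9) can be sketched for the two-hypothesis frame Θ = {H0, H1}. The BPAs below are illustrative; mass on the full set Θ is held under the key 'T', and there is no empty-set mass, so BetP(ω) = m(ω) + m(Θ)/2:

```python
# Pignistic evidence distance and credibility, Eqs. (4)-(9), for Θ = {H0, H1}.
def betp(m):
    """Eq. (4): singleton mass plus an equal share of the Θ mass."""
    return {w: m.get(w, 0.0) + m.get('T', 0.0) / 2 for w in ('H0', 'H1')}

def distance(mi, mj):
    """Eq. (5): Pignistic evidence distance between two BPAs."""
    pi, pj = betp(mi), betp(mj)
    return 0.5 * sum(abs(pi[w] - pj[w]) for w in pi)

def credibilities(bpas):
    """Eqs. (6)-(9): similarity -> support -> normalized credibility."""
    sup = [sum(1 - distance(mi, mj)              # s = 1 - d, Eq. (6)
               for j, mj in enumerate(bpas) if j != i)
           for i, mi in enumerate(bpas)]         # Sup, Eq. (8)
    top = max(sup)
    return [s / top for s in sup]                # Rel, Eq. (9)

bpas = [
    {'H0': 0.70, 'H1': 0.20, 'T': 0.10},  # normal CU
    {'H0': 0.65, 'H1': 0.25, 'T': 0.10},  # normal CU
    {'H0': 0.10, 'H1': 0.80, 'T': 0.10},  # SSDF attacker pushing toward H1
]
rel = credibilities(bpas)
print(rel[2] < rel[0])  # True: the outlier evidence gets the lowest credibility
```

With a threshold of β = 0.7 the third report would be discarded before fusion, while the two consistent reports are retained.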
3.3 D-S Evidence Fusion

To reduce the time complexity of the D-S fusion rule, the BPAs of the remaining N CUs are weighted by credibility and summed to obtain the weighted average BPA:

m_mean(H0) = Σ_{i=1}^{N} Rel(mi) · mi(H0),   m_mean(H1) = Σ_{i=1}^{N} Rel(mi) · mi(H1)   (10)

The weighted average BPA is fused N − 1 times using the D-S fusion rule. The fusion process is

An Optimized Security Sensing Method Based on Pignistic Evidence …

m(H0) = (1/(1 − K)) Σ_{A1∩A2∩…∩An=H0} Π_{i=1}^{n} m_mean(Ai),
m(H1) = (1/(1 − K)) Σ_{A1∩A2∩…∩An=H1} Π_{i=1}^{n} m_mean(Ai)   (11)

where K is the conflict coefficient of the D-S combination. Since each spectrum sensing result is assumed to be an independent focal element, Eq. (11) can be simplified to

m(H0) = (1/(1 − K)) · m_mean(H0)^N,   m(H1) = (1/(1 − K)) · m_mean(H1)^N   (12)

A decision is made by setting a decision threshold λ:

H0: m(H1) ≤ λ · m(H0);   H1: m(H1) > λ · m(H0)   (13)
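Eqs. (10)-(13) can be sketched end to end; the BPAs and credibilities below are illustrative values, and the 1/(1 − K) factor appears as the final normalization of the two fused masses:

```python
# Credibility-weighted average BPA, N-1 D-S self-fusions of independent focal
# elements, and the lambda-threshold decision of Eqs. (10)-(13).
def fuse_and_decide(bpas, rel, lam=1.0):
    n = len(bpas)
    m0 = sum(r * m['H0'] for r, m in zip(rel, bpas))  # Eq. (10)
    m1 = sum(r * m['H1'] for r, m in zip(rel, bpas))
    f0, f1 = m0 ** n, m1 ** n                         # Eq. (12) numerators
    norm = f0 + f1           # the 1/(1 - K) factor acts as normalization here
    f0, f1 = f0 / norm, f1 / norm
    return 'H1' if f1 > lam * f0 else 'H0'            # Eq. (13)

bpas = [{'H0': 0.7, 'H1': 0.3}, {'H0': 0.6, 'H1': 0.4}, {'H0': 0.8, 'H1': 0.2}]
rel = [1.0, 0.9, 0.95]
print(fuse_and_decide(bpas, rel))  # H0
```

Raising the exponent to N sharpens the fused masses, which is why a few consistent reports quickly dominate the decision.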
3.4 Algorithm Flow The spectrum sensing algorithm based on the Pignistic evidence distance defines the credibility from the Pignistic evidence distance, uses credibility weighting to obtain the weighted average BPA, and finally compares the fused BPA masses to obtain the sensing result. The algorithm proceeds as follows: (1) each cognitive user performs local detection using the energy detection method, obtains the BPA functions mi(H0) and mi(H1), and sends them to the fusion center; (2) the fusion center converts the BPA functions into Pignistic probability functions and calculates the evidence distances d(mi, mj); (3) the similarity Sup(mi) is calculated from the evidence distances, and from it the evidence credibility Rel(mi); (4) the judgment threshold β is used to determine whether a user is malicious: a malicious user's evidence is discarded, while a normal user's evidence participates in the subsequent credibility-weighted sum, and the weighted average BPA is fused N − 1 times using the D-S fusion rule to obtain m(H0) and m(H1); (5) the decision coefficient λ is set and the judgment is made according to the final decision rule.
4 Simulation and Analysis To verify its effectiveness, the proposed algorithm is simulated. In the test, the SNRs of the 10 CUs lie in [−15 dB, −10 dB], the SNR of the MUs is −10 dB, and the algorithm threshold is β = 0.7. Figure 2 shows the receiver operating characteristic (ROC) curves of the proposed method and compares its performance with the methods in [13, 14] and [15]. The attack intensities of the MUs are η1 = 1.05 and η2 = 0.95. The methods in [13, 14] use continuous sensing, while the proposed method and the method in [15] use discrete sensing. The figure shows that the proposed method outperforms the others and can effectively reduce the impact of MUs. Compared with the method in [15], the proposed method reduces the computational complexity and resists SSDF attacks better. Compared with the method in [13], the proposed method does not rely on the number of times CUs participate in sensing or on historical credibility, yet it can still effectively distinguish MUs and improve the detection probability. Figure 3 shows how the credibility of MUs in the proposed algorithm changes at different attack probabilities, with attack intensities η1 = 1.1 and η2 = 0.9. The credibility decreases as the attack probability increases. When the attack probability is 0.3, the credibility of most MUs is higher than 0.7, but at some sensing moments it drops below 0.6, indicating that the MUs launch attacks at those moments. When the attack probability is 0.5, the overall credibility is lower than that of normal CUs, although it is still higher at some sensing moments, probably because the MUs did not attack at those moments. When the attack probability increases
Fig. 2 ROC curves of detection performance (detection probability versus false alarm probability for the proposed method and the methods in [13–15])
An Optimized Security Sensing Method Based on Pignistic Evidence …
Fig. 3 Credibility changes under different attack probabilities (credibility versus perception times for normal CUs and attack probabilities 0.3, 0.5 and 1)
to 1, the average credibility falls below 0.6. Thus the proposed algorithm can effectively identify MUs once an appropriate threshold is set. In the discrete sensing model, the proposed method resists MUs given an appropriate threshold, and its time complexity is lower than that of the other algorithms owing to the weighted average BPA method. Figure 4 shows the recognition rate of MUs under different signal-to-noise ratios (SNRs) when the proposed algorithm uses different thresholds β. Among the 10 CUs, 5 users have an SNR of −10 dB, the SNRs of the remaining 5 users increase from −22 dB to −8 dB, and the threshold values are 0.5, 0.6 and 0.7. The recognition rate is defined as the ratio of the number of correctly identified MUs to the total number of MU attacks. The results show that the higher the threshold, the higher the recognition rate. With threshold β = 0.7 and an SNR of −15 dB, the recognition rate reaches 0.8, and when the SNR is higher than −10 dB it reaches 100%. When the threshold is reduced to 0.6, the recognition rate still reaches 0.9 for SNRs above −10 dB. However, if the threshold is set too high, normal CUs may also be identified as MUs, which is undesirable. Therefore, the judgment threshold should be set according to actual needs.
Fig. 4 Recognition rate of MUs under different SNRs (recognition rate versus SNR in dB for thresholds β = 0.7, 0.6 and 0.5)
5 Conclusion This paper analyzes the types and methods of spectrum sensing in the presence of malicious users. In the discrete sensing model, malicious users participate in spectrum sensing intermittently, so historical credibility cannot be used to distinguish them. This paper proposes an optimized security sensing method based on the Pignistic evidence distance, which uses the evidence distance to measure the credibility of cognitive users. Simulation results show that the proposed method can effectively distinguish malicious users without any historical credibility reference, reduce their impact on the fusion results, and increase the robustness of the system, all at a much lower overall cost.
References

1. Haykin, S.: Cognitive radio: brain-empowered wireless communications. IEEE J. Sel. Areas Commun. 23(2), 201–220 (2005)
2. Abbass, N., Hussein, A.H.H., Jad, A.C., Ali, M., Koffi-Clément, Y.: Spectrum sensing for cognitive radio: recent advances and future challenges. Sensors 21, 2408 (2021)
3. Sharma, P., Abrol, V.: Individual vs cooperative spectrum sensing for cognitive radio networks. In: Tenth International Conference on Wireless and Optical Communications Networks (WOCN), pp. 1–5, Bhopal (2013)
4. Pei, Q.Q., Li, H.N., Zhao, H.Y., Li, N., Min, Y.: Security in cognitive radio networks. J. Commun. 1(17), 144–158 (2013)
5. Chen, R., Park, J.M., Bian, K.: Robust distributed spectrum sensing in cognitive radio networks. In: The 27th IEEE Conference on Computer Communications (INFOCOM), pp. 31–35, Phoenix, AZ (2008)
6. Qi, X.G., Qin, F.J., Liu, L.F.: Cooperative spectrum sensing method considering anti-camouflage SSDF malicious attacks. J. Xidian Univ. 43(4), 86–91 (2016)
7. Gokulakrishnan, K., Baskar, S., Srinath, V., Amudhevalli, R., Jothimani, S., Kiruthiga, B.: Improved trust node selection technique to defend spectrum sensing data falsification attack in cognitive radio network. Ann. Rom. Soc. Cell Biol. 5(25), 1482–1491 (2021)
8. Xu, Z.Y., Sun, Z.G., Guo, L.L., Muhammad, Z.H., Chintha, T.: Joint spectrum sensing and spectrum access for defending massive SSDF attacks: a novel defense framework. Chin. J. Electron. 2(31), 240–254 (2022)
9. Pang, J.F., Lin, Y., Li, Y.B., et al.: A new DS evidence fusion algorithm based on cosine similarity coefficient. In: International Conference on Measurement, Information and Control (MIC), vol. 2, pp. 1487–1490, Harbin (2013)
10. Lavanis, M.N.: Improving detection of primary user in a cognitive radio network with malicious users. In: Jawahar, A., Rajavel, R., Sreeja, B.S., Vinothkumar, C. (eds.) International Conference on Wireless Communications, Signal Processing and Networking (WiSPNET), pp. 733–736, Chennai (2016)
11. Du, L.P., Jiang, S.K., Li, M.F.: A weighted D-S evidence theory based cooperative spectrum sensing to defend PUE attack. In: International Conference on Cyberspace Technology (CCT), pp. 74–77, Beijing (2013)
12. Jiang, C.Z., Liu, T.T.: An improved evidence based cooperative spectrum sensing algorithm. In: Wang, Y., Yi, X. (eds.) International Conference on Digital Image Processing (ICDIP), vol. 8878, no. 4, pp. 2714–2739, Beijing (2013)
13. Han, Y., Chen, Q., Wang, J.X.: An enhanced D-S theory cooperative spectrum sensing algorithm against SSDF attack. In: IEEE Vehicular Technology Conference (VTC Spring), vol. 15, no. 3, pp. 1–5, Yokohama (2012)
14. Men, S.Y., Charge, P., Pillement, S.: A robust cooperative spectrum sensing method against faulty nodes in CWSNs. In: International Conference on Communication Workshop (ICCW), pp. 334–339, London (2015)
15. Han, Y., Chen, Q., Wang, J.X.: A D-S theory cooperative spectrum sensing algorithm with SSDF attack detection. Signal Process. 27(7), 1082–1087 (2011)
16. Yang, X.M., Wang, Q., Xu, J.P.: Spectrum sensing method of optimized D-S evidence theory. In: Liu, X., Hsu, H.-M. (eds.) Geo-Spatial Knowledge and Intelligence (GSKI), vol. 234, pp. 758–766, Wuhan (2019)
17. Ma, L.L., Zhang, F., Chen, J.G.: Synthetic rule of evidence based on pignistic probability distance. Comput. Eng. Appl. 51(24), 61–66 (2015)
Wireless Communication Channel Management for 6G Networks Applying Reconfigurable Intelligent Surface Ashish K. Singh, Ajay K. Maurya, Ravi Prakash, Prabhat Thakur, and B. B. Tiwari
Abstract In this paper, we present a wireless communication channel management technique for future-generation wireless communication systems using reconfigurable intelligent surfaces (RIS). An RIS consists of a large number of reflecting unit-cells that reflect the incident signal from the base station with a desired phase shift so that, superimposed with the non-reflected and other reflected signals, detrimental interference is suppressed. The RIS phase-shift matrix can be optimized to steer the reflected signal toward specific directions, and effective phase-shift optimization is essential to substantially improve the performance of the wireless communication network. In nearly all circumstances, the wireless links from the base station to the RIS and from the RIS to the users form a cascaded, dual-hop channel whose channel state information (CSI) must be acquired. The direct channel between the base station and the user can be estimated by setting the RIS to absorption mode and employing conventional channel estimation techniques. Further, the impact of RIS on spectral and energy efficiency is elaborated together with the associated research challenges. Keywords Intelligent reflecting surface · Wireless communication · Wireless channel · Spectral efficiency · Energy efficiency
A. K. Singh · A. K. Maurya · R. Prakash · B. B. Tiwari Department of Electronics and Communication Engineering UNS IET, Veer Bahadur Singh Purvanchal University, Jaunpur 222003, India P. Thakur (B) Department of Electrical and Electronic Engineering Science, University of Johannesburg, Johannesburg 2006, South Africa e-mail: [email protected] Symbiosis Institute of Technology, Symbiosis International (Deemed University), Pune, Maharashtra, India © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 Y. S. Shmaliy and A. Nayyar (eds.), 7th International Conference on Computing, Control and Industrial Engineering (CCIE 2023), Lecture Notes in Electrical Engineering 1047, https://doi.org/10.1007/978-981-99-2730-2_77
1 Introduction Currently, the explosive growth of wireless communication devices and the demand for high quality-of-service (QoS) are making spectrum resources progressively scarce and precious for future-generation wireless communication. Dynamic spectrum access (DSA) is a promising technique to confront this issue, in which unlicensed users are enabled to dynamically access the licensed users' spectrum with tolerable interference [1]. However, the performance of traditional DSA is severely restricted by strong cross-link interference. For this reason, reconfigurable intelligent surface (RIS) technology, which plays a very significant role in enriching the implementation of DSA, has been introduced. An RIS consists of a large number of reflecting unit-cells that reflect an incident signal from the base station with a desired phase shift so that it superimposes with the non-reflected and other reflected signals in a way that suppresses unfavourable interference. Several basic research attempts have been carried out, including RIS-augmented spectrum sensing and RIS-improved resource allocation design. Physical layer technologies such as massive MIMO, cognitive radio, NOMA and millimeter-wave (mmWave) communications play an imperative role [2] in solving the spectrum scarcity problem and enhancing the capacity of wireless communication networks. As predicted by the International Telecommunication Union (ITU-R) [3] and reported in Fig. 1, global data traffic will increase and reach up to 5 zettabytes per month by 2030, an emerging issue that needs considerable attention. Moreover, data-intensive applications demand ever-increasing traffic that can hardly be served by current networks, attracting potential research interest from both industry and academia in the near future.
The terahertz (THz) spectrum is one of the potential possibilities [4], which could further increase the spatial diversity for future 6G mobile communication. Various potential technologies, including artificial intelligence, spectrum management in different regimes of the spectrum, advanced computing technologies, and blockchain for security, are being vigorously explored for 6G communication [5]. RIS technology has thus emerged as a way to control the wireless channel in a real-time reconfigurable manner [6], and its programmable features make it attractive for 6G networks. Since there are multiple conflicting objectives (energy efficiency as well as spectral efficiency) in RIS-enhanced spectrum sharing, a good trade-off among the objectives needs to be accomplished. The distinctive improvements in wireless network capacity obtained by controlling the wireless ecosystem make this an emerging research area. Therefore, to exploit the extensive possibilities of future-generation wireless ecosystems, a programmable and controllable radio environment will be an essential component of system optimization [7]. It is assembled from planar or even conformal engineered metasurfaces containing many reflecting unit-cells that are adaptable by an intelligent controller, as shown in Fig. 2. Thus, smart radio environments empowered by RIS can control wave characteristics intelligently, improve the anticipated signals and alleviate the
Fig. 1 The predicted data traffic up to 2030 by ITU-R [7]
Fig. 2 Architecture of RIS block diagram with hardware configuration for channel estimation
interference. Thus, it has enormous capability to transform the design of the wireless ecosystem, especially when incorporated with emerging communication tools such as NOMA, ultra-massive MIMO, terahertz communications, artificial-intelligence-assisted networks, and computing intelligence. An RIS reconfigures the propagation environment through software-controlled reflection, and energy efficiency under hardware imperfections is a main performance metric to optimize. The phase of each reflecting element is tuned individually to steer the signals in a specific direction. Channel estimation in RIS-supported communication systems faces particular challenges because the RIS has no radio-frequency or baseband processing capabilities. The notion at hand is that the transmission environment itself becomes an optimization variable of the communication network, developing a smart radio ecosystem as illustrated in Fig. 2. This new player provides probable solutions for constraints that appear in the wireless ecosystem and materializes emerging use cases. There are several indisputable performance improvements of an RIS-supported smart radio environment: 1) an RIS can be positioned almost anywhere; 2) RISs are ecological and fulfil the requirements of green communications; 3) RISs support full-band transmission and full-duplex operation; and 4) the power gain of an RIS follows a quadratic scaling law in the number of unit-cells. However, the application of RIS also raises several additional issues
in the design of the wireless communication system [8]. The authors' contributions in this article are summarized as follows. (1) A wireless communication channel management/channel modeling technique for future-generation wireless communication networks using reconfigurable intelligent surfaces is presented. (2) Radio resource allocation management employing RIS, its impact on the key performance parameters and its implementation issues are discussed. (3) A comparative study of the energy efficiency of RIS-based communication and multi-antenna DF relay systems is presented. (4) Finally, the potential open research challenges, RIS standardization scenarios and the role of machine learning/deep learning and other emerging techniques for RIS-enhanced dynamic spectrum access and wireless channel control are discussed. The rest of the article is structured as follows. Section 2 depicts the architecture of the RIS from a wireless communication network modelling point of view. Section 3 explores the role of RIS from the wireless channel modelling perspective. The open research challenges are presented in Sect. 4. The standardization activity on RIS is discussed in Sect. 5, and finally Sect. 6 concludes the work and recommends future research directions.
2 Architecture of RIS from a Channel Modelling Perspective A representative RIS structure is developed with metamaterials and comprises a planar surface, made of one or more layers, and a controller, as shown in Fig. 2. A detailed design of a planar surface with three layers is reported in [9]. The upper layer interacts directly with the incident signal coming from the base station and consists of numerous reflecting unit-cells printed on a dielectric substrate. The middle layer is made of conducting material and is generally employed to avoid signal energy leakage. Finally, the last layer is an integrated circuit board used for adjusting the reflection coefficients of the RIS unit-cells; its operation is controlled by a smart controller such as a field-programmable gate array (FPGA). In usual operation, the reflection coefficients are computed and optimized at the base station and then sent to the smart controller of the RIS over a dedicated feedback link. The reflection coefficients rely mainly on the channel state information (CSI) and need to be updated only when the CSI varies, on a much longer time scale than the data symbol duration. Each reflecting unit-cell of the RIS structure in Fig. 2 uses the smart controller to set the biasing voltage of a PIN diode, switching it between "on" and "off" modes (see the equivalent circuit in Fig. 2), which can realize a phase shift of up to π radians [10]. Significantly more PIN diodes must be integrated in each reflecting unit-cell to increase the number of phase-shift levels. The incident signal at the RIS is reflected towards the receiver
of the user by the reflecting unit-cells operated by the smart controller. The received signal at the user is composed of the signals coming from the reflecting unit-cells and from the direct link (channel), which helps to enhance the signal quality and reduce interference. In general, the RIS is deployed at high locations to reduce the cost of establishing a new station. In addition, the phase-shift matrix of the RIS can be optimized to reflect the signal toward specific directions, so efficient phase-shift optimization can significantly improve the spectral and energy efficiency of the wireless communication network. Moreover, RIS systems are compatible with many other emerging technologies. The RIS reflecting unit-cells passively reflect the signal coming from the base station without the sophisticated signal processing demanded by RF transceiver hardware, which is a plus point for practical implementation. Further, an RIS is more economical in terms of hardware and power consumption than conventional active transmitters [10]. Because of the passive nature of the reflecting unit-cells, it can be fabricated light-weight with few layers in a planar or conformal format, which eases installation at an appropriate place. Moreover, an RIS inherently operates in full-duplex (FD) mode without introducing thermal noise or self-interference, which yields significantly higher spectral efficiency than an active half-duplex (HD) relay, with lower signal processing complexity than active FD relays that require sophisticated self-interference cancellation.
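The PIN-diode control described above amounts to coarse, discrete phase control per unit-cell. As a rough illustration (the function and its parameters are ours, not from the paper), a desired continuous reflection phase can be quantized to the nearest state realizable with a given number of control bits; a single on/off PIN diode corresponds to one bit, i.e., the two phase states 0 and π:

```python
import math

def quantize_phase(phi, bits=1):
    """Quantize a desired reflection phase phi (radians) to the nearest level a
    unit-cell with `bits` of PIN-diode control can realize (2**bits levels
    spread over the full 2*pi range)."""
    levels = 2 ** bits
    step = 2 * math.pi / levels
    phi = phi % (2 * math.pi)                # wrap into [0, 2*pi)
    return (round(phi / step) % levels) * step
```

Adding more PIN diodes per unit-cell (more bits) refines the phase grid, which is the point made above about integrating more diodes to increase the phase-shift levels.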
3 RIS for Channel Modeling RIS is an innovative technology employed to enhance wireless data transmission networks by reconfiguring the wireless signal propagation ecosystem. This programmable structure controls the propagation environment, in particular the features of the radio channels, by controlling the electric and magnetic characteristics of the signal through the huge number of reflecting unit-cells of the RIS. It is an economical way to achieve significantly enhanced spectral and energy efficiency because it comprises low-cost passive reflecting unit-cells that reflect the incident base station signals with adjustable phase shifts. Due to these appealing advantages, RIS is regarded as one of the most promising techniques for 6G communication networks, and research on RIS is still in its infancy.
3.1 Channel Estimation The channel state information (CSI) needs to be estimated to support the phase-shift design and reap the full advantages of RIS-supported communication networks. Instantaneous CSI provides an opportunity to adapt the transmitted signal for spatial multiplexing and accomplish a significantly lower bit
error rate. In addition, statistical CSI characterizes the channel for transmission optimization. However, CSI acquisition is fundamentally limited by the nature of the channel. In fast-fading ecosystems the channel conditions vary rapidly within the transmission of an information symbol, so only statistical CSI can reasonably be estimated. In slow-fading ecosystems, instead, the instantaneous CSI can be estimated with reasonable accuracy and used for transmission adaptation for some time before becoming outdated. In practical communication systems, the available CSI often lies between these two levels. The scenario considered here is a wireless system in which a multi-antenna base station communicates with a single-antenna user supported by an RIS, with close-to-instantaneous CSI estimation. In most situations the cascaded CSI of the two hops, from the base station to the RIS and from the RIS to the user, is sufficient for the communication system. The cascaded channel gain encompasses many channel coefficients because of the vast number of reflecting unit-cells, and estimating them requires a number of pilots that grows with the number of unit-cells. Thus, reducing the channel estimation overhead is an open research problem, made harder by the fact that an RIS has no sophisticated signal processing capability of its own. The direct channel between the base station and the user can be estimated by setting the RIS into absorption mode and employing traditional channel estimation techniques. However, to keep the power consumption and complexity of the RIS as low as possible, efficient and advanced techniques are needed for channel estimation between the RIS and the base station and between the RIS and the user.
Several techniques may be employed to estimate the channels between the RIS and the base station and between the RIS and the user, depending on the hardware capabilities of the RIS. He et al. [11] presented a cascaded channel estimation technique in which a single element is turned on at a time while all the other elements are turned off; by transmitting a pilot signal from the user, the product of that element's channels to the BS and to the user is estimated at the BS. However, this technique incurs a long estimation delay because of the massive number of RIS elements, and estimating with only a single element turned on at a time may degrade the channel estimation accuracy. Ning et al. [12] introduced an estimation technique based on beam training, in which the RIS quickly sweeps the reflection coefficients of its elements over a pre-defined codebook instead of explicitly estimating the RIS/BS and RIS/user channels; the best beam is then selected for the RIS configuration based on the feedback of the user's received signal strength. An alternative approach is exploited by Yuan et al. [13], in which the RIS has some active elements connected to receive RF chains and baseband processing units, so that channel estimation at the RIS becomes feasible based on training signals from the BS and users. Owing to the high channel correlation between adjacent elements of the RIS, accurate estimation results can be achieved with a limited number of active elements, at the cost of additional complexity and power consumption.
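The one-element-at-a-time technique of [11] can be sketched as follows, under simplifying assumptions that are ours rather than the paper's (single-antenna BS and user, a known unit pilot, least-squares estimation, one pilot slot per element):

```python
import random

def estimate_cascaded(h, g, pilot=1.0, noise_std=0.0, seed=0):
    """One-element-at-a-time estimation of the cascaded channels c_n = h_n * g_n.
    h: BS-to-RIS channels, g: RIS-to-user channels, one entry per unit-cell.
    In each pilot slot exactly one element reflects; the receiver observes
    y_n = c_n * pilot + w_n and forms the least-squares estimate y_n / pilot."""
    rng = random.Random(seed)
    est = []
    for hn, gn in zip(h, g):
        w = complex(rng.gauss(0, noise_std), rng.gauss(0, noise_std))
        y = hn * gn * pilot + w        # only element n is switched on in this slot
        est.append(y / pilot)          # LS estimate of the cascaded coefficient
    return est
```

The sketch makes the overhead issue above concrete: N unit-cells require N pilot slots, so the training time grows linearly with the surface size.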
Fig. 3 Comparison of an RIS and a DF relay system in terms of (a) spectral efficiency and (b) energy efficiency [14]
3.2 Optimization of Phase Shifts RIS-assisted wireless networks require the optimization of the RIS phase shifts to meet the objectives of each application. The challenge is to optimize the phase shifts efficiently while accounting for hardware inconsistencies. In general, phase-shift optimization in RIS-assisted cellular networks is employed to increase the throughput of multi-user cellular wireless networks. The phase-shift design relies on model-based optimization approaches that require many iterations to reach an optimal solution, because of the non-convexity of the phase-shift constraints and the non-convex nature of the objective function in practical deployments furnished with a massive number of unit-cells reflecting the incident signal from the base station. The current methods therefore suffer from high computational complexity, which makes them inappropriate for real-time applications. An artificial-intelligence-based approach is an attractive data-driven alternative that can learn system characteristics without any specific mathematical representation. Figure 3 presents a comparison between an RIS-supported communication system and a multi-antenna decode-and-forward (DF) relay system implemented in an identical environment. In the considered system the direct path is non-existent, and the line-of-sight path goes via the RIS or the multi-antenna DF relay. The transmitter is 300 m away from the RIS/relay and the user is 10 m from it. The transmit power corresponds to 10 W per 20 MHz at the base station and 0.1 W at the relay. The antenna gain is 10 dBi at the base station and 0 dBi at the relay and the user. The glass penetration loss is −20 dB, and the noise figure is 10 dB. Figure 3(a) shows the spectral efficiency achieved by the RIS and the DF relay for different surface areas.
For practically relevant spectral efficiencies the DF relay requires a smaller surface, while the advantage of the RIS is the absence of power amplifiers and its full-duplex operation. Figure 3(b) considers a plane wave reflected in an angular direction governed by the law of reflection, such that the receiver perceives the transmitter as being positioned at the mirrored location. The RIS can be configured jointly with respect to the
angle of the reflected beam and its shape; it should therefore not be understood merely as a mirror. Further, Fig. 3(b) illustrates the operation of an RIS configured to maximize the SNR at the receiver end [15].
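In the special single-user, single-antenna case without a direct path, the non-convex phase-shift problem above admits a closed-form solution: each element's phase is chosen to cancel the phase of its cascaded channel, so all reflected paths add coherently at the receiver. A hedged sketch (unit-modulus channels and all names are our assumptions):

```python
import cmath
import random

def snr_gain(h, g, theta):
    """Received power |sum_n h_n * e^{j*theta_n} * g_n|^2 through the RIS,
    with no direct path (noise power normalized to 1)."""
    return abs(sum(hn * cmath.exp(1j * t) * gn
                   for hn, t, gn in zip(h, theta, g))) ** 2

def optimal_phases(h, g):
    """Co-phase every cascaded path: theta_n = -(arg h_n + arg g_n)."""
    return [-(cmath.phase(hn) + cmath.phase(gn)) for hn, gn in zip(h, g)]

rng = random.Random(1)
n = 32
h = [cmath.exp(1j * rng.uniform(0, 2 * cmath.pi)) for _ in range(n)]
g = [cmath.exp(1j * rng.uniform(0, 2 * cmath.pi)) for _ in range(n)]
best = snr_gain(h, g, optimal_phases(h, g))
```

With unit-amplitude channels the co-phased gain equals N² (here 32² = 1024), which is the quadratic power-gain scaling law noted in the introduction; unoptimized phases yield far less.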
4 Open Research Challenges Numerous studies have appeared since the commencement of RIS research, covering channel estimation, SINR maximization, beamforming/beam-steering optimization, and SNR and SEP derivations. RIS proficiency for mmWave/THz, FSO and visible light applications [16], including via machine learning tools, is also well explored. A joint (active and passive) beamforming strategy that minimizes the base station's total transmit power, together with optimization techniques for the resulting non-convex problem, has been reported by Wu and Zhang [17]. In addition to the aforementioned issues, the following research challenges should be confronted to assure a better level of reliability in smart wireless communication networks.
4.1 Realistic Optimization Frameworks The optimization of RIS implementations needs to address practical problems. The majority of reported optimization studies on RIS focus on maximizing energy efficiency, data rate and SINR, and most recent investigations rest on theoretical assumptions such as perfect channel estimation, neglected internal losses and far-field radiation, precise beamforming/beam-steering, single-antenna user equipment, etc. Future research therefore needs to re-examine these assumptions through practical approaches. Incorporating the full-duplex attributes of RIS could further optimize communication networks in terms of resource allocation and beamforming. Integrating cooperative NOMA and RIS in full-duplex communications, with proper constraints and objectives, will also be an exciting future direction.
4.2 Hybrid Systems (RF-VLC) Controlling the unpredictability of the propagation ecosystem with RIS offers practical 5G/beyond-5G communication systems additional advantages such as energy efficiency and full-spectrum response. However, beyond-5G and 6G communication networks demand substantial improvements in mobile broadband and ultra-reliable low-latency communications [18]. In this context, a fusion of RF and visible light communication systems will assist in delivering
quick, cost-effective and reliable communication networks [19]. The visible light regime of the spectrum can provide extremely large bandwidth, inherent physical layer security and robustness to electromagnetic interference. The visible light spectrum combined with RIS-RF could be explored in outdoor ecosystems as a safe and health-friendly communication method. The deployment of such a fused visible-light/RF RIS communication system delivers a reliable communication structure that compensates for the potential malfunction of one of the connecting links. Currently, most reported studies on RIS address only RF/mmWave communications; future research could therefore investigate the performance of these fused technologies in the RIS field.
4.3 Health Issues With the emergence of 6G, RF technologies raise a wide range of health concerns for living beings, as reported in [20]. Health matters connected with electromagnetic radiation exposure have been an open research issue for years. The recent introduction of RIS in indoor communication deployments has raised further concerns about potential health consequences, particularly in indoor mmWave environments that offer large bandwidth for high data rates. Therefore, trading off performance against health issues by accurately adapting various network constraints is an open research challenge.
4.4 Integration of 5G with 6G Technologies RIS is a prospective physical layer technology that produces a new transmission concept fulfilling the needs of future 6G networks [21]. Smart radio ecosystems will have a probable impact on the forthcoming 6G technology markets, enabling substantial advances in spectral efficiency with cost-effective solutions. A largely unexplored direction is the combination of RIS with emerging 5G technologies [22], beamforming [23] and physical layer security [24]. This technology can offer pervasive and reliable wireless communication, curbing additional interference components such as noise and inter-user interference through both NLOS and LOS paths, and it can enhance network capacity and user experience in 6G networks, particularly in high-density user ecosystems, because RIS decreases the network interference level. Furthermore, the economic impact and sustainability of RIS-supported smart radio ecosystems for 6G marketplaces are key research issues that must be further investigated [25].
5 Standardization of RIS Initial standardization efforts on RIS, together with extensive research, have already been initiated by three different organizations: 1) the FuTURE Mobile Communication Forum, 2) the China Communications Standards Association (CCSA) and 3) the European Telecommunications Standards Institute (ETSI) Industry Specification Group (ISG). All research and related contributions can be obtained on the ETSI portal [26]. Normative specifications may be considered after the presentation of the ISG's study report, expected by the end of 2023. As per the draft report of the ITU radio-communication sector [7], RIS is considered a potential physical layer mechanism for 6G networks. The proposal submitted to 3GPP by ZTE Corporation in March 2021 also illustrates the interest in making RIS an important element of 5G communication systems and beyond, especially for system operators [27]. For future standardization activities, the ITU primarily emphasizes regulatory aspects such as spectrum management and the business model; however, wireless channel characterization employing RIS is also feasible for consideration by the ITU. With Release 18, attention has turned to the second stage of 5G, but there is no formal strategy for 6G as yet. The present standardization progress on 5G suggests that the first 6G release might fall between Release 20 and Release 22 as far as 3GPP is concerned. Channel modelling is the most important preparatory work that needs to be completed for the standardization of RIS as an entirely novel technology [28].
There are two feasible paths to standardize RIS in 3GPP: 1) include a study item (SI) on use cases, deployment scenarios, and channel characterization in Release 19, and 2) initiate a work item (WI) after completion of that SI, allowing RIS to be positioned as a constituent of 5G networks and helping the technology grow toward industrialization. RIS would thereby become a potential component of 5G standardization and, consequently, feed into 6G standards. Another feasible approach is to standardize RIS as a component of the 6G standards together with other new features, which suggests deployment of RIS by 2030.

Fig. 4 Predicted standardization for RIS [29]

Wireless Communication Channel Management for 6G Networks …

The standardization of RIS needs to be conducted in a timely manner to enable it to join other technologies in building greener, safer, and more reliable wireless networks, because this technology is regarded as generic and band-agnostic. In addition, there are still challenging research issues that need to be addressed. Given the present research status and the maturity of the technology, the key facets of RIS standardization, such as channel modeling and channel estimation, also need to be related to the specific frequency regime of the spectrum in which RIS would be installed. It would, however, be more effective to start with the lower frequency bands.
6 Conclusion In this paper, a well-organized assessment of the capability of RIS to mitigate the potential signal blockage and coverage issues of future-generation communication networks is presented. An RIS implementation jointly supports communication between transmitter and receiver and can use polarization modulation to transfer its own information. It is very challenging to design the entire transmission protocol in consideration of both the RIS reconfiguration algorithms and the channel estimation complexity in time-variant ecosystems. In addition, significant attention has been devoted to merging sensing and communication using a joint infrastructure and common waveforms; it is therefore very interesting to explore the possibility of RIS for joint sensing and propagation. Further, the deployment of several distributed RIS to assist numerous users in a real communication environment raises many open questions and introduces major complications, such as resource allocation, user scheduling, channel estimation, and RIS configuration. These challenges require a massive number of control signals and depend on computationally expensive techniques; RIS-aided communication structures combined with machine learning are therefore a favorable research direction. For B5G and 6G communication networks, RIS is a promising physical-layer approach that enhances QoS and minimizes power consumption relative to conventional structures. This paper presents a distinctive scientific assessment of the underlying physical principles together with their optimization and implementation assessment frameworks. Finally, we have also identified several exciting open research challenges for RIS-supported future-generation communication networks, including hybrid RF-VLC systems, health considerations, new resource allocation problems, and localization.
Currently, there are growing research and development activities on RIS in both academia and industry due to its potential for 5G/B5G/6G wireless communication networks. Wireless networks with RIS will transform the signal propagation environment by dynamically programming one or more key performance indicators. The importance of RIS technology, together with state-of-the-art, existing, and forthcoming standardization activities for next-generation wireless networking, has been highlighted. In addition, the presented RIS channel estimation techniques for supported wireless ecosystems constitute a potential requirement for the optimized integration of RIS in 6G.
References

1. Thakur, P., Singh, G.: Spectrum Sharing in Cognitive Radio Networks: Towards Highly Connected Environments, 1st edn. John Wiley & Sons, Hoboken (2021)
2. Zeng, Y., Zhang, R.: Millimeter wave MIMO with lens antenna array: a new path division multiplexing paradigm. IEEE Trans. Commun. 64(4), 1557–1571 (2016)
3. Tariq, F., Khandaker, M.R.A., Wong, K.-K., Imran, M.A., Bennis, M., Debbah, M.: A speculative study on 6G. IEEE Wireless Commun. 27(4), 118–125 (2020)
4. Hu, S., Rusek, F., Edfors, O.: Beyond massive MIMO: the potential of data transmission with large intelligent surfaces. IEEE Trans. Signal Process. 66(10), 2746–2758 (2018)
5. Zhang, Z., et al.: 6G wireless networks: vision, requirements, architecture, and key technologies. IEEE Veh. Technol. Mag. 14(3), 28–41 (2019)
6. Zhao, J., et al.: Programmable time-domain digital-coding metasurface for non-linear harmonic manipulation and new wireless communication systems. Nat. Sci. Rev. 6(2), 231–238 (2019)
7. ITU-R: Future technology trends of terrestrial IMT systems towards 2030 and beyond (2021). https://www.itu.int/md/R19-WP5D-210607-TD-0400
8. Singh, A.K., Maurya, A.K., Prakash, R., Thakur, P., Tiwari, B.B.: Reconfigurable intelligent surface with 6G for industrial revolution: potential applications and research challenges. Paladyn J. Behav. Rob. 14, 20220114/1-12 (2023)
9. Wu, Q., Zhang, R.: Towards smart and reconfigurable environment: intelligent reflecting surface aided wireless network. IEEE Commun. Mag. 58(1), 106–112 (2020)
10. Chen, W., et al.: Angle-dependent phase shifter model for reconfigurable intelligent surfaces: does the angle-reciprocity hold? IEEE Commun. Lett. 24(9), 2060–2064 (2020)
11. He, Z.-Q., Yuan, X.: Cascaded channel estimation for large intelligent metasurface assisted massive MIMO. IEEE Wireless Commun. Lett. 9(2), 210–214 (2019)
12. Ning, B., Chen, Z., Chen, W., Du, Y., Fang, J.: Terahertz multi-user massive MIMO with intelligent reflecting surface: beam training and hybrid beamforming. IEEE Trans. Veh. Technol. 70(2), 1376–1393 (2021)
13. Yuan, X., Zhang, Y.-J.A., Shi, Y., Yan, W., Liu, H.: Reconfigurable-intelligent-surface empowered wireless communications: challenges and opportunities. IEEE Wireless Commun. 28(2), 136–143 (2021)
14. Emil, B., Özdogan, O., Larsson, E.G.: Reconfigurable intelligent surfaces: three myths and two critical questions. IEEE Commun. Mag. 58(12), 90–96 (2020)
15. Zheng, B., et al.: A survey on channel estimation and practical passive beamforming design for intelligent reflecting surface aided wireless communications. IEEE Commun. Surv. Tutorials 24(2), 1035–1071 (2022)
16. Pan, C., et al.: Reconfigurable intelligent surfaces for 6G systems: principles, applications, and research directions. IEEE Commun. Mag. 59(6), 14–20 (2021)
17. Wu, Q., Zhang, R.: Intelligent reflecting surface enhanced wireless network via joint active and passive beamforming. IEEE Trans. Wireless Commun. 18(11), 5394–5409 (2019)
18. Basar, E., et al.: Wireless communications through reconfigurable intelligent surfaces. IEEE Access 7, 116753–116773 (2019)
19. Abdelhady, A.M.A., et al.: Visible light communication via intelligent reflecting surfaces: metasurfaces vs mirror arrays. IEEE Open J. Commun. Soc. 2, 1–20 (2020)
20. Miller, A.B., et al.: Risks to health and well-being from radio-frequency radiation emitted by cell phones and other wireless devices. Front. Pub. Health 7, 223/1-223/10 (2019)
21. Yuan, Y., Zhao, Y., Zong, B., Parolari, S.: Potential key technologies for 6G mobile communications. Sci. China Inf. Sci. 63(8), 183301/1-183301/19 (2020)
22. Ge, L., et al.: Joint beamforming and trajectory optimization for intelligent reflecting surfaces-assisted UAV communications. IEEE Access 8, 78702–78712 (2020)
23. Nadeem, Q.-U.-A., et al.: Intelligent reflecting surface-assisted multi-user MISO communication: channel estimation and beamforming design. IEEE Open J. Commun. Soc. 1, 661–680 (2020)
24. Guan, X., Wu, Q., Zhang, R.: Intelligent reflecting surface assisted secrecy communication: is artificial noise helpful or not? IEEE Wireless Commun. Lett. 9(6), 778–782 (2020)
25. Renzo, M.D., et al.: Smart radio environments empowered by reconfigurable AI meta-surfaces: an idea whose time has come. EURASIP J. Wireless Commun. Netw. 2019(1), 129/1-129/20 (2019)
26. ETSI portal. https://portal.etsi.org
27. ZTE Corporation, Sanechips: Support of reconfigurable intelligent surface for 5G Advanced, March 2021. https://www.3gpp.org/ftp/TSG RAN/TSG RAN/TSG 91/Docs/RP-21061.zip
28. Zhang, H., Di, B.: Intelligent omni-surfaces: simultaneous refraction and reflection for full-dimensional wireless communications. IEEE Commun. Surv. Tutorials 24(4), 1997–2028 (2022)
29. Liu, R., Wu, Q., Di Renzo, M., Yuan, Y.: A path to smart radio environments: an industrial viewpoint on reconfigurable intelligent surfaces. IEEE Wireless Commun. 29(1), 202–208 (2022)
Fault Diagnosis, Simulation and Optimization Techniques
Research on Causal Model Construction and Fault Diagnosis of Control Systems Qun Zhu, Zhi Chen, Jie Liu, Zhuoran Xu, Juan Wen, Jinghua Yang, and Yifan Jian
Abstract Fault diagnosis of complex control systems is a difficult and active research topic at this stage. To explore the internal operating characteristics of complex control systems and their fault propagation modes, and thereby achieve efficient and accurate fault diagnosis, this paper studies control-system fault diagnosis methods based on cause-effect relationships. It analyzes the action relationships between system components from the perspective of the structure of the control equations, proposes a method for constructing cause-effect diagram models from the control equations, and implements it as an algorithm. Information flow is then used to quantify the causal effect: when the steady state of the system is disrupted, the information flow exhibits a jump, which is exploited to derive a fault diagnosis reasoning method. The approach is verified by simulation of the pressure control system of a nuclear power plant regulator, and the experimental results confirm the effectiveness of the proposed method. Keywords Control system · Control equation · Causal graph model · Information flow · Fault diagnosis
Q. Zhu · J. Liu (B) · Z. Xu · J. Yang
School of Computer Science, University of South China, Hengyang 421200, China
e-mail: [email protected]

Z. Chen · Y. Jian
Science and Technology on Reactor System Design Technology Laboratory, Nuclear Power Institute of China, Chengdu 610064, China

J. Wen
School of Electrical Engineering, University of South China, Hengyang 421200, China

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
Y. S. Shmaliy and A. Nayyar (eds.), 7th International Conference on Computing, Control and Industrial Engineering (CCIE 2023), Lecture Notes in Electrical Engineering 1047, https://doi.org/10.1007/978-981-99-2730-2_78

1 Introduction

Complex systems are wholes composed of several associated subsystems. Scholars in China and abroad have developed many fault diagnosis methods addressing the coherence, hierarchy, fault propagation, radioactivity, delay, uncertainty, and other characteristics of complex systems; these methods are broadly divided into two categories, quantitative and qualitative. Qualitative analysis methods can be further divided into graphical methods, expert systems, and qualitative simulation, each representing a direction of development. Among these, graph-based fault diagnosis methods have the advantages of being intuitive and easy to maintain; they do not require precise analytical models, do not rely on historical fault information, and can identify new faults. During fault diagnosis reasoning, fault propagation paths can be given and the results are highly interpretable, which has attracted attention from several fields of fault diagnosis research; several branches have emerged, such as Petri net models, causal graph models, and fault tree models. However, these methods also have shortcomings: some models are built from knowledge obtained by qualitative analysis of system functions and fault characteristics and do not make use of quantifiable monitoring history data, which leads to low diagnostic accuracy and fault sets too large for practical applications.

The analysis of the causality of two events or processes arises in many disciplines. As systems become more complex, the analysis of causality between time series is particularly important and is one of the "biggest challenges" in data science. A common practice is to compute time-series correlations. However, correlation is not asymmetric and therefore does not necessarily allow the analysis of causal relationships between variables; as Barnard put it, "correlation is not causation." An alternative approach is the Granger causality test, which applies to stationary time series data. As the name implies, this test can only return a yes-or-no answer and lacks the quantitative information that may be needed in many cases.
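The yes/no character of the Granger test mentioned above can be illustrated with a minimal sketch (the helper name `granger_ratio`, the single-lag setup, and the toy simulation are ours, assuming NumPy is available): it compares the least-squares residual of a restricted autoregression against one augmented with the other series' past.

```python
import numpy as np

def granger_ratio(x, y, lag=1):
    """Crude one-lag Granger-style check: does adding the past of y reduce
    the least-squares residual when predicting x?  Returns the ratio
    restricted_rss / full_rss; values well above 1 suggest y helps predict
    x (y 'Granger-causes' x), values near 1 suggest it does not."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    target = x[lag:]
    ones = np.ones(len(target))
    X_restricted = np.column_stack([x[:-lag], ones])       # x's own past only
    X_full = np.column_stack([x[:-lag], y[:-lag], ones])   # plus y's past

    def rss(A):
        beta, *_ = np.linalg.lstsq(A, target, rcond=None)
        return float(np.sum((target - A @ beta) ** 2))

    return rss(X_restricted) / rss(X_full)

# Toy system in which y drives x but not vice versa.
rng = np.random.default_rng(0)
n = 4000
x, y = np.zeros(n), np.zeros(n)
for t in range(n - 1):
    y[t + 1] = 0.6 * y[t] + rng.standard_normal()
    x[t + 1] = 0.5 * x[t] + 0.8 * y[t] + rng.standard_normal()

r_xy = granger_ratio(x, y)   # expected well above 1: y helps predict x
r_yx = granger_ratio(y, x)   # expected near 1: x does not help predict y
```

Note that the ratio only says "yes, y helps" or "no, it does not"; it is not a calibrated measure of causal strength, which is exactly the limitation the text points out.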
To remedy this, attention has turned in recent years to an empirical information-theoretic metric, transfer entropy. However, this metric is difficult to evaluate and quite complex to compute, and it is biased because its analysis depends on the autocorrelation of the system. Using information transfer (information flow) to measure the causality between dynamical events not only quantifies the magnitude of the causality but also indicates its direction. Over the past decades there have been empirical or semi-empirical explorations, including transfer entropy, but a rigorous treatment did not appear until Liang and Kleeman established a mathematical expression for the information transfer of two-dimensional deterministic and stochastic dynamical systems with known dynamics. A key difficulty in practical engineering, however, is that the dynamical process of the system is often hard to specify. To solve this problem, Liang introduced the Liang-Kleeman information flow into time series analysis, which makes the causal relationship between two time series mathematically and physically rigorous. The resulting rigorously derived formula is formally very concise, involving only covariance calculations, and it determines both whether a causal relationship exists between two related events and its direction. For example, for related variables X1 and X2, calculate the information flow from X2 to X1: if the result is nonzero, X2 is causal to X1; if the result is zero, X2 exerts no causal influence on X1. This has been verified in several practical causal analysis problems.

An important application of Liang-Kleeman information flow theory is the causal analysis of time series: it provides quantitative formulas for causal analysis among time series, lending the analysis mathematical and physical rigor, i.e., the magnitude of the causal effect can be calculated for several related causal pairs, from which the variable with the greatest effect on the outcome can be identified. The causal relationship explored in this paper is the pairwise causal relationship between variables as described by information flow.

The control system itself describes and characterizes the relationship between the controlled object, environmental factors, and task objectives, establishing their correlations, influence mechanisms, and evaluation indicators, and then determining and exerting control so that the controlled object completes the task objectives in its environment. The relationship between the "input" and "output" of the system emphasized in control theory is essentially a causal relationship, and the control of input to output is equivalent to a causal process; in control theory, the input and output of the system correspond to the cause and effect of events. There is coupling between the variables of the system components, and a change in one variable of the system will affect other variables. On the other hand, the observation of a system output variable contains not only the information of that state variable itself but also the information of the other states connected with it. This coupling information between variables is the information transfer (information flow) of the state variables, that is, the causal action between variables; to explore the causal relationships between variables, it is necessary to find the pairs of variables with coupling relationships.
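As a rough illustration of the "covariance only" character of the formula, the following sketch implements the covariance-based estimator of the Liang (2014) information flow between two observed series. The function name and the toy simulation are ours; the estimator assumes an underlying linear two-dimensional system and uses a Euler-forward difference for the time derivative.

```python
import numpy as np

def liang_information_flow(x1, x2, dt=1.0):
    """Estimate the information flow T_{2->1} from series x2 to series x1
    (Liang 2014), using only sample covariances.  A clearly nonzero value
    suggests x2 is causal to x1; swapping the arguments estimates T_{1->2}."""
    x1, x2 = np.asarray(x1, float), np.asarray(x2, float)
    dx1 = (x1[1:] - x1[:-1]) / dt        # Euler-forward derivative of x1
    a, b = x1[:-1], x2[:-1]
    C = np.cov(a, b)                     # covariance matrix of the two series
    C11, C12, C22 = C[0, 0], C[0, 1], C[1, 1]
    C1d1 = np.cov(a, dx1)[0, 1]          # cov(x1, dx1/dt)
    C2d1 = np.cov(b, dx1)[0, 1]          # cov(x2, dx1/dt)
    num = C11 * C12 * C2d1 - C12 ** 2 * C1d1
    den = C11 ** 2 * C22 - C11 * C12 ** 2
    return num / den

# Toy linear system in which x2 drives x1 but not vice versa.
rng = np.random.default_rng(1)
n = 20000
x1, x2 = np.zeros(n), np.zeros(n)
for t in range(n - 1):
    x2[t + 1] = 0.6 * x2[t] + rng.standard_normal()
    x1[t + 1] = 0.5 * x1[t] + 0.7 * x2[t] + rng.standard_normal()

T21 = liang_information_flow(x1, x2)   # expected clearly nonzero
T12 = liang_information_flow(x2, x1)   # expected near zero (no feedback)
```

Unlike the Granger test, the returned value is a quantitative rate (information per unit time), so causal effects of several candidate pairs can be compared by magnitude, which is the property the fault diagnosis method in this paper relies on.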
The control action relationship also implies a clear causal transfer. At the physical level, the control target is applied to the controller; the control instruction is computed by the control-law algorithm and sent to the actuator; the actuator generates a physical action according to the received instruction, which acts on the controlled object to change its state. Each equation represents a set of variables with coupling relationships and coupling information; finding the variable pairs within these sets and exploring the causal relationships between variables from the equation structure is the focus of this paper.

Starting from the causal model of the control system, the system behavior can be deduced from the system components, component attributes, and the connections between components, and the component behavior and the state changes of the whole system can be observed through the causal relationships. The use of causality for the fault diagnosis analysis of accidents has become a popular research topic. There are many forms of cause-effect relationship expression; among them, the cause-effect diagram describes causality graphically, which is intuitive and represents the cause-effect relationships clearly. Establishing a cause-effect diagram model can express faults, symptoms, and their interrelationships, structuring the expression of knowledge; compared with other models it more easily expresses complex knowledge and improves the efficiency of reasoning and diagnosis. Previous work generally obtains the causal pairs between physical quantities from a qualitative point of view, through physical and expert knowledge, constructing the causal relationships of the system's normal state and of accident conditions to build a cause-effect diagram model. If the system is too complex, insufficient expert knowledge leads to missing, redundant, or unexplained connections for some causal pairs, and the manual construction of causal diagrams inevitably produces duplications and omissions.

To solve the above problems, and considering the completeness of causal-pair composition, this paper proposes a systematic method of constructing causal models from both qualitative and quantitative perspectives: extracting the control equations using the control laws and transfer relations of the control system, dividing the equations into differential equations and algebraic equations, classifying the equation variables, constructing causal graph models from the perspective of the equation structure, and designing an algorithm to automate the generation of the causal graph models. This differs from methods that construct SDG graphs from control equations, which use positive and negative threshold balances of nodes and edges to describe the compatible pathways of fault propagation and thereby reason about fault sources, but which face the problems of compensating responses and inverse responses. In the causal graph model, this paper starts from the information-flow concept of causality and is able to calculate the stability between causal pairs, quantify the causal effect, and compare the magnitudes of causality between uncertain nodes to determine the fault source step by step, thus overcoming some of the drawbacks faced by SDG-based fault diagnosis. The contributions of this paper are: (1) a method for constructing causal graphs from control equations is proposed; (2) information flow is introduced into the causal model and the causal effects are quantified;
(3) a causal diagram model of the pressure control system of a regulator is constructed and an application to fault diagnosis is carried out. This work overcomes the limitations of previous approaches that find causal pairs from a priori knowledge to construct causal diagram models: it establishes causal diagrams directly from the control equations and then calculates the causal effects through information flow, making full use of quantitative, monitorable historical data to improve the accuracy and efficiency of the fault tracing process. The structure of the paper is as follows: Sect. 2 introduces causal graph modeling and related fault diagnosis work; Sect. 3 details the method of constructing causal graph models based on control equations; Sect. 4 describes the causal information-flow graph-based diagnosis method and applies it, with simulation validation, to the pressure control system of the regulator of a nuclear pressurized water reactor; Sect. 5 extracts data from the simulation software for fault diagnosis; and the final section concludes the paper.
2 Related Work Causality analysis is an important monitoring activity and has become an important research topic in industry [1]. Causal analysis can provide operators with the knowledge and process insight to properly and effectively prevent failures in systems. Traditional methods of causality analysis are based on planned or randomised experiments; however, this is not feasible in complex systems due to the large number of variables and safety rules [2], hence the need for causal modelling of the system. Causal modelling approaches can be divided into knowledge-based and data-based approaches [3]. Maurya et al. proposed knowledge-based modelling approaches based on mathematical equations of the system [4–6] and on system diagrams [7, 8]. Giltal [9] developed a topology-based approach to process connectivity based on the physical description of the system. There are several causal modelling techniques for constructing fault trees from process knowledge; they are called model-based dependability analysis (MBDA) techniques and rely on the discovery of dependencies from existing system models [10–12]. Leitner-Fischer and Leue [13] developed an alternative approach that generates fault trees representing causal relationships; it extracts counterexamples from the system model and uses a number of test conditions to discover combinations of events that lead to different system states. As these methods rely on the existence of accurate and representative models, which are extremely difficult to obtain in large-scale systems, they require extensive expert effort and a very deep understanding of the process [14]. It is therefore important to use data-driven techniques to learn causal models from historical data. Different techniques have been used to exploit historical data in the form of time-series observations in order to discover causal relationships between variables. Among the well-known data-driven techniques is Granger causality [15].
Granger causality is based on vector autoregressive (VAR) models, a class of linear regression models. Other approaches use data to generate graphical causal models, the most common of which are causal Bayesian networks (CBNs), which have been widely used [16, 17]. Among the information-theoretic approaches, information flow is a particularly important tool for measuring causality. The Liang-Kleeman information flow is widely used in causal analysis to quantify the causal effect between two time series from the amount of information passed between them per unit of time, without requiring any prior knowledge [18, 19]. Stips et al. used Liang-Kleeman information flow to find a very clear causal relationship between CO2 and global warming [20]. Xiang-San Liang's team found a strong causal relationship between the Indian Ocean and El Niño in the Pacific Ocean, as shown by the transmission of uncertainty from the Indian Ocean to the Pacific Ocean [21]. Lu Yan et al. used information flow theory to analyse the causal effect between precipitation and landslide/mudslide hazards, and the method showed a good improvement in the accuracy of predicting high landslide/mudslide hazard [22]. The Liang-Kleeman information flow method is used to determine whether the marginal entropy of one variable is influenced by other variables, which is consistent with the results of Granger causality analysis and serves as mutual verification [23].

The scope of control science is to discover quantitative descriptions and representations of controlled objects, environmental factors, and task objectives; to establish their correlations, influence mechanisms, and evaluation indicators; and then to determine and exert control to enable controlled objects to achieve their task objectives in their environment [24]. Some scholars have discussed the methodological issues of control from the standpoint of causality and proposed a causal control method based on the "cause-body-effect" relationship, which clearly describes the relationship between control theory and cause and effect and provides a theoretical basis for exploring causal relationships from the control equations to generate causal diagrams [25]. The Liang-Kleeman information flow is used to calculate the causal effects between causal pairs and to predict faults in a timely manner through changes in the information flow between characteristic variables, which is an effective new accident prediction method.
3 Causal Graph Modeling Based on Control Equations

The relationship between the "input" and "output" of a system emphasized in control theory is essentially a cause-and-effect relationship, and the control of input to output is equivalent to a "cause and effect" process. The causality of input and output is reflected in the variable nodes of the control structure; to perform fault diagnosis of the control system, it is necessary to find the cause-effect pairs and explore the coupling information, i.e., the information flow, between the variable nodes of those pairs.

Variables in the control system can be divided into three types: exogenous (or disturbance) variables, system variables, and output variables. Exogenous variables represent disturbance and fault variables and can change independently. System variables, usually also called state variables, are the variables present in the system components; they affect each other and can be affected by exogenous variables. Output variables represent the output of the system, usually do not affect other variables, and can also be expressed as system variables.

If the mathematical model of the system, i.e., its differential-algebraic equations, is known, the cause-and-effect diagram model can be constructed both qualitatively and quantitatively from the mathematical model. The control laws of an actual industrial control system and the interactions between components constitute a system of state equations; it is only necessary to take the state variables and the input and output variables as variable nodes and to construct the "fruit" (effect) nodes between the variables and equations. It must be recognized that the basic nature of the causal diagram may be completely different depending on whether it is composed of differential equations, algebraic equations, or a system of equations combining the two; the specific methods are described below.
3.1 Control Equations of the System

The control equations of a system can be divided into two main categories: differential equations and algebraic equations. In general systems, the causal relationship within a differential equation is uncertain. In a control system, however, the differential equation represents the control law. An actual system usually expresses the control law by a transfer function, and the input of a transfer function must affect the output, i.e., the input is the cause of the output; converting the transfer function into its general form, a differential-algebraic equation, preserves this rule. The classical PID control law is taken as an example below. The three parts of the control equation are as follows, with the U_I part given as a differential equation:

U_P = K_p e(k)  (1)

dU_I/dt = (K_p / T_I) e(k)  (2)

U_D = K_p T_D [e(k) − e(k − 1)] / T  (3)

Applying the Laplace transform to the differential equation:

L[dU_I(t)/dt] = L[(K_p / T_I) e(k)(t)]  (4)

s U_I(s) = (K_p / T_I) e(k)(s)  (5)

G(s) = U_I(s) / e(k)(s) = (K_p / T_I) / s  (6)

U_I(s) = e(k)(s) G(s)  (7)
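In discrete time, the three laws of Eqs. (1)-(3) can be sketched directly (a minimal illustration with our own function name; the integral law dU_I/dt = (K_p/T_I) e(k) is advanced with a simple Euler step):

```python
def pid_terms(e, Kp, Ti, Td, T):
    """Discrete PID components following Eqs. (1)-(3): proportional U_P,
    integral U_I (Euler integration of dU_I/dt = (Kp/Ti) * e), and
    derivative U_D, for an error sequence e sampled with period T."""
    UP = [Kp * ek for ek in e]                          # Eq. (1)
    UI, acc = [], 0.0
    for ek in e:                                        # Eq. (2), Euler step
        acc += (Kp / Ti) * ek * T
        UI.append(acc)
    UD = [0.0] + [Kp * Td * (e[k] - e[k - 1]) / T       # Eq. (3)
                  for k in range(1, len(e))]
    return UP, UI, UD
```

For a constant error, U_P stays fixed while U_I accumulates linearly and U_D vanishes, matching the familiar roles of the three terms; in particular, the integral term U_I is the only one whose value at step k depends on the whole history of e, which is why it is the part written as a differential equation above.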
G(s) is the transfer function. By the nature of the transfer function, U_I is obtained from e(k) through the transfer function, so e(k) is the cause of U_I, i.e., the variable under the derivative in a differential equation is the fruit (effect). To capture this property, the output of the differential part, as the fruit, is written on the left side of the equation and the inputs, as causes, on the right side, i.e., the causal relationship in the equation runs from right to left. The causal relationships between the variables of a differential equation can therefore be illustrated simply by drawing directed arcs from the variables on the right side of the equation to the variable on the left side.

Due to the nature of algebraic equations, the behavior of a system described by algebraic equations has no clear directionality, i.e., causality cannot be derived from the structure of the equation alone. The variables in an equation are classified into system variables and exogenous variables. By their nature, system variables interact with each other, while exogenous variables only act on system variables; it therefore suffices to explore the relationships among the system variables and to select one system variable in each equation as the fruit node, so that the other variables in that equation point to it. Since the action relationships among the system variables are not known in advance, each system variable is allowed to be a fruit node once, forming one causal graph per assignment; i.e., in a given causal graph, each system variable is the fruit node of its corresponding equation. Usually this approach yields multiple directed-graph results, but not all of them are necessarily valid solutions; the correctness of a result must be judged qualitatively, using knowledge of the origin of the equation nodes, the underlying process, and the physics of the equipment, to discern the valid causal graphs.
3.2 Description of the Algorithm for Constructing Causal Graphs

The analysis in the previous subsection for constructing causal diagrams from differential and algebraic equations is presented as the following algorithm.

Differential part of the algorithm.
Step 1: Input the differential equations.
Step 2: For each equation, extract the system variable of the differential part on the left side, and draw directed arcs from the variables on the right side of the equation toward it (drawing a self-arc if a variable points to itself).
Step 3: After all differential equations have been traversed, this part of the algorithm ends.

Algebraic part of the algorithm.
Initialization: Input all algebraic equations and define the system variables and exogenous variables.
Step 1: Pick one system variable in each equation to form a match with that equation. A variable picked in one equation cannot be picked again in another equation; this yields a number of matching results between equations and system variables.
Research on Causal Model Construction and Fault Diagnosis of Control …
Step 2: In a matching result, for each equation draw directed arcs from the unmatched variables of that equation to the system variable matched to it, until all equations are plotted.
Step 3: The algorithm ends when all matching results have been plotted.

In the algebraic part of the algorithm, matching means picking out a specific system variable; this variable represents the effect, just as the left side of a differential equation does in the differential part. A perfect matching assigns an effect node to every equation using only the node information given by the set of equations, and the directed graph obtained by combining the subgraphs derived from all the equations is the causal graph. The multiple results are not always valid solutions, and the relationships among the system variables must be analyzed in terms of the system's operating principles to distinguish the valid ones.
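The matching procedure of the algebraic part can be illustrated with a brute-force sketch (an illustration, not the authors' implementation): each equation is assigned a distinct system variable as its effect node, and every valid assignment yields one candidate causal graph. The equation and variable names below are hypothetical.

```python
from itertools import permutations

def candidate_causal_graphs(equations, system_vars):
    """equations: dict mapping an equation name to the set of system
    variables appearing in it. Each perfect matching assigns a distinct
    system variable to each equation as its effect node; the other
    variables of the equation then point to that node. Returns one arc set
    (cause -> effect) per valid matching. Brute force, for illustration."""
    eq_names = list(equations)
    graphs = []
    for assignment in permutations(system_vars, len(eq_names)):
        if all(v in equations[eq] for eq, v in zip(eq_names, assignment)):
            arcs = set()
            for eq, effect in zip(eq_names, assignment):
                arcs |= {(cause, effect) for cause in equations[eq] - {effect}}
            graphs.append(arcs)
    return graphs

# toy system of two algebraic equations over three system variables
candidates = candidate_causal_graphs({"e1": {"x", "y"}, "e2": {"y", "z"}},
                                     ["x", "y", "z"])
print(len(candidates))  # → 3 candidate causal graphs
```

As the text notes, the enumeration produces several candidate graphs; which of them is physically valid must still be decided from knowledge of the underlying process.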
4 Diagnostic Methods Based on Control Equations

4.1 Diagnostic Method Based on Causal Information Flow Diagram

The structure of the equations representing the system process is disrupted when a fault occurs, and the stability of a causal pair with a causal relationship then differs significantly from its stability under normal operating conditions. If this stability is quantified, the source of the fault can be located from this difference. The quantification used here is the information flow, which involves only the calculation of covariances and provides a mathematically and physically rigorous measure of the causal relationship between two related events. For time series X1 and X2, the information flow from X2 to X1 is

T_{2→1} = (C11 C12 C2,d1 − C12^2 C1,d1) / (C11^2 C22 − C11 C12^2)    (8)
where Cij is the sample covariance between Xi and Xj, and Ci,dj is the covariance between Xi and the new series formed by the forward difference of Xj; the unit of the calculated result is information transfer per unit time (nats). Strictly speaking, the information flow obtained from the above formula is an estimate of the true information flow. According to the principle of information flow, in steady state the information flow remains within a very small range of fluctuation, while an accident disrupts the steady state and produces a jump in the value of the information flow; the time at which this jump occurs is the time at which the failure occurs.
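Equation (8) can be computed directly from two sample series. The sketch below follows the formula term by term; using the forward difference of X1 as the estimate of its time derivative is an assumption of this sketch, and all numeric data are synthetic.

```python
import numpy as np

def liang_info_flow(x1, x2, dt=1.0):
    """Estimate T_{2->1} (nats per unit time) from Eq. (8). The forward
    difference of X1 is used to estimate its derivative, an assumption of
    this sketch rather than a prescription from the text."""
    x1 = np.asarray(x1, dtype=float)
    x2 = np.asarray(x2, dtype=float)
    dx1 = (x1[1:] - x1[:-1]) / dt          # forward-difference series of X1
    x1t, x2t = x1[:-1], x2[:-1]            # align with the difference series
    C = np.cov(x1t, x2t)                   # sample covariances C_ij
    c11, c12, c22 = C[0, 0], C[0, 1], C[1, 1]
    c1d1 = np.cov(x1t, dx1)[0, 1]          # C_{1,d1}
    c2d1 = np.cov(x2t, dx1)[0, 1]          # C_{2,d1}
    num = c11 * c12 * c2d1 - c12 ** 2 * c1d1
    den = c11 ** 2 * c22 - c11 * c12 ** 2
    return num / den

# toy example: X2 is an AR(1) process that drives X1
rng = np.random.default_rng(0)
x1, x2, eps = np.zeros(2000), np.zeros(2000), rng.normal(size=2000)
for k in range(1999):
    x2[k + 1] = 0.7 * x2[k] + eps[k]
    x1[k + 1] = 0.9 * x1[k] + 0.5 * x2[k]
print(liang_info_flow(x1, x2), liang_info_flow(x2, x1))
```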
After the causal diagram model of the system is obtained, the physical variables with a causal effect are selected as the objects of study. The Liang-Kleeman information flow calculation then gives the size of the information flow between two time series over a given period; the jump point of the causal effect is located, the occurrence and time of the accident are judged from the jump, and a prediction and alarm for the accident can then be issued.
4.2 Constructing the Cause-and-Effect Diagram Model for the Regulator Pressure Control System

To verify the correctness of the method, the regulator pressure control system of a nuclear power plant was selected for simulation verification. The principle of the regulator pressure control system is shown in Fig. 1. The system is a single-parameter, multi-channel regulation system that regulates pressure by controlling a proportional heater, an on-off heater and a spray valve. The regulator pressure is set at 15.4 MPa; the input of the PID controller is the deviation between the pressure measurement and the set value, and the output is used as a compensating differential pressure to control each actuator. When the compensating differential pressure reaches the start-up threshold of an actuator, pressure balance is achieved by proportionally adjusting the pressurization or depressurization effect of that actuator.

The simplified system control structure is shown in Fig. 2. Here P is the pressure magnitude, P_ref is the pressure setpoint, and e is the deviation between the pressure and the setpoint, which is input to the PID controller whose output controls the actuators; V_o is the on-off heater, V_h is the proportional electric heater, V_w is the spray valve controller, and P_x is the system operating loss pressure.

The following system of control equations is constructed for the regulator pressure control system. Assume that the compensating differential pressure output by the PID controller is U, the spray system reduces the pressure per unit time by P_w, the proportional heater increases the pressure per unit time by P_h, and the on-off electric heater increases the pressure per unit time by P_o. The proportional adjustment of the spray

Fig. 1 Principle of pressure control system of regulator
Fig. 2 Pressure control system structure diagram of the regulator
valve opening is V_w, the proportional adjustment of the proportional electric heater is V_h, and the proportional adjustment of the on-off electric heater is V_o. The system of state equations of the regulator pressure control system is written according to the control law and the principle of system operation:

P(k) − P_ref = e(k)                                    (a)

U_P = K_p e(k)                                         (b)

U_I = K_p (T/T_I) Σ_{j=0..k} e(j),  i.e.  dU_I/dt = (K_p/T_I) e    (c)

U_D = K_p T_D [e(k) − e(k − 1)] / T                    (d)

U(k) = U_P + U_I + U_D                                 (e)

V_w(k)                                                 (f)

V_h(k)                                                 (g)

V_o(k)                                                 (h)

P(k + 1) = V_w(k) P_w T + V_h(k) P_h T + V_o(k) P_o T + P(k) + P_x    (i)
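The control law above can be simulated discretely. Since the actuator laws (f)-(h) are not stated explicitly here, the sketch below assumes simple saturating proportional laws and an on-off threshold; all gains and unit pressure values are illustrative, with P_w and P_x taken negative because the spray and the operating loss reduce pressure.

```python
def clip01(x):
    """Saturate a value to the interval [0, 1] (actuator opening)."""
    return max(0.0, min(1.0, x))

def simulate_pressure_loop(steps=600, T=1.0, Kp=2.0, Ti=50.0, Td=1.0,
                           P0=15.0, Pref=15.4,
                           Pw=-0.02, Ph=0.01, Po=0.005, Px=-0.001):
    """Discrete simulation of equations (a)-(e) and (i). Pw and Px are
    negative because spray and operating loss reduce pressure; all numeric
    values are illustrative, not taken from the paper."""
    P, e_sum, e_prev = P0, 0.0, P0 - Pref
    trace = []
    for _ in range(steps):
        e = P - Pref                                   # (a)
        e_sum += e
        UP = Kp * e                                    # (b)
        UI = Kp * (T / Ti) * e_sum                     # (c), discrete form
        UD = Kp * Td * (e - e_prev) / T                # (d)
        U = UP + UI + UD                               # (e)
        e_prev = e
        # (f)-(h): assumed actuator laws (not given explicitly in the text)
        Vw = clip01(U)             # spray valve opens when pressure is high
        Vh = clip01(-U)            # proportional heater opens when pressure is low
        Vo = 1.0 if U < -1.0 else 0.0   # on-off heater past an assumed threshold
        P = Vw * Pw * T + Vh * Ph * T + Vo * Po * T + P + Px   # (i)
        trace.append(P)
    return trace

trace = simulate_pressure_loop()
print(round(trace[-1], 3))   # settles near the 15.4 MPa setpoint
```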
Here P(k) is the pressure magnitude at sampling point k within the regulator, P_ref is the regulator pressure setpoint, P_x is the system operating loss pressure, e(k) is the input to the PID controller, U_P, U_I and U_D are the proportional, integral and differential parts of the PID controller respectively, and T is the sampling period. The product of the actuator opening obtained at a sampling point, the unit pressure value provided by that actuator, and the period is the pressure contribution of the actuator at that sampling point; a discrete processing method is used here, which requires a small sampling period.

The set of control state equations consists of algebraic and differential equations side by side. The differential part is equation (c), i.e.

U_I = K_p (T/T_I) Σ_{j=0..k} e(j),  i.e.  dU_I/dt = (K_p/T_I) e    (c)

The causal diagram of the differential part, drawn according to the algorithm, is shown in Fig. 3.

Fig. 3 Differential partial cause and effect diagram

The algebraic part consists of equations (a), (b), (d), (e), (f), (g), (h) and (i), which are matched directly to obtain a variety of results, some of which are shown in Table 1. A cause-and-effect diagram is then built for each perfect-matching result through the algorithm. After analysis of the system structure, only the cause-and-effect diagram corresponding to perfect match 1 conforms to the control laws of the system; the resulting cause-and-effect diagram model of the regulator pressure control system is shown in Fig. 4. In the physical sense, this event state-transfer process coincides with the actual information flow. The causal diagram combining the differential and algebraic parts is shown in Fig. 5. Compared with the control system structure diagram, the relationships between nodes are basically consistent and conform to the physical operation law.

Table 1 Perfect match results
Equations   Perfect match 1   Perfect match 2   Perfect match 3
(a)         e(k)              P(k)              P(k)
(b)         U_P               e(k)              U_P
(d)         U_D               U_D               e(k)
(e)         U(k)              U_P               U_D
(f)         V_w(k)            U(k)              U(k)
(g)         V_h(k)            V_h(k)            V_h(k)
(h)         V_o(k)            V_o(k)            V_o(k)
(i)         P(k)              V_w(k)            V_w(k)
Fig. 4 Causal diagram of the algebraic part
Fig. 5 Cause-and-effect diagram of the pressure control system of the regulator
4.3 Simulation Experiment Verification

To verify the effectiveness of the fault diagnosis method, the causal pair obtained from the cause-and-effect diagram, heater power and pressure, is studied. The experiment uses PCTran AP1000 to simulate a nuclear power system operating in real time under normal conditions, and simulates in software the change of heater power and pressure from start-up to pressure equilibrium under normal operating conditions of the regulator pressure control system. A 10% valve-opening disturbance is added to the spray valve at 600 s; the pressure changes from a steady to a decreasing trend, and the resulting deviation causes the control system to send a regulating signal that raises the heater power. For the causal pair of regulator pressure and heater power, the data from PCTran are exported to MATLAB to calculate the information flow between the two; the results are shown in Fig. 6. The figure shows that the information flow between causal pairs with a causal effect tends to be smooth under a constant working condition, and changes to another smooth state when the working condition changes.
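The jump in the information-flow series that marks the fault time can be detected with a simple change-point heuristic (a sketch, not the paper's procedure): flag the first sample that deviates from the running baseline by more than a chosen number of standard deviations. The data below are synthetic.

```python
import numpy as np

def detect_jump(flow_series, threshold=6.0, min_baseline=20):
    """Return the first index where the information-flow series deviates
    from its running baseline by more than `threshold` standard deviations
    (a simple change-point heuristic for locating the fault time)."""
    flow = np.asarray(flow_series, dtype=float)
    for k in range(min_baseline, len(flow)):
        base = flow[:k]
        scale = float(np.std(base)) or 1e-12   # guard a constant baseline
        if abs(flow[k] - np.mean(base)) > threshold * scale:
            return k
    return None

# synthetic flow: steady fluctuation, then a jump at index 60
rng = np.random.default_rng(1)
flow = np.concatenate([rng.normal(0.0, 0.01, 60), np.full(40, 0.3)])
print(detect_jump(flow))  # → 60
```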
Fig. 6 Changes of the pressure and heater power information flow under a 10% spray valve disturbance
5 Simulation Experiments

There are many kinds of faults in the primary loop of a nuclear reactor, and when a fault change occurs in a key variable, the source of the fault must be determined in a timely and efficient manner. A steam generator water level fault can be caused by a variety of reasons, and once the threshold value is exceeded it can cause serious accidents such as shutdown. The SG water level (NSGA) in the steam generator water level control system is therefore selected as the object of study for the experimental analysis in this paper. In the PCTran AP1000 simulation system, a feedwater pipe rupture fault is injected at 200 s.

First, the cause-and-effect diagram model of the steam generator water level control system is constructed according to the method in Sect. 3, and the possible causes of failure are added to construct the failure model shown in Fig. 7. Starting from the symptom node NSGA, the possible fault sources are traced on the fault model. The fault change in NSGA may be caused by three fault sources, MSLB, SGTR and feedwater pipe rupture, and the computable key causal pairs on the paths from the three fault sources to the symptom node are identified respectively:

(1) The SGTR fault source connects only to the single variable node NSGA, so there is no computable information-flow pair.
(2) The feedwater pipe rupture fault source path has the key causal pair WFWA → NSGA.
(3) The MSLB fault source path has the key causal pair Flow Steam → NSGA.

The key causal pairs are calculated separately and the results are shown in Fig. 8. As can be seen from the figure, the information flow of the key causal pair WFWA → NSGA caused by the real fault source jumped sharply at the moment of the 200 s
Fig. 7 Causal diagram of steam generator water level control system
Fig. 8 Key causal pair information flow size
fault; after the control feedback adjustment the action between the causal pair returned to balance, and the information flow tended to a new stable state. The information flow of Flow Steam → NSGA kept changing but showed no sharp jump at the fault point, indicating that MSLB is not the real source of the fault. It can therefore be judged that the fault was caused by the rupture of the feedwater pipe.
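The tracing of candidate fault sources from the symptom node can be sketched as a reverse traversal of the causal graph. The node names below come from the text, but the arc list is a hypothetical fragment of the fault model, not the paper's full Fig. 7.

```python
def trace_sources(arcs, symptom):
    """Reverse-traverse a causal graph, given as (cause, effect) arcs, from
    the symptom node; root nodes reached upstream are the candidate fault
    sources."""
    parents = {}
    for cause, effect in arcs:
        parents.setdefault(effect, []).append(cause)
    sources, stack, seen = [], [symptom], {symptom}
    while stack:
        node = stack.pop()
        for p in parents.get(node, []):
            if p not in seen:
                seen.add(p)
                stack.append(p)
        if node != symptom and node not in parents:
            sources.append(node)   # no parents: a candidate fault source
    return sorted(sources)

# hypothetical fragment of the SG water-level fault model
arcs = [("FeedWater pipe rupture", "WFWA"), ("WFWA", "NSGA"),
        ("MSLB", "Flow Steam"), ("Flow Steam", "NSGA"),
        ("SGTR", "NSGA")]
print(trace_sources(arcs, "NSGA"))  # → ['FeedWater pipe rupture', 'MSLB', 'SGTR']
```

The arcs entering the traversed path (here WFWA → NSGA and Flow Steam → NSGA) are the computable causal pairs on which the information flow is then evaluated.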
6 Summary

Complex control systems are characterized by coherence, hierarchy, fault propagation, radioactivity, delay, uncertainty, etc., and fault diagnosis of complex systems has long been a research focus of scholars at home and abroad. Among graph-theoretic methods, causal graph model fault diagnosis has the advantages of intuitive expression, no reliance on exact models or historical data, and strong interpretability of inference results, and it has received the attention of fault diagnosis researchers in several fields. In this paper, we explain the control system from the causal point of view, analyze the structure of the control equations, classify the variable nodes, and establish an algorithm to build a causal graph model. We then use the information flow to characterize the causal effect between causal pairs and compare the magnitudes of the causal effects by calculating the information flow, so as to identify and diagnose system faults. The method is validated by modeling the pressure control system of a nuclear pressurized water reactor, and the results accord with the physical operation law of the system.

Acknowledgements The content of this article is the result of a project of the Hunan Provincial Education Department (22C0223), and is funded by the Science and Technology on Reactor System Design Technology Laboratory open project KFKT-24-2021006.
References

1. Yang, F., Duan, P., Shah, S.L., et al.: Capturing Connectivity and Causality in Complex Industrial Processes. Springer Science & Business Media, Cham (2014)
2. Spirtes, P.: Introduction to causal inference. J. Mach. Learn. Res. 11(5) (2010)
3. Chiang, L.H., Braatz, R.D.: Process monitoring using causal map and multivariate statistics: fault detection and identification. Chemom. Intell. Lab. Syst. 65(2), 159–178 (2003)
4. Maurya, M.R., Rengaswamy, R., Venkatasubramanian, V.: A systematic framework for the development and analysis of signed digraphs for chemical processes. 1. Algorithms and analysis. Ind. Eng. Chem. Res. 42(20), 4789–4810 (2003)
5. Maurya, M.R., Rengaswamy, R., Venkatasubramanian, V.: Application of signed digraph-based analysis for fault diagnosis of chemical process flowsheets. Eng. Appl. Artif. Intell. 17(5), 501–518 (2004)
6. Maurya, M.R., Rengaswamy, R., Venkatasubramanian, V.: A signed directed graph-based systematic framework for steady-state malfunction diagnosis inside control loops. Chem. Eng. Sci. 61(6), 1790–1810 (2006)
7. Thambirajah, J., Benabbas, L., Bauer, M., et al.: Cause and effect analysis in chemical processes utilizing plant connectivity information. Presented at the IEEE Advanced Process Control Applications for Industry Workshop, pp. 14–16, May 2007
8. Thambirajah, J., Benabbas, L., Bauer, M., et al.: Cause-and-effect analysis in chemical processes utilizing XML, plant connectivity and quantitative process history. Comput. Chem. Eng. 33(2), 503–512 (2009)
9. Gil, G.J.D.G., Alabi, D.B., Iyun, O.E., et al.: Merging process models and plant topology. In: 2011 International Symposium on Advanced Control of Industrial Processes (ADCONIP), pp. 15–21. IEEE (2011)
10. Aizpurua, J.I., Muxika, E.: Model-based design of dependable systems: limitations and evolution of analysis and verification approaches. Int. J. Adv. Secur. 6(1) (2013)
11. Sharvia, S., Kabir, S., Walker, M., et al.: Model-based dependability analysis: state-of-the-art, challenges, and future outlook. Softw. Qual. Assur., 251–278 (2016)
12. Kabir, S.: An overview of fault tree analysis and its application in model based dependability analysis. Expert Syst. Appl. 77, 114–135 (2017)
13. Leitner-Fischer, F., Leue, S.: Probabilistic fault tree synthesis using causality computation (2013)
14. Ge, Z., Song, Z., Ding, S.X., et al.: Data mining and analytics in the process industry: the role of machine learning. IEEE Access 5, 20590–20616 (2017)
15. Granger, C.W.J.: Investigating causal relations by econometric models and cross-spectral methods. Econometrica, 424–438 (1969)
16. Neapolitan, R.E.: Learning Bayesian Networks, vol. 38. Pearson Prentice Hall, Upper Saddle River (2004)
17. Pearl, J.: Causality: Models, Reasoning, and Inference, 2nd edn. Cambridge University Press, New York (2009)
18. Liang, X.S.: Normalizing the causality between time series. Phys. Rev. E 92(2), 022126 (2015)
19. Liang, X.S.: Information flow and causality as rigorous notions ab initio. Phys. Rev. E 94(5), 052201 (2016)
20. Stips, A., Macias, D., Coughlan, C., et al.: On the causal structure between CO2 and global temperature. Sci. Rep. 6(1), 21691 (2016)
21. Liang, X.S.: Unraveling the cause-effect relation between time series. Phys. Rev. E 90(5), 052150 (2014)
22. Lu, Y., Xie, T., Xu, F., et al.: Causal analysis of rainfall indicators and landslide debris flow based on information flow theory. J. Nat. Hazards 28(04), 196–201 (2019)
23. Tian, H.B.: Application of ALFF and brain connectivity methods in brain imaging data analysis. Hunan Normal University (2018)
24. Guo, L.: Some thoughts on the development of control theory. Syst. Sci. Math. 31(9), 1014–1018 (2011)
25. Guo, L., Wang, C.H., Wang, Y.: A preliminary investigation of causality and causal control. Control Decis. Mak. 33(5), 835–840 (2018)
Research on Fault Diagnosis Method Based on Structural Causal Model in Tennessee Eastman Process

Haoyuan Pu, Jie Liu, Zhi Chen, Xiaohua Yang, Changan Ren, Zhuoran Xu, and Yifan Jian
Abstract In the practical application of fault diagnosis in large-scale chemical systems, the machine learning diagnosis methods used in current research often face a scarcity of fault samples because faults are dangerous. This makes it difficult to train the model effectively and thus lowers its fault detection rate. For the fault diagnosis of chemical systems with small sample data, this paper therefore proposes a diagnostic method that uses a structural causal model to combine the diagnostic results of two machine learning methods, thereby improving global diagnostic performance. With the powerful system-description ability of the structural causal model, a causal graph can be established for the diagnostic process of the chemical system, and the corresponding structural equations can be constructed to combine the various diagnostic results of the two machine learning methods, thus improving the fault detection rate. The global results make up for the shortcomings of a single machine learning method in small-sample chemical system fault diagnosis. Verification on the Tennessee Eastman process, a chemical system simulation platform, shows that compared with the diagnosis results of the selected Gaussian Naive Bayes method and K-nearest neighbour method, the results obtained by our method effectively improve the fault detection rate of each fault in the TEP.
H. Pu · J. Liu (B) · X. Yang · C. Ren · Z. Xu School of Computer Science, University of South China, Hengyang, China e-mail: [email protected] Z. Chen · Y. Jian Science and Technology on Reactor System Design Technology Laboratory, Nuclear Power Institute of China, Chengdu, China J. Liu Intelligent Equipment Software Evaluation Engineering Technology Research Center of Hunan, Hengyang, China CNNC Key Laboratory on High Trusted Computing, Hengyang, China C. Ren School of Nuclear Science and Technology, University of South China, Hengyang, China © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 Y. S. Shmaliy and A. Nayyar (eds.), 7th International Conference on Computing, Control and Industrial Engineering (CCIE 2023), Lecture Notes in Electrical Engineering 1047, https://doi.org/10.1007/978-981-99-2730-2_79
Keywords Tennessee Eastman process · Small sample · Fault diagnosis · Structural causal model
1 Introduction

Because a large-scale chemical system usually produces dangerous products and its process is complex and changeable, fault diagnosis in this field has always been a research hotspot. In recent years, owing to the high classification accuracy of machine learning in this field, research on fault diagnosis of chemical systems has mostly been based on machine learning methods, and researchers have made various improvements to their problems. For example, the DCNN fault diagnosis model in reference [1], trained on a large amount of data, has a good fault detection rate (FDR); references [2] and [3] proposed distributed Bayesian network diagnosis models for the problem of difficult system modeling; references [4] and [5] proposed a feature selection algorithm based on non-linear SVM and an EDBN fault diagnosis model, respectively, to address the loss of information caused by feature dimensionality reduction; reference [6] uses PSO-SVM-RFE feature screening to remove redundant features against noise interference in the data; reference [7] proposed an attribute transfer method that uses expert knowledge to artificially define fault descriptions for zero-sample fault diagnosis, in view of the difficulty of obtaining fault samples; and reference [8] proposed a process-topology CNN diagnosis model to address the overfitting problem in current machine learning fault diagnosis.

The above research has not completely solved the problem that fault samples are difficult to obtain in practical applications. To improve classification accuracy, machine learning methods need a large number of fault samples for training; however, because faults in chemical systems are dangerous, fault samples are very scarce. A diagnosis model lacking training samples then struggles to diagnose all types of faults effectively, reducing the FDR.
Although a zero-sample diagnosis method is proposed in [7], it depends on expert knowledge to artificially define fault information. It is therefore necessary to solve the problem of FDR reduction caused by the lack of fault samples in machine learning methods for chemical systems, without relying on expert knowledge. In this paper, taking the chemical system Tennessee Eastman process (TEP) as an example, a structural causal model (SCM) is introduced to combine the diagnosis results of two machine learning methods, so as to achieve fault diagnosis with small samples. The SCM was proposed by Pearl and is mainly composed of a directed acyclic graph and structural equations. Because of its powerful representation ability, SCM has been applied in many classification studies to improve accuracy. For example, reference [9] uses SCM to describe risk factors, disease and symptom states and their relationships in medical diagnosis; references [10] and [11] use SCM to describe the states of pictures, questions and
answers, and their relationships, in SGG and VQA; reference [12] used SCM to build a causal model of image, context and labels in WSSS; and reference [13] used SCM to describe the molecular mechanism of action in the study of viral pathogenesis. In the TEP, SCM can use the causal graph to describe graphically the state of each variable and the causal relationships between variables in the diagnosis process, and can build structural equations that combine the diagnosis results according to the strengths and weaknesses of each method's results, thus improving the FDR.

The contribution of this paper is a method that uses SCM to combine the diagnosis results of two machine learning methods, thereby improving the overall FDR and making up for the shortcomings that a single machine learning method shows in the fault diagnosis of small-sample chemical systems due to the lack of fault samples; the method does not rely on expert knowledge. Experimental results show that compared with the selected Gaussian Naive Bayes method and K-nearest neighbor method, the comprehensive results obtained by this method significantly improve the FDR of each fault in the TEP.

The structure of this paper is as follows: Sect. 2 introduces the steps of using our method in the TEP in detail; Sect. 3 takes the Gaussian Naive Bayes method and the K-nearest neighbor method as examples to verify, through TEP experiments, the FDR performance of this method on each fault compared with the above two methods; Sect. 4 summarizes our work.
2 SCM Fault Diagnosis Method

The diagnosis process is shown in Fig. 1. The steps are as follows:

(1) Select the training data set data_train and test data set data_test, composed of the fault set F and the related parameter set V in the TEP;
(2) Use data_train to train the two machine learning diagnosis methods C1, C2, and input data_test to get the diagnosis results R1, R2;
(3) Analyze the performance of R1, R2 in terms of FDR;
(4) Process R1, R2 to obtain the corresponding decision sets D1, D2;
(5) Construct the causal graph according to the diagnosis process, and establish the structural equations based on the analysis results of step (4);
(6) Input the decision sets D1, D2 to get the final decision set D_SCM.
Fig. 1 SCM fault diagnosis flowchart in TEP
2.1 Preliminary Fault Diagnosis

The preliminary fault diagnosis corresponds to steps 1–3. The main purpose of this stage is to obtain the decision sets D1, D2 of the diagnosis methods C1, C2 on the fault set F of the TEP, to analyze their FDR performance, and to use them as the input of the subsequent SCM fault diagnosis model.

Selected Faults and Related Parameters of the TEP. The TEP consists of five main units: reactor, condenser, vapor-liquid separator, stripper, and compressor. The process generates two liquid products G and H from four gaseous reactants A, C, D, and E, and also produces inert B and by-product F. Details can be found in reference [14]. The flow chart of the system is shown in Fig. 2. In this paper, faults 1, 2, 5, 7–9, 12, and 14 in Table 1 are selected as the diagnostic objects of the model, where faults 1, 2, 5 and 7 are of step type, faults 8, 9 and 12 are of random-variation type, and fault 14 is of sticking type. The list of relevant parameters is shown in Table 2; the process variables in Table 2, other than the XMV(12) agitator speed, are selected as monitoring objects.

Fault Diagnosis and Result Analysis. Record faults 1, 2, 5, 7–9, 12, 14 as F = {f1, f2, f3, f4, f5, f6, f7, f8}. It is assumed that the TEP system may have multiple faults at the same time and that the faults are independent of each other. In steps (1) and (2), input data_train to the selected methods C1, C2, and input data_test to get the corresponding result sets R1, R2. Assume that for a sample X, C1 and C2 will each diagnose only a single fault at a time.
Fig. 2 System flow graph of the TEP [15]
Table 1 List of selected faults of the TEP

Number    Faults                                          Type
IDV(1)    A/C feed ratio, B composition constant          Step
IDV(2)    B composition, A/C ratio constant               Step
IDV(5)    Condenser cooling water inlet temperature       Step
IDV(7)    C header pressure loss (reduced availability)   Step
IDV(8)    A, B, C feed composition                        Random variation
IDV(9)    D feed temperature                              Random variation
IDV(12)   Condenser cooling water inlet temperature       Random variation
IDV(14)   Reactor cooling water valve                     Sticking
Table 2 List of selected process variables for the TEP

Number        Variables                     Number          Variables
XMEAS(1)      A feed                        XMEAS(19)       Stripper steam flow
XMEAS(2)      D feed                        XMEAS(20)       Compressor work
XMEAS(3)      E feed                        XMEAS(21)       Reactor coolant water outlet temperature
XMEAS(4)      A and C feed                  XMEAS(22)       Separator coolant water outlet temperature
XMEAS(5)      Recycle flow                  XMEAS(23–28)    Component A–F (reactor feed analysis)
XMEAS(6)      Reactor feed rate             XMEAS(29–36)    Component A–H (purge gas analysis)
XMEAS(7)      Reactor pressure              XMEAS(37–41)    Component D–H (product analysis)
XMEAS(8)      Reactor level                 XMV(1)          D feed flow
XMEAS(9)      Reactor temperature           XMV(2)          E feed flow
XMEAS(10)     Purge rate                    XMV(3)          A feed flow
XMEAS(11)     Separator temperature         XMV(4)          A and C feed flow
XMEAS(12)     Separator level               XMV(5)          Compressor recycle valve
XMEAS(13)     Separator pressure            XMV(6)          Purge valve
XMEAS(14)     Separator underflow           XMV(7)          Separator pot liquid flow
XMEAS(15)     Stripper level                XMV(8)          Stripper liquid product flow
XMEAS(16)     Stripper pressure             XMV(9)          Stripper steam valve
XMEAS(17)     Stripper underflow            XMV(10)         Reactor coolant water flow
XMEAS(18)     Stripper temperature          XMV(11)         Condenser coolant water flow
SCM is used to combine the diagnosis results R1, R2 of methods C1, C2 to improve overall diagnostic performance. It is therefore necessary to analyze the differences between the diagnosis results R1, R2 and the actual categories, and to calculate the FDR of each fault fi and the AFDR of each method, which helps in combining the advantages of each method. Both are defined as follows.

Definition 1 (Fault detection rate of fi, FDR(fi)): The fault detection rate of fault fi is the proportion of correctly diagnosed samples among the samples whose actual fault is fi:

FDR(fi) = (number of correctly diagnosed samples with fault fi) / (number of samples whose actual fault is fi) × 100%

Definition 2 (Average fault detection rate, AFDR): The average fault detection rate is the proportion of correctly diagnosed samples among all samples:

AFDR = (number of correctly diagnosed samples) / (number of all samples) × 100%
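Definitions 1 and 2 translate directly into code; a minimal sketch with hypothetical fault labels:

```python
def fdr_per_fault(y_true, y_pred):
    """Compute FDR(f_i) per Definition 1 and AFDR per Definition 2 from
    lists of actual and diagnosed fault labels."""
    fdr = {}
    for f in sorted(set(y_true)):
        idx = [i for i, t in enumerate(y_true) if t == f]
        correct = sum(1 for i in idx if y_pred[i] == f)
        fdr[f] = 100.0 * correct / len(idx)
    afdr = 100.0 * sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    return fdr, afdr

# hypothetical labels for illustration
y_true = ["f1", "f1", "f2", "f2"]
y_pred = ["f1", "f2", "f2", "f2"]
fdr, afdr = fdr_per_fault(y_true, y_pred)
print(fdr, afdr)  # → {'f1': 50.0, 'f2': 100.0} 75.0
```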
The performance of the diagnostic methods C1, C2 on the above two indicators is analyzed, with hypothetical results shown in Tables 3 and 4. For a method Ci and fault fi, the higher FDR(fi) is, the stronger the diagnostic ability of Ci for fi; the higher the AFDR, the stronger the overall fault diagnosis capability of the method. Set a threshold θ_FDR. If the difference in FDR(fi) between methods C1 and C2 is less than θ_FDR, the two have the same diagnostic ability for fi; if one method Ci exceeds the other method Cj by more than θ_FDR, then Ci is better at diagnosing fi. The faults that method C1 is better at diagnosing are denoted F1, the faults that method C2 is better at diagnosing are denoted F2, and the faults that both methods diagnose equally well are denoted F1=2.

Table 3 Analysis results of method C1
        FDR(fi)                                          AFDR
        f1    f2    f3    f4    f5    f6    f7    f8
C1      α1    α2    α3    α4    α5    α6    α7    α8     α9
Table 4 Analysis results of method C2

        FDR(fi)                                          AFDR
        f1    f2    f3    f4    f5    f6    f7    f8
C2      β1    β2    β3    β4    β5    β6    β7    β8     β9
2.2 SCM Fault Diagnosis

An SCM can be defined as M = <G, δ>. The causal graph G is a directed acyclic graph composed of an exogenous variable set U, an endogenous variable set V, and a directed edge set E. Here U represents the external factors that affect the model, V represents the main research objects in the model, and E represents the direct causal relationships. A structural equation δ is a quantitative representation of a direct causal relationship; each child node in the graph corresponds to one structural equation. Our method is divided into the construction of the causal graph and the construction of the structural equations.

Causal Graph. As can be seen from Fig. 1, the causal graph in the SCM is built as follows:

(1) Input the diagnosis result sets R1, R2 obtained by the diagnosis methods C1, C2. They act as exogenous variables, that is, U = {R1, R2}.

(2) Transform R1, R2 into the decision sets D1, D2. D1, D2 are the inputs of the model, responsible for converting the fault category into the corresponding vector form, and belong to the endogenous variable set V. The states of nodes D1, D2 are vectors di = (f1, f2, f3, f4, f5, f6, f7, f8), where a component value of 1 indicates that the corresponding fault occurs and 0 that it does not. From the assumption in Sect. 2.1, at any time only one component is 1 and the remaining components are 0. The states of D1, D2 can be recorded as {d1, d2, d3, d4, d5, d6, d7, d8}, where the subscript i indicates that the i-th component is 1 and the remaining components are 0, e.g. d1 = (1, 0, 0, 0, 0, 0, 0, 0); di,j means that faults fi and fj both occur and the remaining faults do not, and so on. In particular, d0 means that no fault occurs, that is, d0 = (0, 0, 0, 0, 0, 0, 0, 0).
(3) According to the classification results F1=2, F1, F2 of Sect. 2.1, target nodes {O1, O2, O3, O4, O5, O6, O7} are set to combine all decisions. The target variables belong to the endogenous set V, as follows:
➀ Node O1: If the values of D1, D2 both belong to F1=2, the two have equal weight and the result considers that both faults occur, i.e., O1 = D1 ∪ D2. Otherwise, O1 does not combine D1 and D2 and takes the zero vector as its value. The state of node O1 is a combination of the vectors of any two faults in F1=2, or the zero vector d0.
➁ Node O2: If D1 is in F1 and D2 is in F1=2 or F1, then D1 has the higher weight and the result is O2 = D1; otherwise it is the zero vector. The state of node O2 is a vector corresponding to a fault in F1, or the zero vector.
➂ Node O3: If D1 is in F1=2 or F2 and D2 is in F2, then D2 has the higher weight and the result is O3 = D2; otherwise it is the zero vector. The state of O3 is a vector corresponding to a fault in F2, or the zero vector.
H. Pu et al.
➃ Node O4: If D1 is in F1=2 and D2 is in F1, D1 has the higher weight and the result is O4 = D1; otherwise it is the zero vector. The state of O4 is a vector corresponding to a fault in F1=2, or the zero vector.
➄ Node O5: If D1 is in F2 and D2 is in F1=2, D2 has the higher weight and the result is O5 = D2; otherwise it is the zero vector. The state of O5 is a vector corresponding to a fault in F1=2, or the zero vector.
➅ Node O6: If D1 is in F1 and D2 is in F2, the two have equal weight and the result is O6 = D1 ∪ D2; otherwise it is the zero vector. The state of O6 is a combination of two vectors from F1 and F2, or the zero vector.
➆ Node O7: If D1 is in F2 and D2 is in F1, the two have equal weight and the result is O7 = D1 ∪ D2; otherwise it is the zero vector. The state of O7 is a combination of two vectors from F1 and F2, or the zero vector.

(4) From the comprehensive decisions obtained in the previous step, select the global result D_SCM. The state of D_SCM is a combination of the states of nodes O1, O2, O3, O4, O5, O6, O7.

In summary, the endogenous variable set is V = {D1, D2, O1, O2, O3, O4, O5, O6, O7, D_SCM}. From step (1) to step (4), each layer exerts a causal effect on the next in turn, and the resulting causal graph is shown in Fig. 3.

Structural Equations. In causal graph G, the direct causal relationship between nodes is quantified by the structural equations, organized in three layers: the first layer is the causal effect from R1, R2 to D1, D2; the second layer from D1, D2 to {O1, …, O7}; the third layer from {O1, …, O7} to D_SCM. The structural equations are written δ = {δ1, δ2, δ3, δ4, δ5, δ6, δ7, δ8, δ9, δ10} and detailed below.

The First Layer:
Fig. 3 Causal graph G
δ1: D1 = di(R1), i = 1, …, 8    (1)

δ2: D2 = di(R2), i = 1, …, 8    (2)
Here, di(R1) and di(R2) denote the conversion of the diagnostic results obtained by C1, C2 into the corresponding decision vectors di.

The Second Layer:
δ3: O1 = D1 ∨ D2 if (D1 ∈ F1=2, D2 ∈ F1=2); otherwise d0    (3)

δ4: O2 = D1 if (D1 ∈ F1, D2 ∈ F1=2 ∪ F1); otherwise d0    (4)

δ5: O3 = D2 if (D1 ∈ F1=2 ∪ F2, D2 ∈ F2); otherwise d0    (5)

δ6: O4 = D1 if (D1 ∈ F1=2, D2 ∈ F1); otherwise d0    (6)

δ7: O5 = D2 if (D1 ∈ F2, D2 ∈ F1=2); otherwise d0    (7)

δ8: O6 = D1 ∨ D2 if (D1 ∈ F1, D2 ∈ F2); otherwise d0    (8)

δ9: O7 = D1 ∨ D2 if (D1 ∈ F2, D2 ∈ F1); otherwise d0    (9)
The second layer corresponds to the description of the target nodes {O1, …, O7} in the previous section and combines the decision sets D1, D2 to improve the FDR. The symbol "∨" denotes the component-wise OR of two vectors di = (f1, f2, f3, f4, f5, f6, f7, f8), (i = 1, 2):

d1 ∨ d2 = (f1(d1) ∨ f1(d2), f2(d1) ∨ f2(d2), …, f8(d1) ∨ f8(d2))

The Third Layer:

δ10: D_SCM = O1 ∨ O2 ∨ O3 ∨ O4 ∨ O5 ∨ O6 ∨ O7    (10)
The third layer further combines the target nodes {O1, …, O7}; the "∨" operation has the same form as above.
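As an illustrative sketch (not code from the paper), the second- and third-layer structural equations can be implemented directly over one-hot decision vectors. The helper names and the fault partitions passed to `combine` are assumptions made for the example:

```python
# Sketch of the second- and third-layer structural equations (Eqs. 3-10).
# F_eq, F1, F2 stand for the fault partitions F_{1=2}, F_1, F_2.

D0 = (0,) * 8  # d0: no fault

def d(i):
    """Decision vector d_i: one-hot over the eight faults (1-indexed)."""
    return tuple(1 if k == i - 1 else 0 for k in range(8))

def vec_or(a, b):
    """Component-wise OR of two decision vectors (the "∨" operation)."""
    return tuple(x | y for x, y in zip(a, b))

def in_set(dv, fault_set):
    """True if the fault encoded by dv lies in fault_set."""
    return any(dv[f - 1] == 1 for f in fault_set)

def combine(D1, D2, F_eq, F1, F2):
    """Second layer (delta_3..delta_9) and third layer (delta_10)."""
    O = [
        vec_or(D1, D2) if in_set(D1, F_eq) and in_set(D2, F_eq) else D0,  # O1
        D1 if in_set(D1, F1) and in_set(D2, F_eq | F1) else D0,           # O2
        D2 if in_set(D1, F_eq | F2) and in_set(D2, F2) else D0,           # O3
        D1 if in_set(D1, F_eq) and in_set(D2, F1) else D0,                # O4
        D2 if in_set(D1, F2) and in_set(D2, F_eq) else D0,                # O5
        vec_or(D1, D2) if in_set(D1, F1) and in_set(D2, F2) else D0,      # O6
        vec_or(D1, D2) if in_set(D1, F2) and in_set(D2, F1) else D0,      # O7
    ]
    result = D0
    for o in O:                       # delta_10: OR over all targets
        result = vec_or(result, o)
    return result
```

With the partition derived later in Sect. 3.3 (F_eq = {2, 8}, F1 = {5, 6, 7}, F2 = {1, 3, 4}), `combine(d(5), d(2), F_eq, F1, F2)` returns d(5), because the first method is the better diagnoser of fault f5.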
3 Experimental Study

The experimental platform of this paper is the chemical-system simulation platform TEP, which has been widely used as a benchmark for control-system research in recent years. The public data set comes from https://github.com/camaramm/tennessee-eastman-profBraatz. For the eight faults selected in Sect. 2.1, 480 fault samples per fault form data_train and 800 fault samples per fault form data_test. In this section, the Gaussian naive Bayes method C_GNB and the K-nearest neighbor method C_KNN are selected as the preliminary diagnosis methods. C_GNB and C_KNN are each trained on data_train, and data_test is then input to obtain the corresponding diagnosis results R_GNB, R_KNN. The FDR performance of R_GNB, R_KNN is analyzed, and the causal graph and structural equations of the SCM fault diagnosis model are established accordingly. Finally, R_GNB, R_KNN are input into the SCM fault diagnosis model to obtain the global decision set D_SCM, and the FDR is used to verify the effectiveness of our method.
3.1 Gaussian Naive Bayes Method

The Gaussian naive Bayes (GNB) method assumes that each monitoring variable independently predicts the fault category. The combination of the predictions of all monitored variables gives the final prediction: the method returns the probability of the dependent variable belonging to each class and assigns the final classification to the class with the highest probability. The FDR of the diagnosis result R_GNB on each fault is analyzed; the results are reported in Table 5.
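As a sketch of the idea (not the implementation used in the paper, which may rely on a standard library), a minimal Gaussian naive Bayes classifier over continuous monitoring variables looks like this:

```python
import math
from collections import defaultdict

def fit_gnb(X, y):
    """Estimate the per-class prior and per-feature Gaussian mean/variance."""
    groups = defaultdict(list)
    for row, label in zip(X, y):
        groups[label].append(row)
    priors, stats = {}, {}
    for label, rows in groups.items():
        priors[label] = len(rows) / len(X)
        feats = []
        for col in zip(*rows):
            mu = sum(col) / len(col)
            var = sum((v - mu) ** 2 for v in col) / len(col) + 1e-9  # jitter
            feats.append((mu, var))
        stats[label] = feats
    return priors, stats

def predict_gnb(priors, stats, x):
    """Pick the class with the highest posterior log-probability."""
    best, best_lp = None, -math.inf
    for label, feats in stats.items():
        lp = math.log(priors[label])
        for v, (mu, var) in zip(x, feats):
            lp += -0.5 * math.log(2 * math.pi * var) - (v - mu) ** 2 / (2 * var)
        if lp > best_lp:
            best, best_lp = label, lp
    return best
```

The "naive" assumption appears in the inner loop: per-feature log-likelihoods are simply summed, i.e., the monitoring variables are treated as conditionally independent given the fault class.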
3.2 K-Nearest Neighbor Method

The core idea of the K-nearest neighbor (KNN) method is that if most of the K most similar samples in the feature space belong to a certain category, then the sample itself also belongs to that category. The KNN method in this paper sets K to 5. The FDR of the diagnosis result R_KNN on each fault is analyzed; the results are reported in Table 6.
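A minimal sketch of the KNN rule with K = 5 (again illustrative, not the paper's implementation):

```python
import math
from collections import Counter

def knn_predict(train_X, train_y, x, k=5):
    """Majority vote among the k nearest training samples (Euclidean distance)."""
    neighbours = sorted(
        (math.dist(row, x), label) for row, label in zip(train_X, train_y)
    )[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]
```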
3.3 SCM Fault Diagnosis

First, the diagnosis results R_GNB, R_KNN are analyzed with the threshold θ_FDR = 2% set as described in Sect. 2.1. The FDRs of C_GNB and C_KNN on the eight faults are given in Tables 5 and 6, and the comparison is shown in Table 7.

Table 5 FDR(%) of GNB diagnosis results

        f1      f2      f3      f4      f5      f6     f7      f8     AFDR
GNB     94.62   96.12   21.12   90.38   68.38   92     67.12   100    78.75

Table 6 FDR(%) of KNN diagnosis results

        f1      f2      f3      f4      f5      f6      f7      f8      AFDR
KNN     98.5    97.5    72.12   98.5    41.25   84.88   46.62   98.62   79.75

Table 7 FDR(%) comparison results of the GNB method and the KNN method on each fault

        f1      f2      f3      f4      f5      f6      f7      f8
C_GNB   94.62   96.12   21.12   90.38   68.38   92      67.12   100
C_KNN   98.5    97.5    72.12   98.5    41.25   84.88   46.62   98.62

From Sect. 2.1 and Table 7, the sets F_GNB=KNN, F_GNB, F_KNN are obtained as follows, where F_GNB=KNN contains the faults that both GNB and KNN diagnose well, F_GNB the faults that GNB diagnoses better, and F_KNN the faults that KNN diagnoses better:

F_GNB=KNN = {f2, f8}
F_GNB = {f5, f6, f7}
F_KNN = {f1, f3, f4}

Then, according to these analysis results, the causal graph and structural equations of the SCM are constructed. The causal graph G is shown in Fig. 4. The diagnostic results R_GNB, R_KNN are the exogenous variables {R_GNB, R_KNN}; the transformed decision sets D_GNB, D_KNN are the endogenous nodes of the model, both with states {d1, d2, d3, d4, d5, d6, d7, d8} corresponding to the eight faults. Target nodes {O1, O2, O3, O4, O5, O6, O7} are then constructed to further combine the decisions in D_GNB and D_KNN. The state of each target node is:
Fig. 4 The causal graph G of the SCM fault diagnosis model constructed by GNB and KNN
O1 = {d0, d2, d8, d2,8}
O2 = {d0, d5, d6, d7}
O3 = {d0, d1, d3, d4}
O4 = {d0, d2, d8}
O5 = {d0, d2, d8}
O6 = {d0, d1,5, d1,6, d1,7, d3,5, d3,6, d3,7, d4,5, d4,6, d4,7}
O7 = {d0, d1,5, d1,6, d1,7, d3,5, d3,6, d3,7, d4,5, d4,6, d4,7}

The global decision node D_SCM selects the decision from the target nodes {O1, …, O7} as the result; its state is the combination of the target-node states:

D_SCM ∈ {di (i = 1, …, 8)} ∪ {d1,5, d1,6, d1,7, d2,8, d3,5, d3,6, d3,7, d4,5, d4,6, d4,7}

According to the causal graph G, the structural equations δ = {δ1, δ2, δ3, δ4, δ5, δ6, δ7, δ8, δ9, δ10} are constructed as in Sect. 2.2, with D1 = D_GNB, D2 = D_KNN, F1=2 = F_GNB=KNN, F1 = F_GNB, and F2 = F_KNN. For the completed SCM fault diagnosis model, the preliminary diagnosis results R_GNB, R_KNN are input to obtain the global result D_SCM. The corresponding analysis results are shown in Table 8.

Table 8 FDR(%) of SCM diagnosis results

        f1      f2      f3      f4      f5      f6      f7      f8      AFDR
SCM     98.62   96.38   76.25   99.5    69.62   95.38   69.12   100     88.11
Table 9 FDR(%) comparison results of the diagnostic results of the three methods

        f1      f2      f3      f4      f5      f6      f7      f8      AFDR
GNB     94.62   96.12   21.12   90.38   68.38   92      67.12   100     78.75
KNN     98.5    97.5    72.12   98.5    41.25   84.88   46.62   98.62   79.75
SCM     98.62   96.38   76.25   99.5    69.62   95.38   69.12   100     88.11
3.4 Performance Comparison and Analysis

The diagnostic results D_GNB, D_KNN of C_GNB, C_KNN are compared with the diagnostic results D_SCM of our method in terms of the FDR on each fault and the AFDR of each method; the results are shown in Table 9. Compared with C_GNB and C_KNN, our method has the best FDR on faults f1, f3, f4, f5, f6, f7, f8 (i.e., TEP faults 1, 5, 7–9, 12, 14) and the highest AFDR. The results show that, in small-sample fault diagnosis on the chemical simulation platform TEP, our method effectively combines the diagnosis results of the individual machine learning methods into an overall better diagnosis. For example, owing to the lack of fault samples, the diagnostic performance of the GNB method on fault f3 and of the KNN method on faults f5, f7 is low, but the global results of our method are significantly improved.
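As a quick arithmetic check, the AFDR column of Table 9 is simply the mean of the eight per-fault FDR values (recomputed here for the SCM and KNN rows; small rounding differences are possible for values the table reports without decimals):

```python
# Recompute AFDR = mean FDR over the eight faults for two rows of Table 9.
scm_fdr = [98.62, 96.38, 76.25, 99.5, 69.62, 95.38, 69.12, 100]
knn_fdr = [98.5, 97.5, 72.12, 98.5, 41.25, 84.88, 46.62, 98.62]

scm_afdr = sum(scm_fdr) / len(scm_fdr)   # close to 88.11, as tabulated
knn_afdr = sum(knn_fdr) / len(knn_fdr)   # close to 79.75, as tabulated
```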
4 Conclusion

Machine learning has many successful applications in the field of chemical-system fault diagnosis. However, because the products of chemical systems are hazardous, fault data are difficult to collect in practice, and the FDR of machine learning diagnosis methods trained on a small number of fault samples decreases. For small-sample fault diagnosis of chemical systems, this paper therefore proposes a diagnostic model based on the SCM, which further combines the diagnostic results of two machine learning methods into a global result and thereby improves the overall FDR. The method does not rely on expert knowledge to build the diagnostic model; it only combines the diagnostic results of different machine learning methods. The experimental results confirm the effectiveness of our method: the FDR on each fault and the AFDR of the method are significantly improved.

Acknowledgements The content of this article is supported by the National Natural Science Foundation of China (62003157), the Research Foundation of Education Bureau of Hunan Province (22C0223, 21B0434), and the Science and Technology on Reactor System Design Technology Laboratory open project KFKT-24-2021006.
References

1. Xu, H., Ren, T., Mo, Z., et al.: A fault diagnosis model for Tennessee Eastman processes based on feature selection and probabilistic neural network. Appl. Sci. 12(17), 8868 (2022)
2. Zhu, J., Ge, Z., Song, Z., et al.: Large-scale plant-wide process modeling and hierarchical monitoring: a distributed Bayesian network approach. J. Process Control 65, 91–106 (2018)
3. Chen, G., Ge, Z.: Hierarchical Bayesian network modeling framework for large-scale process monitoring and decision making. IEEE Trans. Control Syst. Technol. 28(2), 671–679 (2018)
4. Onel, M., Kieslich, C.A., Pistikopoulos, E.N.: A nonlinear support vector machine-based feature selection approach for fault detection and diagnosis: application to the Tennessee Eastman process. AIChE J. 65(3), 992–1005 (2019)
5. Wang, Y., Pan, Z., Yuan, X., et al.: A novel deep learning based fault diagnosis approach for chemical process with extended deep belief network. ISA Trans. 96, 457–467 (2020)
6. Feng, L., Zhao, C.: Fault description based attribute transfer for zero-sample industrial fault diagnosis. IEEE Trans. Industr. Inf. 17(3), 1852–1862 (2020)
7. Wu, D., Zhao, J.: Process topology convolutional network model for chemical process fault diagnosis. Process Saf. Environ. Prot. 150, 93–109 (2021)
8. Wu, H., Zhao, J.: Deep convolutional neural network model based chemical process fault diagnosis. Comput. Chem. Eng. 115, 185–197 (2018)
9. Richens, J.G., Lee, C.M., Johri, S.: Improving the accuracy of medical diagnosis with causal machine learning. Nat. Commun. 11(1), 3923 (2020)
10. Tang, K., Niu, Y., Huang, J., et al.: Unbiased scene graph generation from biased training. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 3716–3725 (2020)
11. Niu, Y., Tang, K., Zhang, H., et al.: Counterfactual VQA: a cause-effect look at language bias. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12700–12710 (2021)
12. Zhang, D., Zhang, H., Tang, J., et al.: Causal intervention for weakly-supervised semantic segmentation. Adv. Neural Inf. Process. Syst. 33, 655–666 (2020)
13. Zucker, J., Paneri, K., Mohammad-Taheri, S., et al.: Leveraging structured biological knowledge for counterfactual inference: a case study of viral pathogenesis. IEEE Trans. Big Data 7(1), 25–37 (2021)
14. Downs, J.J., Vogel, E.F.: A plant-wide industrial process control problem. Comput. Chem. Eng. 17(3), 245–255 (1993)
15. Wu, P., Lou, S., Zhang, X., et al.: Data-driven fault diagnosis using deep canonical variate analysis and fisher discriminant analysis. IEEE Trans. Industr. Inf. 17(5), 3324–3334 (2020)
Fault Prognosis of Nuclear Reactor Make-Up Pump Based on AMESim

Haotan Li, Zhi Chen, Xuecen Zhao, Yuan Min, and Yifan Jian
Abstract The breakdown-maintenance and periodic-maintenance strategies currently applied to the nuclear reactor make-up pump cannot handle latent abnormalities or sudden failures, which disturbs normal reactor operation and increases maintenance costs. To address this, condition-based maintenance is applied to the make-up pump to predict its remaining useful life. The simulation software AMESim was used to construct a simulation model of the make-up pump, and the leakage component in the software was modified to meet the simulation requirements, enabling full-life-cycle simulation of the pump's leakage failure. Multiple sets of experimental data were generated with the Monte Carlo simulation method, and a long short-term memory network optimized by the Bayesian algorithm achieved a high-precision prediction of the remaining life of the pump under leakage failure.

Keywords Make-up pump · AMESim · Long short-term memory network · Fault prognosis
1 Introduction

The make-up pump is one of the key pieces of equipment in the make-up system of a nuclear reactor; it maintains the water inventory of the reactor coolant system and performs the initial filling and pressurization of the reactor coolant system and the primary system [1]. It is therefore important for the safety and stability of a nuclear power plant to track the status of the make-up pump in time and keep it operating reliably.

At present, the maintenance of nuclear reactor make-up pumps still relies mainly on breakdown maintenance and periodic maintenance; breakdown maintenance

H. Li (B) · Z. Chen · X. Zhao · Y. Min · Y. Jian
Nuclear Power Institute of China, Chengdu, China
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
Y. S. Shmaliy and A. Nayyar (eds.), 7th International Conference on Computing, Control and Industrial Engineering (CCIE 2023), Lecture Notes in Electrical Engineering 1047, https://doi.org/10.1007/978-981-99-2730-2_80
cannot cope with sudden failures, and periodic maintenance increases maintenance costs. It is therefore necessary to ensure the safety and reliability of the make-up pump through condition-based maintenance and prediction of its remaining useful life, so that faults can be eliminated before they occur while the equipment's life is fully used. There are currently three main types of methods for remaining-useful-life prediction of mechanical equipment: reliability-based models, physics-based models, and data-driven models. Reliability-based models predict from statistical characteristics and have low accuracy; physics-based models fail when an accurate mathematical model of the system cannot be established; data-driven methods recognize or learn the healthy or unhealthy behavior of the target system from historical data and require no prior knowledge of the system. Although typical failure data for some critical equipment are hard to obtain in practice, data-driven failure prediction has been widely applied and promoted thanks to its flexible adaptability and ease of use [2]. In recent years, data-driven prediction methods have mainly centered on neural networks, and Long Short-Term Memory (LSTM) networks are commonly used for time-series prediction. Qixiao Zhang et al. [3] used Bayesian-optimized LSTM for time-series prediction of aero-engine sensor data, and the prediction accuracy of the optimized network improved significantly over other algorithms. Yongtao Li [4] optimized an LSTM with a Bayesian neural network and achieved better results in identifying the health status of an external gear pump. ZeYi Shang [5] used AMESim (Advanced Modeling Environment for Simulations of engineering systems) software to build a plunger pump model for internal-leakage fault simulation and predicted the leakage flow with a support vector machine.
This paper focuses on the typical leakage fault of the nuclear reactor make-up pump. A make-up pump model was established in the simulation software AMESim to provide failure data for leakage-failure prediction, and varying degrees of leakage failure were simulated by changing the leakage gap between the piston and the sleeve. An LSTM network was used as the prediction method, with the leakage of the pump chosen as the health index and the network hyperparameters optimized by the Bayesian method. This finally achieved prediction of the leakage development trend at the early stage of failure and calculation of the remaining useful life, providing technical support for condition-based maintenance of the make-up pump.
2 Make-Up Pump Model Establishment

The object of this study is an actual make-up pump used in engineering: a vertical three-cylinder single-acting reciprocating pump, mainly composed of a transmission box, coupling, liquid cylinder, lubrication system, cooling system, motor, etc. For AMESim modeling, the parts to be simulated were analyzed according to the structure and function of the pump: the three-phase squirrel-cage motor and motor speed-control circuit in the motor part; the worm gear
and crank connecting rods in the transmission part; and the piston, the leakage-control element, and the inlet and outlet control valves in the plunger part were the components that had to be modeled. The remaining parts of the make-up pump were simplified in the modeling.
2.1 Core Component Design

A) The actual motor is a three-phase squirrel-cage motor whose speed control is realized by PI control. Accordingly, the EMDSCIM01 component was selected in the software. The component interface relationships are shown in Fig. 1: ports 1 to 3 transfer current and voltage, port 4 transfers rotor torque and speed, port 5 transfers heat and temperature, and port 6 transfers stator torque and speed.

B) In practice, motor voltage control is realized by converting the three-phase motor into two DC motors via the dq transformation, calculating the voltage from the required torque, and then computing the corresponding three-phase voltage through the inverse dq transformation. In the software, only the EMDSCIMFOC01 component includes a complete circuit for this process. Its interface relationships are shown in Fig. 2: port 1 inputs the current motor speed, ports 2 to 4 input the current three-phase currents of the motor, ports 5 to 7 output the voltage required by the motor in polar coordinates, port 8 inputs the motor-voltage limit, and port 9 inputs the required torque.

C) WORMGR1 was selected for the worm gear; its interface relationships are shown in Fig. 3. Port 1 transfers the torque and speed after reduction to subsequent elements, and port 2 transfers the torque and speed before reduction from the preceding component.

D) The rear end of the worm gear is connected to three crank connecting rods, for which the CRANK0 component was selected. Its interface relationships are shown in Fig. 4. Port 1 transfers torque and rotational speed with the preceding element, and port 2 transfers the speed, displacement and force after the rotary motion is converted into linear motion. According to the actual design, three

Fig. 1 Three-phase squirrel-cage motor component
Fig. 2 Voltage control component
Fig. 3 Worm gear component
crank connecting rods were set at initial angles of 0°, 120° and 240° respectively, so that the three pistons discharge alternately and form a relatively stable total flow.

E) The piston is connected behind the connecting rod, and the BAP12 component was selected in the software. Its interface relationships are shown in Fig. 5. Port 1 transfers the flow, pressure and volume of the liquid inlet and outlet, and ports 2 and 3 transfer the speed, displacement and force at the two sides of the piston. The element calculates the output flow, volume and force at the other end from the force received, the displacement pushed and the pressure.

F) The software provides a component, BAF01, specially designed for leakage [6]. Its interface relationships are shown in Fig. 6. Ports 1 and 2 transfer pressure, output flow and volume at the two sides, while ports 3 and 4 transfer force, displacement

Fig. 4 Crank connecting rods component
Fig. 5 Piston component
Fig. 6 Leakage component
and speed, with the same functions as in the piston element. The component can set the leakage gap and calculates the leakage amount from the leakage gap, size, speed, etc. The leakage gap of this component is set before simulation and cannot be changed during simulation. To simulate the increase of the leakage gap as the fault develops, the component was modified with the model editor provided in the software, and its code was changed so that it still operates normally. Its interface relationships are shown in Fig. 7: ports 1 to 4 have the same functions as in BAF01, a signal input port 5 was added to receive the leakage gap, and the leakage-gap parameter of the original BAF01 was removed. The new component can change the leakage gap according to the data input from the signal port during the simulation, which meets the requirement of subsequently varying the fault degree.

Fig. 7 Newly designed leakage component
2.2 Overall Model of Make-Up Pump

According to the structure and working principle of the actual make-up pump, and after considering the function and interface relationships of each component, all components were assembled into the overall simulation model of the pump. The model is shown in Fig. 8: the motor part on the left, the transmission part on the upper right, and the plunger part on the lower right. Besides these three parts, some circuits, pipelines and other components make the model connections reasonable. The consistency between the simulation model and the actual pump was verified by a load lifting test. The comparison between the simulated pressure-flow (P-Q) curve and the real curve is shown in Fig. 9. The simulated P-Q curve is very close to the real behavior, in which the flow hardly changes with pressure. The simulation model can therefore be used for subsequent pump fault data acquisition.

Fig. 8 Overall model of make-up pump
Fig. 9 Comparison of actual and simulated P-Q curves
3 Make-Up Pump Fault Injection

3.1 Make-Up Pump Leakage Mechanism and Simulation

The gap between the piston and sleeve in this make-up pump is sealed by a stuffing box. As working time increases, the packing wears continuously, a gap forms between the piston and sleeve, and liquid leaks out through the gap. Since injecting faults into an actual pump is costly and time-consuming, the failure was simulated by the better-controlled software fault-injection method, and the change in packing-wear degree was simulated by changing the gap of the leakage element. Since a leakage rate of 2 L/h is the criterion for serious failure of this pump, the make-up pump is deemed to have failed once this level is reached, so the leakage gap was increased during the simulation until the leakage rate reached 2 L/h. To simulate the whole life cycle of the make-up pump, the rule by which the leakage gap develops had to be known. The stuffing box of this pump uses a mixture of carbon fiber and graphite, so the results of J. A. Williams' friction experiment of carbon-graphite composite against stainless steel [7] were used as a reference. Following the wear curve of the composite material in that work, the gaps of the three leakage elements were made to change according to the trend of the experimental result; the gap curve is shown in Fig. 10. Since the state of the model is determined once parameters such as the leakage clearance are fixed, the simulation results are a series of state points over the whole life cycle. The simulation was run for 20 s with a step of 0.01 s, giving 2000 state points in total. Over the whole process, the gap rose from 0 mm to 0.01 mm following the trend of the wear curve. The corresponding total leakage flow curve of the three pistons is shown in Fig. 11.
This curve is the instantaneous flow curve; the average flow near the last state point is about 2 L/h, the end of the working life.

Fig. 10 Leakage clearance curve
Fig. 11 Total leakage curve
3.2 Simulation Data Processing

To improve the generalization ability of machine learning, multiple sets of whole-life-cycle historical data had to be provided for training. The Monte Carlo simulation method [8] was therefore used to draw 50 random values, within their possible fluctuation ranges, for the motor speed and piston diameter that directly affect the leakage, forming 50 pairs of motor speed and plunger diameter. The model was then simulated with these 50 pairs, and 50 leakage curves were obtained, representing 50 different full-life-cycle processes. The instantaneous leakage was then converted into average leakage: a sliding window with a length matching the reciprocating cycle of the pump was used to average the instantaneous leakage, yielding a smooth average-leakage curve. Considering that a real measured leakage curve would be affected by measurement error and would not be a perfectly regular curve, Gaussian white noise was superimposed on the data so that the curve fluctuates within a reasonable range. One of the curves is shown in Fig. 12. Considering that 50 pieces of data might not achieve a good training effect for machine learning, each piece of data was augmented by upsampling and then random

Fig. 12 Leakage curve after processing
downsampling [9, 10]. The augmentation method is given in Algorithm 1; each piece of data was augmented 19 times, forming a total of 1000 pieces of data.
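A rough sketch of this processing chain (sliding-window averaging, additive Gaussian noise, and up/down-sampling augmentation); the trend function, window length and noise level below are illustrative assumptions, not the paper's values:

```python
import random

def sliding_average(series, window):
    """Average the instantaneous leakage over a window (one pump cycle)."""
    out = []
    for i in range(len(series) - window + 1):
        out.append(sum(series[i:i + window]) / window)
    return out

def add_noise(series, sigma, rng):
    """Superimpose Gaussian white noise (measurement-like error)."""
    return [v + rng.gauss(0.0, sigma) for v in series]

def augment(series, rng):
    """Upsample by linear interpolation (midpoints), then randomly
    downsample back to the original length."""
    fine = []
    for a, b in zip(series, series[1:]):
        fine.append(a)
        fine.append((a + b) / 2.0)
    fine.append(series[-1])
    keep = sorted(rng.sample(range(len(fine)), len(series)))
    return [fine[i] for i in keep]

rng = random.Random(0)
raw = [0.001 * t ** 1.5 for t in range(200)]          # wear-like leakage trend
avg = sliding_average(raw, window=10)                 # smoothed average leakage
noisy = add_noise(avg, sigma=0.01, rng=rng)           # measurement-like noise
variants = [augment(noisy, rng) for _ in range(19)]   # 19 augmented copies
```

Each augmented copy keeps the original length and overall trend while perturbing which time points are sampled, which is the spirit of the up/down-sampling augmentation the paper describes.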
4 Make-Up Pump Failure Prediction

4.1 Data Pre-processing

Since the leakage is used as the fault-evaluation standard in engineering, it directly indicates the severity of the fault, i.e., it serves as the health index. Data normalization improves the convergence speed and the accuracy of the model, so the leakage data was normalized to the range [0, 1] as the fault index; the calculation is shown in Formula (1). This indicator is also the quantity the prediction model forecasts; one of the indexes is shown in Fig. 13.
Fig. 13 Failure index curve
x' = (x − min(X)) / (max(X) − min(X))    (1)
X represents the time series, x one value of X, and x' the normalized value. After normalization of the 1000 series, 600 of them were randomly selected as the training set, 200 as the validation set and 200 as the test set. The training set was used for network training, the validation set to evaluate the current network during training, and the test set to finally evaluate the prediction ability of the network. In training, all 2000 state points of each training series were involved. During validation and testing, the data before a small leakage occurs, i.e., the first 700 points, were provided to the network, which then predicted the development trend of the next 1300 points. The error between the end-of-life time predicted by the network and that of the test data, together with the fitting degree on the test data, served as the evaluation criteria. Finally, to give the network the ability to predict future sequences from existing ones, sliding windows were used to turn the training data into input sequences for the network, with the value following each window as the target output, as in Algorithm 2. If the sliding-window length and the network structure are chosen appropriately, the network can learn the relationship between the preceding and following parts of a sequence and predict the trend of the unknown part.
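The normalization of Formula (1) and the window construction of Algorithm 2 can be sketched as follows (the window length here is illustrative, not the paper's value):

```python
def normalize(series):
    """Min-max normalization to [0, 1] (Formula 1)."""
    lo, hi = min(series), max(series)
    return [(v - lo) / (hi - lo) for v in series]

def make_windows(series, window):
    """Sliding-window pairs: each input is `window` consecutive points,
    the target is the point that follows (the Algorithm-2 idea)."""
    X, y = [], []
    for i in range(len(series) - window):
        X.append(series[i:i + window])
        y.append(series[i + window])
    return X, y

index = normalize([2.0, 4.0, 6.0, 8.0, 10.0])   # -> [0.0, 0.25, 0.5, 0.75, 1.0]
X, y = make_windows(index, window=2)
```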
4.2 LSTM Network

The LSTM network is a special kind of recurrent neural network. It retains the temporal-correlation properties of recurrent neural networks while using gated recurrent units to control the transfer of historical information between neurons, which avoids the vanishing- and exploding-gradient problems of plain recurrent networks [11]. Because the wear of the stuffing box is time-dependent, leakage-failure prediction can be cast as LSTM time-series prediction.
Fig. 14 LSTM node structure
Figure 14 shows the basic structure of the LSTM node [12–14], where x t is the input, ct is the internal state control unit, ht is the external state of the hidden layer, and is also the output. LSTM controls the path of information transfer through the gating mechanism, which includes input gate it , forgetting gate f t , and output gate ot , respectively, where the forgetting gate f t controls how much information needs to be forgotten for the internal state ct-1 at the previous moment; the input gate it ∼ controls the candidate state ct at the current moment The output gate ot controls how much information needs to be output from the internal state ct to the external state ht at the current moment. The gate in LSTM node is a soft gate with a value between (0, 1), which means that information is allowed to pass through in a certain proportion, and the three gates are calculated as: i t = σ (Wi xt + Ui h t−1 + bi )
(2)
f_t = \sigma(W_f x_t + U_f h_{t-1} + b_f)
(3)
o_t = \sigma(W_o x_t + U_o h_{t-1} + b_o)
(4)
\sigma(\cdot) is the logistic function with output interval (0, 1), x_t is the input at the current time, and h_{t-1} is the external state at the previous time. The calculation process of the whole neuron is: 1) using the external state h_{t-1} at the previous time and the input x_t at the current time, calculate the three gates and the candidate state \tilde{c}_t; 2) combine the forget gate f_t and the input gate i_t to update the memory unit c_t; 3) combined with the output gate o_t, transfer the internal state information to the external state h_t. Through the LSTM recurrent unit, the whole network can establish long-distance temporal dependencies. The above process can be summarized as follows:
876
H. Li et al.
\begin{bmatrix} \tilde{c}_t \\ o_t \\ i_t \\ f_t \end{bmatrix} = \begin{bmatrix} \tanh \\ \sigma \\ \sigma \\ \sigma \end{bmatrix} \left( W \begin{bmatrix} x_t \\ h_{t-1} \end{bmatrix} + b \right)
(5)
c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t
(6)
h_t = o_t \odot \tanh(c_t)
(7)
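Equations (2)–(7) can be checked with a small NumPy sketch of one LSTM cell step; the stacking order of the weight blocks follows the vector in Eq. (5), and the random weights are placeholders, not trained values:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    """One LSTM cell update following Eqs. (2)-(7).

    W, U, b stack the candidate/output/input/forget blocks (as in Eq. (5)):
    shapes (4*H, D), (4*H, H), (4*H,) for input size D, hidden size H.
    """
    H = h_prev.shape[0]
    z = W @ x_t + U @ h_prev + b
    c_tilde = np.tanh(z[0:H])           # candidate state
    o_t = sigmoid(z[H:2 * H])           # output gate
    i_t = sigmoid(z[2 * H:3 * H])       # input gate
    f_t = sigmoid(z[3 * H:4 * H])       # forget gate
    c_t = f_t * c_prev + i_t * c_tilde  # Eq. (6): internal state update
    h_t = o_t * np.tanh(c_t)            # Eq. (7): external state / output
    return h_t, c_t

rng = np.random.default_rng(0)
D, H = 3, 5
h, c = np.zeros(H), np.zeros(H)
W = rng.normal(size=(4 * H, D))
U = rng.normal(size=(4 * H, H))
b = np.zeros(4 * H)
h, c = lstm_step(rng.normal(size=D), h, c, W, U, b)
print(h.shape, c.shape)  # (5,) (5,)
```

Because h_t = o_t ⊙ tanh(c_t) with o_t in (0, 1), every component of the output is strictly inside (−1, 1).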
4.3 Bayesian Optimization Algorithm Many parameters affect the performance of an LSTM network, and the range of their empirical values is wide. Finding the optimal values manually is time-consuming and lacks theoretical guidance and guarantees, so optimization algorithms are usually used to select the model hyperparameters. Existing hyperparameter estimation methods mainly include grid search, random search and Bayesian optimization [4, 15]. Grid search determines the optimal value by evaluating every point on a grid in hyperparameter space, which is computationally expensive; random search seeks the global optimum by randomly selecting sample points in the search range, and can improve search efficiency with the help of heuristic ideas, but introducing randomness in the hyperparameter space still has no effective theoretical guarantee. Bayesian optimization regards the optimization objective as a black-box function: after observing the function value at the current sampling point, it estimates the posterior distribution of the objective function using Bayes' theorem, selects the next hyperparameter combination to sample according to that distribution, and so finds the optimal solution of the function [16]. The Bayesian optimization algorithm makes full use of the information of known data points while also adding some randomness to avoid falling into local extrema, and it is one of the few hyperparameter estimation methods with a good theoretical convergence guarantee. This paper used the Bayesian optimization method to optimize the hyperparameters of the LSTM network. The optimization process is given in Algorithm 3, taking minimization of the objective function as an example.
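A minimal, self-contained sketch of the Gaussian-process / expected-improvement loop behind such an optimizer is shown below; the 1-D toy objective (a stand-in for "train an LSTM, return its validation error"), the RBF kernel settings and the grid are illustrative assumptions, not the paper's setup:

```python
import math
import numpy as np

def rbf(a, b, length=0.2):
    # Squared-exponential kernel between two 1-D point sets.
    d = a.reshape(-1, 1) - b.reshape(1, -1)
    return np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(xs, ys, grid, noise=1e-6):
    # Standard GP regression posterior mean/std on a query grid.
    K = rbf(xs, xs) + noise * np.eye(len(xs))
    Ks = rbf(xs, grid)
    mu = Ks.T @ np.linalg.solve(K, ys)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks), axis=0)
    return mu, np.sqrt(np.maximum(var, 1e-12))

def expected_improvement(mu, sigma, y_best):
    # EI for minimization: how much we expect to undercut y_best.
    z = (y_best - mu) / sigma
    Phi = 0.5 * (1.0 + np.vectorize(math.erf)(z / math.sqrt(2.0)))
    phi = np.exp(-0.5 * z ** 2) / math.sqrt(2.0 * math.pi)
    return sigma * (z * Phi + phi)

def objective(x):  # stand-in black-box function, minimum at x = 0.65
    return (x - 0.65) ** 2

grid = np.linspace(0.0, 1.0, 201)
rng = np.random.default_rng(1)
xs = rng.uniform(0.0, 1.0, 4)      # random initial samples
ys = objective(xs)
for _ in range(12):                # fit surrogate, propose, evaluate
    mu, sigma = gp_posterior(xs, ys, grid)
    x_next = grid[np.argmax(expected_improvement(mu, sigma, ys.min()))]
    xs = np.append(xs, x_next)
    ys = np.append(ys, objective(x_next))
best = xs[np.argmin(ys)]           # best point found in the history
print(round(float(best), 3))
```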
4.4 Bayesian Optimized LSTM Prediction Model The hyperparameters that have the greatest impact on the performance of the LSTM network are the number of network layers, the number of nodes per layer, the number of training epochs, the number of samples input per training batch (batch_size), and the sliding-window length (time_window) used in data processing. The Bayesian optimization algorithm was used to optimize these parameters, and based on experience [17–19] the search ranges were set as: number of network layers from 1 to 3, number of nodes per layer from 16 to 128, number of training epochs from 20 to 200, batch size from 16 to 256, and sliding-window length from 20 to 100. The optimization process is as follows: A) Set the minimization target as the prediction error on the validation set, set the number of runs N of the optimization algorithm, and let the hyperparameters be randomly selected within their ranges; B) Build the network with the current hyperparameters and input the training set to complete training; C) Input the validation set, let the network predict, and calculate the error between the prediction result and the validation set; D) Use the Bayesian optimization algorithm to propose the next set of hyperparameters and replace the current ones; E) Repeat steps B) to D) N times; F) Select from the history the set of parameters that minimizes the prediction error. Finally, the optimal number of network layers found was 3, the numbers of nodes in the three layers were 61, 127 and 60, the batch size was 31, the number of training epochs was 158, and the sliding-window length time_window was 21. These parameters were used to build the network
and complete the training. Finally, the test set was input for prediction; the prediction process is shown in Algorithm 4.
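The recursive prediction in Algorithm 4 can be sketched as follows; the trained network is replaced here by a stub `predict_next` (a hypothetical placeholder continuing the local linear trend, not the paper's model):

```python
import numpy as np

def rollout(history, predict_next, window_len, total_len):
    """Recursively extend `history` to `total_len` points.

    Each step feeds the last `window_len` known/predicted points to the
    model and appends its one-step prediction, as in Algorithm 4.
    """
    seq = list(history)
    while len(seq) < total_len:
        window = np.asarray(seq[-window_len:])
        seq.append(predict_next(window))
    return np.asarray(seq)

# Stub model: continue the local linear trend of the window (illustrative).
def predict_next(window):
    return window[-1] + (window[-1] - window[0]) / (len(window) - 1)

known = np.linspace(0.0, 0.35, 700)  # the first 700 state points
pred = rollout(known, predict_next, window_len=21, total_len=2000)
print(pred.shape)  # (2000,)
```

On this linear toy input the stub simply continues the line, so the 2000-point rollout stays on the same slope; a trained LSTM would instead learn the nonlinear leakage trend from the training curves.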
A comparison of the test curve and the predicted result is shown in Fig. 15. The network predicted that the leakage would reach a serious level at the 1292nd state point, which is extremely close to the 1300th point in the test set, an error of 0.71%. The average absolute error between the predicted result and the test set over all 1300 points was calculated by Eq. (8), giving an average error of 4.32%. It can be concluded that the network fitted this curve very well and could correctly predict the trend of the leakage and the time to reach the end of life.
AverageError = \frac{1}{len} \sum_{i=1}^{len} \left| x_{true,i} - x_{pred,i} \right|
(8)
x_{true,i} represents the i-th element of the test set sequence, x_{pred,i} represents the i-th element of the predicted sequence, and len indicates the length of the test set sequence.
Fig. 15 Example of comparison between test set and prediction result
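Eq. (8) is a direct transcription into code (assuming the mean-absolute-error reading of the formula):

```python
import numpy as np

def average_error(x_true, x_pred):
    """Eq. (8): mean absolute error between test and predicted sequences."""
    x_true, x_pred = np.asarray(x_true), np.asarray(x_pred)
    return float(np.mean(np.abs(x_true - x_pred)))

print(average_error([1.0, 2.0, 4.0], [1.0, 3.0, 3.0]))  # 0.666...
```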
After testing all 200 curves in the test set, the average error over all prediction points was 6.23%, and the average remaining-life prediction error over the 200 curves was 2.26%, which indicates a good prediction of the development trend of the leakage failure.
5 Conclusion In this paper, the leakage failure of the make-up pump in a nuclear reactor was simulated with the software AMESim, and failure data were obtained. The remaining useful life of the make-up pump was then predicted by the Bayesian-optimized LSTM network. The results showed that: A) The established simulation model of the make-up pump could replace the real make-up pump in terms of hydraulic characteristics. After modifying the original leakage component in the software, the leakage gap could be changed in a specified sequence, simulating the gradual wear of the packing and the gradual widening of the gap during the development of the failure, which leads to increasing leakage. B) The Bayesian-optimized LSTM network could predict the remaining useful life of the pump with high accuracy in the case of leakage failure. This research expanded the simulation research on the make-up pump in nuclear reactors, realized more complex fault simulation, and preliminarily realized the prediction of the leakage fault of the make-up pump, laying a technical foundation for the subsequent prognostics and health management of the make-up pump.
References
1. Yu, J.: Nuclear power plant systems and operation. Tsinghua University Press, Beijing (2016)
2. Zhao, X., Kim, J., Warns, K., et al.: Prognostics and health management in nuclear power plants: an updated method-centric review with special focus on data-driven methods. Front. Energy Res. 9, 696785 (2021)
3. Zhang, Q., Dong, P., Wang, K., et al.: Prediction of engine residual life based on Bayesian optimized LSTM. Fire Control Command Control 47(04), 85–89 (2022)
4. Li, Y.: Research on health status identification method of external gear pump. Yanshan University (2021)
5. Shang, Z.: Construction and application of health prediction simulation platform for aviation hydraulic oil pump. Civil Aviation Flight Institute of China (2016)
6. Zhang, J., Hu, Z., Zhang, S., et al.: Fault injection analysis of A10VNO pump internal leakage based on AMESim. Hydraulics Pneumatics Seals 41(07), 69–73 (2021)
7. Williams, J., Morris, J., Ball, A.: The effect of transfer layers on the surface contact and wear of carbon-graphite materials. Tribol. Int. 30(9), 663–676 (1997)
8. Yang, B.: Research on quantum simulation collision experiment based on monte carlo. Institute of Modern Physics, Chinese Academy of Sciences (2021)
9. Oh, C., Han, S., Jeong, J.: Time-series data augmentation based on interpolation. Procedia Comput. Sci. 175, 64–71 (2020)
10. Talavera, E., Iglesias, G., González-Prieto, Á., et al.: Data augmentation techniques in time series domain: a survey and taxonomy. arXiv preprint arXiv:13508 (2022)
11. Miao, K.: Network Short-term load forecasting based on bayesian optimized LSTM. North China Electric Power University (2021)
12. Yang, T., Li, B., Xun, Q.: LSTM-attention-embedding model-based day-ahead prediction of photovoltaic power output using Bayesian optimization. IEEE Access 7, 171471–171484 (2019)
13. Cabrera, D., Guamán, A., Zhang, S., et al.: Bayesian approach and time series dimensionality reduction to LSTM-based model-building for fault diagnosis of a reciprocating compressor. Neurocomputing 380, 51–66 (2020)
14. Qiu, X.: Neural Network and Deep Learning. Machinery Industry Press (2020)
15. Li, Y., Zhang, Y., Wang, J.: A review of bayesian optimization methods for hyperparameter estimation. Comput. Sci. 49(S1), 86–92 (2022)
16. Zhu, Y.: EMD-BOLSTM prediction model for underground coal gasification gas production index. China University of Mining and Technology (2022)
17. Ma, Z.: Application of Bayesian optimized LSTM in power battery SoC estimation. Jiangsu University (2020)
18. Tang, Q.: Research on temperature prediction method of multi-information fusion based on 1DCNN and LSTM. Kunming University of Science and Technology (2022)
19. Zhai, X.: Data driven rolling bearing fault diagnosis and residual life prediction. Beijing Jiaotong University (2021)
D2D Cooperative MEC Multi-objective Optimization Based on Improved MOEAD Qifei Zhao, Gaocai Wang, Zhihong Wang, and Yujiang Wang
Abstract In this paper, we study D2D collaborative MEC multi-objective optimization based on improved MOEAD. The current mobile edge computing (MEC) model, in which tasks are uploaded directly to the MEC server for execution, suffers from high computational pressure on the edge server and underutilization of the resources of idle mobile devices. If collaborative computing is performed using idle devices in the edge network, users' idle resources can be utilized reasonably and the computational capacity of MEC enhanced. In this paper, we propose a partially offloaded MEC model with D2D collaboration (DCM-PO), in which, in addition to local computation and MEC server computation, some tasks can be uploaded to idle D2D devices for auxiliary computation. The MOEAD algorithm is improved with parallel computing (Parallel MOEAD, PMOEAD), and the simulation results show that the PMOEAD model can effectively reduce the energy consumption and latency of the edge network compared with the baseline MEC model. Keywords Mobile edge computing · D2D · Task offloading · Multi-objective optimization · PMOEAD
Q. Zhao (B) School of Electrical Engineering, Guangxi University, Guangxi, Nanning 530004, China e-mail: [email protected] G. Wang · Z. Wang School of Computer and Electronic Information, Guangxi University, Guangxi, Nanning 530004, China Y. Wang College of Mechanical and Automotive Engineering, Guangxi University of Science and Technology, Liuzhou 545001, China © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 Y. S. Shmaliy and A. Nayyar (eds.), 7th International Conference on Computing, Control and Industrial Engineering (CCIE 2023), Lecture Notes in Electrical Engineering 1047, https://doi.org/10.1007/978-981-99-2730-2_81
881
882
Q. Zhao et al.
1 Introduction The rapid development of smart mobile terminals and the mobile Internet has given rise to many new intelligent applications [1, 2]. To address the demand of these applications for computing resources, the Mobile Edge Computing (MEC) model has been proposed, whose core idea is to deploy edge servers with high-speed computing capabilities at the edge of the network, such as cellular base stations, network aggregation points, wireless sensor networks, and drone communication [3]; user devices offload tasks to the edge servers for execution, which reduces the overhead of executing tasks locally. However, the classical MEC model has shortcomings [4, 5]. First, the connection quality between mobile terminals and MEC servers directly affects the task uploading efficiency, and the simultaneous arrival of a large number of computing tasks may still push the MEC servers to their maximum tolerance limit. Second, mobile terminals access MEC servers in a competitive manner, so system throughput and resource utilization are limited by the MEC servers. Finally, the task offloading experience is also affected by the channel quality. Device-to-Device (D2D) communication is one of the key technologies for 5G. In the D2D communication model, each user device has an independent message sending and receiving function, and data can be passed directly from one terminal to another within the signal coverage, which provides support for resource sharing among user devices [6].
In real networks, user terminals have different computational capabilities (depending on the computing chip) and communication capabilities (depending on the communication chip and the communication environment), which leads to two phenomena. First, on the computational side, devices with more advanced computing chips are able to complete the same computations faster and with less energy. Second, on the communication side, even if the amount of data transmitted is the same, devices with higher maximum transmission power or better channel quality to the MEC server can complete the upload faster and consume less energy. The main work and innovations of this paper are as follows: (1) In the classical MEC model, user devices are independent of each other. In this paper, we propose a D2D Collaborative MEC for Partial Offloading (DCM-PO) model, in which idle devices can provide data transit and computation services for task devices through D2D links. (2) The MOEAD algorithm is a classical multi-objective optimization algorithm. In this paper, we propose an algorithmic improvement based on a parallel computing approach, dividing the data set into multiple subsets, each of which is processed at a different computing node.
D2D Cooperative MEC Multi-objective Optimization Based …
883
2 Related Research Work In classical mobile edge computing research, task devices usually establish many-to-one connections with MEC servers, use decision engines to assign pending tasks to local or MEC servers for execution, and finally fetch the computational results; the focus of such research is usually on how to divide tasks and allocate resources. For example, the literature [7] allocates radio resources and computational resources to improve energy efficiency in an IoT network with integrated MEC. In the literature [8], in order to solve the migration problem of complex tasks, a subtask partitioning model is constructed based on the dependencies between tasks, and a genetic algorithm is used to minimize energy consumption under the latency constraint. In the literature [9], wireless power transfer is introduced into MEC, and the energy consumption of the model is minimized under the delay constraint by optimizing the energy transfer beam, offloading task volume, CPU frequency and communication time slice. The literature [10] first normalizes the energy consumption and latency, and then proposes an iterative algorithm to solve the task assignment and power allocation problems, which achieves a balance between latency and energy consumption. Due to the centralized nature of the classical mobile edge computing model, the computational power of the network depends too heavily on the edge servers, and as the computational load gradually increases, the MEC servers become the performance bottleneck of the network, which makes the classical model unsuitable for large-scale networks [11]. Using widely distributed edge devices for auxiliary computing can effectively solve this problem. The literature [12] proposes a fog computing model with hybrid task offloading, which transforms the problem of minimizing task execution cost into a minimum-weight problem on a three-layer graph.
The literature [13] constructed a user device-assisted MEC model that reduces the delay and energy consumption by assigning subtasks to base stations and neighboring users. The literature [14] proposed a collaborative 5G cellular network with fog computing devices, which is solved using the Lyapunov optimization method after formulating the computational offloading problem as a stochastic optimization problem. The literature [15] proposed a four-slot protocol for implementing joint communication and computation in a MEC model consisting of user nodes, auxiliary nodes, and APs. The literature [16] investigates the MEC model with the assistance of user devices and reduces the overall energy consumption under the delay constraint by defining the computational offloading problem as a potential game model.
3 System Model 3.1 Network Model The physical components of the proposed DCM-PO model are shown in Fig. 1. DCM-PO consists of task devices, MEC servers, and D2D devices. The task device generates computational tasks and has a certain computational capacity; usually, executing all tasks locally brings excessive computational overhead. The MEC server is deployed near the cellular base station and connected to the central cloud through a high-speed link to provide computational resources to network-edge users. A D2D device is an idle device with computational or communication resources in the edge network; a task device can directly assign tasks to D2D devices through the D2D link to obtain those resources. Figure 2 shows the task offloading model of DCM-PO. In task assignment, after the application layer of the task device generates computation tasks, the local decision engine first divides the tasks into three parts: MEC computation, D2D-assisted computation and local computation, which are then transmitted to the MEC server, the appropriate D2D device and the local CPU, respectively. When part of the tasks is uploaded to the D2D device, information on how the D2D device should handle this part is provided along with it. The D2D device uses its execution engine to interpret this additional information and can directly derive the amount of tasks and the CPU frequency for its local computation, as well as the amount of tasks and the upload power for uploading to the MEC server; it can then use its local resources to complete the processing of this part of the task without a secondary decision by its own decision engine. To build the decision engine, the latency model of DCM-PO is constructed first. Let the set K represent the task devices and the set I represent the D2D devices; the total number of mobile devices in the model is then |K| + |I|.
A task of device k is represented by a triplet (D_k, C_k, T_k^{max}), where D_k is the amount of task data in bits, C_k is the average number of CPU cycles it takes to process 1 bit of data for that task, i.e., the task complexity, and T_k^{max} is the maximum tolerated latency. After generating a computational task, the device divides the task data volume into D_{k,l}, D_{k,i} and D_{k,e}, which are processed locally, by the D2D device, and by the MEC server, respectively, i.e.
Fig. 1 Physical components of DCM-PO
Fig. 2 Task offloading model
D_k = D_{k,l} + D_{k,i} + D_{k,e}
(1)
Let the CPU frequency assigned to local computation be f_{k,l}; then the local computation latency is

\tau_{k,l}^c = \frac{D_{k,l} C_k}{f_{k,l}}
(2)
Direct use of the MEC server for auxiliary calculation requires data upload, MEC server processing, and result return. Since the volume of the calculation results is relatively small, the return delay is negligible, and the total delay generated by MEC server auxiliary calculation is approximately the sum of the upload delay and the calculation delay, i.e.

\tau_{k,e} = \tau_{k,e}^t + \tau_{k,e}^c
(3)
where

\tau_{k,e}^t = D_{k,e} \Big/ B_{k,e} \log_2\!\left(1 + \frac{P_{k,e} h_{k,e}}{\sigma^2}\right)
(4)
B_{k,e} is the bandwidth of the connection between the task device and the MEC server, P_{k,e} is the transmission power from the task device to the MEC server, h_{k,e} is the channel gain, \sigma^2 is the ambient noise power, and

\tau_{k,e}^c = \frac{D_{k,e} C_k}{f_{k,e}}
(5)
f_{k,e} is the CPU frequency assigned to the task device by the MEC server. Similarly, the transmission delay of the task upload to the D2D device is

\tau_{k,i}^t = D_{k,i} \Big/ B_{k,i} \log_2\!\left(1 + \frac{P_{k,i} h_{k,i}}{\sigma^2}\right)
(6)
When tasks are uploaded to both the D2D device and the MEC server, the maximum transmit power limit cannot be exceeded, i.e.

P_{k,i} + P_{k,e} \le P_k^{max}
(7)
The tasks handled by the D2D device are divided into two parts, namely

D_{k,i} = D_{k,i,l} + D_{k,i,e}
(8)
of which D_{k,i,l} is executed locally by the D2D device using its local CPU, generating a computational delay of

\tau_{k,i,l}^c = \frac{D_{k,i,l} C_k}{f_{k,i,l}}
(9)
Similarly, the total delay generated by the indirect upload from the D2D device to the MEC server is approximated as the sum of the upload delay and the computation delay, i.e.

\tau_{k,i,e} = \tau_{k,i,e}^t + \tau_{k,i,e}^c
(10)
where

\tau_{k,i,e}^t = D_{k,i,e} \Big/ B_{k,i,e} \log_2\!\left(1 + \frac{P_{k,i,e} h_{k,i,e}}{\sigma^2}\right)
(11)
B_{k,i,e} is the bandwidth of the connection between the selected D2D device and the MEC server, P_{k,i,e} is the transmission power from the D2D device to the MEC server, h_{k,i,e} is the channel gain, \sigma^2 is the ambient noise power, and

\tau_{k,i,e}^c = \frac{D_{k,i,e} C_k}{f_{k,i,e}}
(12)
f_{k,i,e} is the CPU frequency assigned to the D2D device by the MEC server. Assuming that the tasks can be divided arbitrarily and that the task blocks can be executed in parallel, the latency model can be represented by the AOE net shown in Fig. 3, where the critical path is the total latency generated by the task device executing one task offload using the D2D device, i.e.

T_{k,i} = \max\left\{ \tau_{k,l}^c,\ \tau_{k,e}^t + \tau_{k,e}^c,\ \tau_{k,i}^t + \tau_{k,i,l}^c,\ \tau_{k,i}^t + \tau_{k,i,e}^t + \tau_{k,i,e}^c \right\}
(13)
In order to meet the time delay requirements, it is necessary to have

T_{k,i} \le T_k^{max}
(14)
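Eqs. (2)–(13) combine into one critical-path calculation; the sketch below uses illustrative numbers (bandwidths, powers, gains and the task split are assumptions, not the paper's simulation parameters):

```python
import math

def upload_delay(data_bits, bandwidth_hz, power_w, gain, noise_w):
    """Shannon-rate transmission delay, as in Eqs. (4), (6), (11)."""
    rate = bandwidth_hz * math.log2(1.0 + power_w * gain / noise_w)
    return data_bits / rate

def compute_delay(data_bits, cycles_per_bit, freq_hz):
    """Computation delay D*C/f, as in Eqs. (2), (5), (9), (12)."""
    return data_bits * cycles_per_bit / freq_hz

def total_delay(D_l, D_ie_l, D_ie_e, D_e, C, f_l, f_il, f_ie, f_e,
                link_d2d, link_k_mec, link_i_mec):
    """Critical path of the AOE net in Fig. 3, i.e. Eq. (13)."""
    D_i = D_ie_l + D_ie_e                     # data sent to the D2D device
    t_local = compute_delay(D_l, C, f_l)
    t_mec = upload_delay(D_e, *link_k_mec) + compute_delay(D_e, C, f_e)
    t_d2d_up = upload_delay(D_i, *link_d2d)
    t_d2d_local = t_d2d_up + compute_delay(D_ie_l, C, f_il)
    t_d2d_mec = (t_d2d_up + upload_delay(D_ie_e, *link_i_mec)
                 + compute_delay(D_ie_e, C, f_ie))
    return max(t_local, t_mec, t_d2d_local, t_d2d_mec)

# Illustrative: a 1 Mbit task split 0.2/0.2/0.2/0.4, C = 1000 cycles/bit,
# every link at 1 MHz with P*h/sigma^2 = 3, i.e. log2(4) = 2 bit/s/Hz.
link = (1e6, 0.1, 3e-5, 1e-6)
T = total_delay(2e5, 2e5, 2e5, 4e5, 1000,
                f_l=1e9, f_il=1e9, f_ie=2e9, f_e=2e9,
                link_d2d=link, link_k_mec=link, link_i_mec=link)
print(round(T, 3))  # 0.4
```

Here the MEC path, the D2D-local path and the D2D-relay path all finish in 0.4 s, so the critical path is 0.4 s and the local branch (0.2 s) is not binding.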
Fig. 3 Delay model
The energy consumption model of DCM-PO is established below. The direct energy consumption of the task device is equal to the sum of the local execution energy consumption, the D2D upload energy consumption and the MEC server upload energy consumption, i.e.

E_k = \delta_k D_{k,l} C_k f_{k,l}^2 + P_{k,i} \tau_{k,i}^t + P_{k,e} \tau_{k,e}^t
(15)
where \delta_k is the energy consumption factor related to the CPU architecture [21]. Since the data reception process is a passive process of receiving electromagnetic waves from space, the energy consumed is much less than that consumed by sending data, so the reception energy consumption of the D2D device is ignored. The direct energy consumption of the D2D device is then approximated as the sum of the energy consumption of the auxiliary computation and of the indirect upload to the MEC server, i.e.

E_i = \delta_i D_{k,i,l} C_k f_{k,i,l}^2 + P_{k,i,e} \tau_{k,i,e}^t
(16)
The computational energy consumption generated by the MEC server is

E_e = \delta_e D_{k,e} C_k f_{k,e}^2 + \delta_e D_{k,i,e} C_k f_{k,i,e}^2
(17)
In summary, the total energy consumption is

E_{k,i,e} = E_k + E_i + E_e
(18)
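Similarly, Eqs. (15)–(18) reduce to one function; all numeric inputs below (energy factors, powers, upload times) are illustrative assumptions:

```python
def total_energy(delta_k, delta_i, delta_e, C,
                 D_l, f_l, D_e, f_e, D_ie_l, f_il, D_ie_e, f_ie,
                 P_ki, t_up_d2d, P_ke, t_up_mec, P_ie, t_up_i_mec):
    """Eqs. (15)-(18): task-device, D2D-device and MEC-server energy."""
    E_k = delta_k * D_l * C * f_l ** 2 + P_ki * t_up_d2d + P_ke * t_up_mec
    E_i = delta_i * D_ie_l * C * f_il ** 2 + P_ie * t_up_i_mec
    E_e = delta_e * (D_e * C * f_e ** 2 + D_ie_e * C * f_ie ** 2)
    return E_k + E_i + E_e

# Toy numbers continuing the delay example above (delta ~ 1e-26 is a
# common order of magnitude for the CPU energy factor, assumed here).
E = total_energy(1e-26, 1e-26, 1e-26, 1000,
                 D_l=2e5, f_l=1e9, D_e=4e5, f_e=2e9,
                 D_ie_l=2e5, f_il=1e9, D_ie_e=2e5, f_ie=2e9,
                 P_ki=0.1, t_up_d2d=0.2, P_ke=0.1, t_up_mec=0.2,
                 P_ie=0.1, t_up_i_mec=0.1)
print(round(E, 2))  # 28.05
```

Note how the f² terms dominate: the MEC server's higher frequency makes its computational energy the largest share, which is exactly the trade-off the multi-objective formulation balances against delay.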
3.2 Problem Definition In this paper, we perform a multi-objective optimization of the mobile edge computing model with D2D collaboration. The optimization variables include D2D device pairing rules, transmit power allocation, CPU frequency allocation, and task assignment. The multi-objective optimization problem is established with minimizing the latency and energy consumption of the edge network, which can be
expressed as the two-objective optimization problem

P1: \min_{\mathbf{D},\mathbf{p},\mathbf{f},\mathbf{u}} \left( \sum_{k \in K} \sum_{i \in I} u_{k,i} T_{k,i,e},\ \sum_{k \in K} \sum_{i \in I} u_{k,i} E_{k,i,e} \right)

(19)

s.t.
C1: 0 \le D \le D_k, \forall D \in \{D_{k,l}, D_{k,i}, D_{k,e}\}
C2: 0 \le D \le D_{k,i}, \forall D \in \{D_{k,i,l}, D_{k,i,e}\}
C3: 0 \le P_{k,i,e} \le P_i^{max}, \forall i \in I
C4: 0 \le f_{k,l} \le f_k^{max}, \forall k \in K
C5: 0 \le f_{k,i,l} \le f_i^{max}, \forall i \in I
C6: \sum_{k \in K} f_{k,e} + \sum_{k \in K} \sum_{i \in I} u_{k,i} f_{k,i,e} \le f_e^{max}
C7: u_{k,i} \in \{0, 1\}
C8: \sum_{k \in K} u_{k,i} \le 1, \forall i \in I
where C1 is a constraint on the direct division of tasks by the task device, guaranteeing that the sum of the three divided task parts equals the output of the application layer. C2 is a constraint on the further division of the tasks arriving at the D2D device, which together with Eq. (8) ensures that the sum of the tasks executed locally by the D2D device and those indirectly uploaded equals the tasks arriving at the D2D device. Eq. (7) constrains the sum of the transmit power of a task device, when uploading tasks to the D2D device and the MEC server, to not exceed the maximum transmit power of that device. C3 limits the upload power of the selected D2D device to not exceed its maximum transmit power. C4 guarantees that the CPU frequency of a task device does not exceed the maximum limit when executing a local task; similarly, C5 and C6 ensure that the CPU frequencies of the D2D device and the MEC server, respectively, do not exceed their maximum limits. Eq. (14) guarantees that the task device can obtain the computation results within the maximum tolerated delay. The pairing variable is defined as
u_{k,i} = \begin{cases} 1, & k \text{ establishes a D2D connection with } i \\ 0, & k \text{ does not establish a D2D connection with } i \end{cases}
To avoid misuse of resources, a task device is limited to selecting at most one D2D device to establish a connection; to keep well-resourced D2D devices from becoming hotspots, C8 limits an idle device to serving at most one task device at a time. The optimization problem is a mixed-integer nonlinear programming problem that cannot easily be solved directly. Weighted sums [10, 12], game theory [17] and convex optimization [15, 18] have been considered effective methods for solving similar optimization problems. In addition, because of the strong global search capability and wide adaptability of intelligent optimization algorithms, and following the literature [19], a new improved algorithm based on MOEA/D is considered here, which can solve multi-objective optimization problems well.
4 Improving MOEAD for Solving Multi-objective Optimization Problems 4.1 MOEAD Algorithm Improvement MOEAD is a classical multi-objective optimization algorithm. Traditional single-objective optimization algorithms can only deal with one objective; when multiple objectives must be optimized simultaneously, a multi-objective optimization algorithm is needed. The MOEAD algorithm obtains an approximate set of globally optimal solutions by decomposing the multi-objective optimization problem into multiple single-objective subproblems and solving these subproblems with an evolutionary algorithm; it is efficient, preserves diversity, is easy to implement, and has been widely applied in many fields. However, as the problem size increases, the computational complexity of the MOEAD algorithm grows sharply, which brings difficulties to its application, so the algorithm needs to be improved. In this paper, the improvement is realized at the level of the algorithm implementation: MOEAD is turned into a parallel-computing variant, called Parallel MOEAD (PMOEAD). In parallel computing, the computational task is assigned to multiple computing nodes for simultaneous processing, thus speeding up the algorithm.
In the parallel computing approach, different computational tasks can be accommodated by different parallel modes, such as data-parallel, model-parallel, and task-parallel computing. This paper uses the data-parallel mode: the data set is divided into multiple subsets, and each subset is processed on a different computing node. The parallel computing mode of the PMOEAD algorithm can be adjusted according to the actual situation, for example by increasing or decreasing the number of computing nodes or by adjusting the data set partitioning, so as to further improve the performance and efficiency of the algorithm.
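The data-parallel mode described above can be sketched by splitting the subproblem set across workers; a thread pool stands in for the separate computing nodes here, and the dummy weighted-sum objective and all names are illustrative:

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def evaluate_subset(subproblems):
    """Stand-in for one computing node: evaluate its subset of MOEA/D
    subproblems (here just a dummy weighted sum of two fixed objectives)."""
    return [w1 * 1.0 + w2 * 2.0 for (w1, w2) in subproblems]

# Decompose into N single-objective subproblems via weight vectors.
N, nodes = 12, 3
weights = [(i / (N - 1), 1 - i / (N - 1)) for i in range(N)]

# Partition the subproblem set and process each part on its own "node".
chunks = np.array_split(np.array(weights), nodes)
with ThreadPoolExecutor(max_workers=nodes) as pool:
    results = [v for part in pool.map(evaluate_subset, chunks) for v in part]
print(len(results))  # 12
```

In a real deployment each chunk would go to a separate process or machine; the pattern (partition, map, concatenate) is the same.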
4.2 Multi-objective Optimization Problem Solving PMOEA/D is a multi-objective optimization algorithm based on a decomposition strategy for solving optimization problems with multiple objective functions. The main steps of the PMOEA/D algorithm include: initializing the population, initializing
the weight vector, calculating the neighbor table, calculating the decomposition value of individuals, crossover and mutation, updating the population and neighbor set, and judging the stopping condition. Algorithm 1 Improved PMOEA/D algorithm for solving multi-objective optimization problems. (1) Initialize the population: a set of individuals is randomly generated as the initial population, where each individual contains multiple decision variables. (2) Initialize the weight vectors: a set of weight vectors is generated randomly, where each weight vector contains multiple weight factors; a uniform distribution is usually used to generate them. (3) Calculate the neighbor table: the set of neighbors of each individual is determined from the distances between the weight vectors. The neighbor table is calculated as follows.

N_i = \{ j \mid d(i, j) \le T \}
(20)
where N_i denotes the set of neighbors of individual i, d(i, j) denotes the Euclidean distance between the weight vectors of individuals i and j, and T denotes the neighbor radius, which usually takes the value 0.1. (4) Calculate the decomposition value of individuals: the objective function value of each individual is decomposed into the optimization objectives of multiple subproblems, and the decomposition value of the individual on each subproblem is calculated. The decomposition strategy can be a weighted-sum approach, in which each objective function value is multiplied by the corresponding weight factor and the results are summed to obtain the individual's decomposition value. The formula is as follows.

f_i(\lambda) = \sum_{j=1}^{m} \lambda_j f_j(x_i)

(21)
where f_i(\lambda) denotes the decomposition value of individual i under the weight vector \lambda, f_j(x_i) denotes the value of the jth objective function at individual x_i, and \lambda_j denotes the jth weight factor of the weight vector \lambda. (5) Crossover and mutation: the population is updated by crossover and mutation operations to generate new individuals; these operations can be performed in the same way as in the standard genetic algorithm. (6) Calculate the decomposition value of the new individuals, in the same way as in step (4). (7) Update the population and the set of neighbors: based on the decomposition value of the new individual and the neighbor set, the population and the set
D2D Cooperative MEC Multi-objective Optimization Based …
of neighbors are updated and the optimal individual is retained. The update can be done by optimal-solution selection, random selection, or nearest-neighbor selection.
(8) Judge the stopping condition: when the preset stopping condition is reached, stop the algorithm and output the result; otherwise, return to step (2) and continue iterating. Usual stopping conditions include reaching the maximum number of iterations, reaching the optimal solution, or the change in function value falling below a threshold.
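The steps above can be sketched compactly in Python. This is a minimal illustration of a weighted-sum MOEA/D loop, not the authors' parallel implementation; the population size, neighbor radius, and the averaging crossover with Gaussian mutation are assumptions chosen for brevity.

```python
import random

def euclid(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def neighbor_table(weights, T):
    # Eq. (20): N_i = { j | d(i, j) <= T }, distances taken between weight vectors
    return [[j for j in range(len(weights)) if euclid(weights[i], weights[j]) <= T]
            for i in range(len(weights))]

def decompose(fx, lam):
    # Eq. (21): weighted-sum decomposition, sum_j lambda_j * f_j(x_i)
    return sum(l * f for l, f in zip(lam, fx))

def moead(objectives, n_pop=21, n_var=2, T=0.15, iters=100, seed=1):
    rng = random.Random(seed)
    # (1) random initial population in [0, 1]^n_var
    pop = [[rng.random() for _ in range(n_var)] for _ in range(n_pop)]
    # (2) uniformly spread weight vectors for two objectives
    weights = [(k / (n_pop - 1), 1 - k / (n_pop - 1)) for k in range(n_pop)]
    nbrs = neighbor_table(weights, T)              # (3)
    F = [objectives(x) for x in pop]               # (4)
    for _ in range(iters):                         # (8) fixed iteration budget
        for i in range(n_pop):
            # (5) crossover (average of two neighbor parents) + Gaussian mutation
            p, q = rng.choice(nbrs[i]), rng.choice(nbrs[i])
            child = [min(1.0, max(0.0, (pop[p][k] + pop[q][k]) / 2 + rng.gauss(0, 0.1)))
                     for k in range(n_var)]
            fc = objectives(child)                 # (6)
            # (7) replace any neighbor the child improves in its subproblem
            for j in nbrs[i]:
                if decompose(fc, weights[j]) < decompose(F[j], weights[j]):
                    pop[j], F[j] = list(child), fc
    return pop, F
```

Each weight vector defines one subproblem; a child generated inside a neighborhood may replace any neighbor whose weighted-sum value it improves, which is what spreads good solutions along the Pareto front.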
5 Simulation Results and Analysis

In an actual network environment, the devices of various heterogeneous networks have different computing and communication capabilities, so most of the simulation parameters are randomly generated within a certain range. Following [20], the distance between the task device and the D2D device is set to [1, 50] m and the distance between the mobile device and the base station to [100, 300] m. The number of task devices is [1, 20], the amount of data for each task is [0.1, 0.5] Mbit, the task complexity follows a normal distribution with mean 1350 and variance 250, the maximum tolerated delay is [1, 2] s, the channel bandwidth of each mobile device is [1, 20] MHz, the bandwidth between mobile devices is [1, 20] MHz, the bandwidth between mobile devices and the base station is [1, 5] MHz, and the ambient noise power and channel gain are set accordingly, where the gain involves a random variable obeying the standard normal distribution and d is the distance between devices.

In the validation of the PMOEAD algorithm, each computational node population and an arbitrary weight vector are randomly generated (p is the computational node population and w is the weight vector), and the PMOEAD multi-objective optimization data-parallel computation scheme is compared with two other multi-objective optimization schemes:

(1) PAES (Pareto Archived Evolution Strategy), a multi-objective optimization algorithm based on evolutionary strategy, in which all tasks are offloaded to the MEC for execution.
(2) JOSR (Joint Optimization of Relay Selection and Resource Allocation), a joint relay-selection and resource-allocation task offloading scheme proposed in [20] with the AP as a relay node, where the AP can perform secondary uploads of tasks but has no computational capability.
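Under the stated assumption that parameters are drawn uniformly from the quoted ranges (and complexity from a normal distribution with mean 1350 and variance 250), the randomized setup can be sketched as follows; the dictionary keys are illustrative names, not the paper's notation.

```python
import random

rng = random.Random(42)  # fixed seed so runs are reproducible

def draw_params(n_tasks):
    # Ranges quoted from the simulation setup above; units as in the text.
    return {
        "d_task_d2d_m": [rng.uniform(1, 50) for _ in range(n_tasks)],     # [1, 50] m
        "d_dev_bs_m":   [rng.uniform(100, 300) for _ in range(n_tasks)],  # [100, 300] m
        "data_mbit":    [rng.uniform(0.1, 0.5) for _ in range(n_tasks)],  # [0.1, 0.5] Mbit
        "complexity":   [rng.gauss(1350, 250 ** 0.5) for _ in range(n_tasks)],  # mean 1350, var 250
        "max_delay_s":  [rng.uniform(1, 2) for _ in range(n_tasks)],      # [1, 2] s
        "bw_d2d_mhz":   [rng.uniform(1, 20) for _ in range(n_tasks)],     # [1, 20] MHz
        "bw_bs_mhz":    [rng.uniform(1, 5) for _ in range(n_tasks)],      # [1, 5] MHz
    }
```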
Figure 4 shows the impact of the average number of tasks on the total latency of the edge network with 5 task devices. When the number of tasks is small, the latency of all three schemes is small; as the number of tasks increases, the latency of JOSR and PAES grows. Since JOSR has no computing capability at the relay, its latency
Q. Zhao et al.
Fig. 4 Latency vs. average number of tasks
is higher and rises faster, while the latency of PAES is much smaller than that of JOSR but still rising. As the number of tasks increases, the latency of PMOEAD first decreases and then tends to rise smoothly, which shows that the JOSR and PAES schemes are more affected by the number of tasks.

Figure 5 shows the effect of the average number of tasks on energy consumption. The energy consumption of all three schemes increases with the number of tasks. PMOEAD is significantly lower when the number of tasks is small, while JOSR is significantly higher than PMOEAD and PAES; the instantaneous average energy consumption curves of JOSR and PAES intersect as the number of tasks grows, and PMOEAD gradually increases following the trend of PAES but remains the lowest. This shows that for the same number of tasks the average energy consumption of PMOEAD is smaller than that of JOSR and PAES.

Figure 6 shows the effect of the number of iterations on latency. PAES has the highest latency for the same number of iterations; when the number of iterations is below 400, the latencies of JOSR and PAES are close, and PMOEAD has the lowest latency. In the PMOEAD scheme, because the data-parallel computing method is used, all task data are distributed to the computing nodes according to the parallelism principle and, given the different weight vectors and neighboring-node paths, task data are distributed starting from the shortest path in preference. The more iterations, the larger the obtained neighbor-to-weight ratio, and the neighboring computing nodes can be found faster in each iteration round,
thus greatly reducing the latency.

Figure 7 compares the total energy consumption against the number of iterations. As shown in the figure, the more iterations JOSR runs, the greater its total energy consumption, while the total energy consumption of
Fig. 5 Energy consumption vs. average number of tasks (Mbit)
Fig. 6 Delay vs. number of iterations
PMOEAD is the lowest for the same number of iterations, and the total energy consumption of PAES lies between PMOEAD and JOSR, so PMOEAD is clearly better than JOSR and PAES. PMOEAD's energy consumption falls back from high to low as the number of iterations increases, climbs to a secondary peak, and then falls back again, but its total energy consumption remains below that of JOSR and PAES. This shows that the number of iterations has a greater impact on JOSR and PAES, with the greatest impact on JOSR, and a smaller impact on PMOEAD.
Fig. 7 Energy consumption vs. number of iterations
6 Conclusion

In this paper, an improved mobile edge computing algorithm under D2D collaboration (PMOEAD) is proposed, which divides task data into multiple subsets at the task devices; each subset is distributed to a different MEC computing node for parallel computation, accelerating computation and improving the performance and efficiency of the algorithm. The multi-objective optimization problem of minimizing the energy consumption and delay of the edge network is established by building a communication model and a computational model. The MOEAD (Multi-Objective Evolutionary Algorithm Based on Decomposition) algorithm, which searches for Pareto-optimal solutions, is then improved to solve the multi-objective optimization problem in a parallel computing mode. Finally, simulation experiments verify the effectiveness of the proposed model.

Acknowledgements National Natural Science Foundation of China (62062007), Guangxi Science and Technology Major Special Fund (Guike AA22068101), Liuzhou Science and Technology Plan Project Fund (2022AAA0103).
References 1. Siriwardhana, Y., Porambage, P., Liyanage, M., et al.: A survey on mobile augmented reality with 5G mobile edge computing: architectures, applications, and technical aspects. IEEE Commun. Surv. Tutorials 23(2), 1160–1192 (2021) 2. Qi, H., Gani, A.: Research on mobile cloud computing: review, trend and perspectives. In: 2012 Second International Conference on Digital Information and Communication Technology and It’s Applications, pp. 195–202. IEEE, NJ (2012)
3. Abbas, N., Zhang, Y., Taherkordi, A., et al.: Mobile edge computing: a survey. IEEE Internet Things J. 5(1), 450–465 (2018) 4. Hu, W.J., Cao, G.H.: Quality-aware traffic offloading in wireless networks. In: Proceedings of the 15th ACM International Symposium on Mobile Ad Hoc Networking and Computing, pp. 277–286. ACM, NY (2014) 5. Li, Y., Sun, L., Wang, W., et al.: Exploring device-to-device communication for mobile cloud computing. 2014 IEEE International Conference on Communications. NJ: IEEE, pp. 2239– 2244 (2014). 6. Bangerter, B., Talwar, S., Arefi, R., et al.: Networks and devices for the 5G era. IEEE Commun. Mag. 52(2), 90–96 (2017) 7. Liu, B., Liu, C., Peng, M., et al.: Resource allocation for energy-efficient MEC in NOMAenabled massive IoT networks. IEEE J. Sel. Areas Commun. 39(4), 1015–1027 (2021) 8. Wang, Y.C., Zhu, H., Hei, X.H., et al.: An energy saving based on task migration for mobile edge computing. J. Wirel. Commun. Netw. 133 (2019) 9. Wang, F., Xu, J., Wang, X., et al.: Joint offloading and computing optimization in wireless powered mobile edge computing systems. IEEE Trans. Wireless Commun. 17(3), 1784–1797 (2018) 10. Liu, S.M., Yu, Y., Guo, L., et al.: Adaptive delay-energy balanced partial offloading strategy in mobile edge computing networks, 29 May 2022. https://www.sciencedirect.com/science/art icle/pii/S2352864822001225 11. He, Y., Ren, J., Yu, G., et al.: D2D communications meet mobile edge computing for enhanced computation capacity in cellular networks. IEEE Trans. Wireless Commun. 18(3), 1750–1763 (2019) 12. Chen, X., Zhang, J. S.: When D2D meets cloud: hybrid mobile task offloadings in fog computing. In: Proceedings of the 2017 IEEE International Conference on Communications, pp. 1–6. IEEE, NJ (2017) 13. Wang, H.P., Lin, Z.P., Lv, T.J., et al.: Energy and delay minimization of partial computing offloading for D2D-assisted MEC systems. In: Proceedings of the 2021 IEEE Wireless Communications and Networking Conference, pp. 1–6. 
IEEE, NJ (2021) 14. Jia, Q.M., Xie, R.C., Tang, Q.Q., et al.: Energy-efficient computation offloading in 5G cellular networks with edge computing and D2D communications. IET Commun. 13(8), 1122–1130 (2019) 15. Cao, X.W., Wang, F., Xu, J., et al.: Joint computation and communication cooperation for mobile edge computing. IEEE Internet Things J. 6(3), 4188–4200 (2018) 16. Wang, C., Qin, H., Yang, X.X., et al.: Energy-efficient offloading policy in D2D underlay communication integrated with MEC service. In: Proceedings of the 3rd International Conference on High Performance Compilation, Computing and Communications, pp.159–164. ACM, NY (2019) 17. Liu, M., Liu, Y.: Price-based distributed offloading for mobile edge computing with computation capacity constraints. IEEE Wireless Commun. Lett. 7(3), 420–423 (2018) 18. Gesualdo, S., Francisco, F., Lorenzo, L., et al.: Parallel and distributed methods for nonconvex optimization-part I: theory. IEEE Trans. Signal Process. 65(8), 1929–1944 (2017) 19. Wang, J.Q.: A modification of MOEA/D for solving multi-objective optimization problems. J. Adv. Comput. Intell. Intel. Inform. 22(2a129) (2018). 20. Deb, K., Pratap, A., Agarwal, S., et al.: A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans. Evol. Comput. 6(2), 182–197 (2002) 21. Chai, R., Lin, J.L., Chen, M.L., et al.: Task execution cost minimization-based joint computation offloading and resource allocation for cellular D2D MEC systems. IEEE Syst. J. 13(4), 4110– 4121 (2019)
Research on Feature Space Migration Fault Diagnosis for Missing Data Signals Ying Zhang, Tingwei Peng, and Ruimin Luo
Abstract Data missing occurs occasionally during actual signal transmission and changes the signal structure. Aiming at this structural change and the resulting loss of data volume, this paper proposes a fault diagnosis method based on integrated multidimensional feature migration in a Bagged-Tree ensemble. Firstly, features are extracted from multiple dimensions based on the sample structure, and a feature subspace oriented to the target domain is constructed by removing redundant features. Secondly, the multidimensional feature space is fed incrementally into the Bagged-Tree ensemble classification model, the sub-classifiers make diagnoses based on sampling, and the ensemble classification layer makes the integrated decision over the sub-classifications. Finally, diagnosis of the target domain is realized with supervised fine-tuning of sample parameters. Experiments on an open data set show that the method can effectively diagnose faults in signals with missing data, and greatly improves operation speed and diagnostic accuracy while greatly reducing the training data set.

Keywords Missing data signals · Transfer learning · Feature space · Ensemble learning
1 Introduction

Missing data in signals usually arises from random packet loss during signal acquisition and transmission. On the one hand, missing data changes the original structure of the signal, creating a large gap between its analysis and the actual working conditions, which has an unpredictable impact on the diagnosis results [1, 2]. On the other hand, research on fault diagnosis of vibration signals with missing data accumulates technical experience in related fields and is of great significance for ensuring production safety. Data missing signals include

Y. Zhang · T. Peng (B) · R. Luo Civil Aviation University of China, Tianjin 300300, China e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 Y. S. Shmaliy and A. Nayyar (eds.), 7th International Conference on Computing, Control and Industrial Engineering (CCIE 2023), Lecture Notes in Electrical Engineering 1047, https://doi.org/10.1007/978-981-99-2730-2_82
Y. Zhang et al.
random missing and non-random missing signals. They are characterized by abrupt, unpredictable deletion patterns that are difficult to distinguish from the complete signal manually; data loss is episodic, and collecting such data in practice is difficult [3, 4]. The technical difficulties of diagnosis lie in the feature extraction method and the classification mode for the missing signal [2]. The migrated multi-dimensional feature-space ensemble classification learning method proposed in this paper uses the transfer learning concept to construct a diagnostic channel that matches the features of missing data signals to fault conditions. The failure mode is matched with three signal dimensions (basic, high-dimensional, and impulse), and the multi-dimensional feature space of the sample is constructed. Experimental results show that this method can effectively strengthen the connection between feature information and diagnostic working conditions and optimize the fault diagnosis process while overcoming the error introduced by missing signals.
2 Fundamental Theories

2.1 Transfer Learning

Transfer learning is a branch of machine learning, divided into four types: sample transfer, feature transfer, model transfer, and relationship transfer [5]. It is widely used in medical imaging, image classification, ergonomics, and other fields [6, 7]. This article applies transfer learning to the field of fault diagnosis, and the approach belongs to model transfer. Transfer learning is the process of transferring what is learned from the source domain Ds to the target domain Dt:

Ds = {X_i, Y_i} → Dt = {X_j, Y_j}  (Ds ≠ Dt, i ≠ j)    (1)

where X_i and Y_i in Eq. (1) represent the ith sample of the source domain and its label, and X_j and Y_j represent the jth sample of the target domain and its label [8, 9].
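Model transfer in the sense of Eq. (1) can be illustrated with a deliberately simple nearest-centroid model: fit on the source domain, then nudge the learned parameters with a few labeled target samples instead of retraining. This is an illustrative sketch, not the paper's Bagged-Tree procedure; the update rule and the `alpha` step size are assumptions.

```python
def fit_centroids(X, y):
    # Model learned on the source domain D_s = {X_i, Y_i}: one centroid per class.
    sums, counts = {}, {}
    for x, label in zip(X, y):
        s = sums.setdefault(label, [0.0] * len(x))
        for k, v in enumerate(x):
            s[k] += v
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in s] for label, s in sums.items()}

def transfer(centroids, Xt, yt, alpha=0.5):
    # Model transfer: move each source centroid part of the way toward the
    # few labeled target samples, rather than refitting on D_t from scratch.
    moved = {label: list(c) for label, c in centroids.items()}
    for x, label in zip(Xt, yt):
        c = moved[label]
        for k, v in enumerate(x):
            c[k] += alpha * (v - c[k])
    return moved

def predict(centroids, x):
    # Assign the class whose centroid is nearest (squared Euclidean distance).
    return min(centroids,
               key=lambda label: sum((a - b) ** 2 for a, b in zip(centroids[label], x)))
```

The point of the sketch is the asymmetry in Eq. (1): the bulk of the fitting happens on Ds, and only a small supervised correction is paid on Dt.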
2.2 Tree Structure Integration Classification Model

The integrated classification decision model with tree structure is derived from the decision tree model, and hierarchical fault diagnosis is realized through the hierarchical decision logic of tree branching. Its core idea is to assemble a variety of sub-classifiers and, using the concept of feature space, feed the multidimensional feature space in incrementally. At the same time, independent diagnosis is carried out following the branching concept of the decision tree; the number of branches and the decision tree parameters are dynamically adjusted with the sample, and
Research on Feature Space Migration Fault Diagnosis for Missing Data …
Fig. 1 Tree structure integration classification model
finally all the sub-diagnosis results are input to the integrated classification layer for comprehensive diagnosis, and the final result is obtained. According to the decision sequence structure designed from the underlying logic of the diagnostic task's input parameters, two types of decision levels are generated, as well as fault types and corresponding degree ranges. As shown in Fig. 1, the first-layer input is the fault type that distinguishes the sample, and the second-layer input confirms sub-properties such as the input characteristics and the degree of fit of the sample. The parentheses indicate the probability prediction for a node under the branch. The feature weights are transferred from the fully connected layer to the subsequent branch nodes and seed nodes, and the nodes are trained by independent random sampling as independent sub-classifiers, which ensures the performance of the model. The weight of branch node i ∈ {A, B, C} is finally determined by the weighted average of its sw_i seed nodes:

W_A = Σ_{j=1}^{5} sw_j,  W_B = Σ_{j=1}^{5} sw_j,  W_C = Σ_{j=1}^{4} sw_j,  with Σ_{i=1}^{5} (x, sw_i) = 1    (2)
The Bagged-Tree ensemble model is a branch of the decision tree model; it uses the sub-decision layers of the tree branch structure to make sequential decisions for parallel diagnosis.
3 Migrated Multidimensional Feature Space Integration Classification Fault Diagnosis Method

Aiming at the diagnostic problem of signals whose time-series structure has changed, a multidimensional feature transfer and integrated classification platform is constructed. The subspace constructed from the basic, high-dimensional, and impulse features of the complete signal is used as the training source domain; high-quality migration features that make faults easy to identify are ranked, redundant interfering features are eliminated, and the migration feature space is constructed. The model is analyzed and studied through the following steps, and Fig. 2 shows the process framework of the method:
3.1 Troubleshooting

The specific steps are as follows:

Step 1: Build the multidimensional feature space of the source domain, migrating sample features whose monotonicity tracks the fault development trend.
Step 2: Based on the diagnostic goal, set the maximum number of split branches, the number of sub-decision trees, the learning rate, and the corresponding maximum number of splits for the Bagged-Tree ensemble classification model.
Step 3: Input the source-domain multidimensional feature space and train the diagnosis model with cross-validation.
Step 4: Build the target-domain multidimensional feature space.
Step 5: Migrate the target-domain multidimensional feature space into the trained model for diagnosis.
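Steps 1 and 4 both require turning each raw sample into a multidimensional feature vector. A minimal sketch of such an extractor follows; the particular features (mean, standard deviation, RMS, peak, crest factor, kurtosis) are a plausible basic/impulse subset and an assumption, not the paper's exact feature set.

```python
def features(signal):
    # Map one raw (possibly incomplete) sample to a fixed-length feature vector.
    n = len(signal)
    mean = sum(signal) / n
    var = sum((v - mean) ** 2 for v in signal) / n
    std = var ** 0.5
    peak = max(abs(v) for v in signal)                       # impulse-type feature
    rms = (sum(v * v for v in signal) / n) ** 0.5
    crest = peak / (rms + 1e-12)                             # peak-to-RMS ratio
    kurt = (sum((v - mean) ** 4 for v in signal) / n) / (var ** 2 + 1e-12)
    return [mean, std, rms, peak, crest, kurt]
```

Because every sample, complete or missing, is mapped to the same fixed-length vector, source-domain and target-domain samples land in a common feature space, which is what makes the migration in Step 5 possible.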
Fig. 2 Method flow framework
4 Experimental Analysis

4.1 Experimental Data

The experimental data come from the Case Western Reserve University rolling bearing dataset. The test bench consists of a 1.5 kW motor, a torsion sensor, a power tester, and an electronic controller. The data are from the SKF 6205 drive-end bearing; the failure mode is EDM single-point damage, with a selected fault point diameter of 0.5334 mm. The damage locations are the inner ring of the rolling bearing, the rolling element, and the outer ring at the six o'clock (center), three o'clock (orthogonal), and twelve o'clock (opposite) positions, corresponding to five types of failure modes. The bearing sampling frequency is 12 kHz. The classification of diagnostic fault samples with gradient deletion is given in Table 1.

Table 1 A detailed classification of the fault vibration signal dataset
| Fault category | Data set | Classified samples | Proportion of random data missing | Proportion of non-random data missing | Missing gradient |
| Normal operating conditions | 10 × 2000 | 20,000 | 0–50% | 0–50% | 5% |
| Inner ring failure | 10 × 2000 | 20,000 | 0–50% | 0–50% | 5% |
| Rolling element failure | 10 × 2000 | 20,000 | 0–50% | 0–50% | 5% |
| Outer ring fault, center direction (six o'clock) | 10 × 2000 | 20,000 | 0–50% | 0–50% | 5% |
| Outer ring fault, orthogonal direction (three o'clock) | 10 × 2000 | 20,000 | 0–50% | 0–50% | 5% |
| Outer ring fault, opposite direction (twelve o'clock) | 10 × 2000 | 20,000 | 0–50% | 0–50% | 5% |
| Sum | 60 × 2000 | 120,000 | 0–50% | 0–50% | 5% |
4.2 Comparative Experiment of Gradient Missing Data Features

This experiment compares diagnostic accuracy under the two deletion forms across gradient conditions; the results are shown in Table 2. The diagnostic accuracy on missing data is high when the deletion rate is below 35%; once it reaches 40% or more, the diagnostic accuracy on non-random missing data decreases by 1.7%–4.7%, while random missing data stays above 98.3% accuracy. Comparing the diagnosis results of the classified samples over the 0–50% gradient grouping of the two missing forms, the method effectively handles signals with information loss and structural change, achieving high diagnostic accuracy with strong robustness to the multi-dimensional characteristics of sample migration. Figure 3 shows the scatterplots of the migration features and the confusion matrix comparison.

Taking Fig. 3 as an example, the scatter plot shows the degree of correlation between sample features and the corresponding working conditions; the distribution of each fault type in the feature scatterplot shows a strong correlation under this training condition. Comparing the scatterplots of missing-data signals and intact signals, the different fault modes of the complete signal are clearly distinguishable and their feature clusters are tight, while for the missing signal the minimum distance between working-condition features increases as the missing proportion grows. Especially for the outer ring fault (blue in the figure), the scatter area is wider, the spread between feature points is large, and the nearest-neighbor distance is larger than for other working conditions.
Table 2 Accuracy in diagnosing gradient missing data

| Missing gradient | Target domain accuracy (%) | Missing gradient | Target domain accuracy (%) |
| Complete data | 100 | Complete data | 100 |
| RL10 | 98.3 | URL10 | 100 |
| RL15 | 98.5 | URL15 | 100 |
| RL20 | 97.0 | URL20 | 100 |
| RL25 | 98.3 | URL25 | 100 |
| RL30 | 100 | URL30 | 98.3 |
| RL35 | 100 | URL35 | 100 |
| RL40 | 98.3 | URL40 | 96.7 |
| RL45 | 98.3 | URL45 | 95.3 |
| RL50 | 100 | URL50 | 98.3 |

From this, it can be concluded that the intact signal is
Fig. 3 Gradient random missing sample accuracy scatterplots and confusion matrices: (A) complete data; (B) 10% random missing; (C) 20% random missing; (D) 30% random missing; (E) 40% random missing; (F) 50% random missing
more feature-distinct than the missing signal. As the proportion of missing data increases, the distribution distance of the working-condition features on the scatter plot grows correspondingly, with the degree of difference ordered as normal signal > outer ring fault > rolling element > inner ring fault. From the confusion matrix, the overall accuracy is high, and the errors mainly manifest as misclassification of the working condition of a feature.
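The two deletion patterns studied here can be reproduced synthetically for experimentation. The sketch below takes one possible reading, which is an assumption: random missing drops points independently (packet-loss style), while non-random missing removes one contiguous burst.

```python
import random

def random_missing(signal, ratio, seed=0):
    # Random deletion: a uniformly chosen subset of points is dropped,
    # keeping the surviving samples in their original order.
    rng = random.Random(seed)
    keep = max(1, round(len(signal) * (1 - ratio)))
    idx = sorted(rng.sample(range(len(signal)), keep))
    return [signal[i] for i in idx]

def burst_missing(signal, ratio, seed=0):
    # Non-random deletion: one contiguous burst of samples is lost.
    rng = random.Random(seed)
    gap = round(len(signal) * ratio)
    start = rng.randrange(len(signal) - gap + 1)
    return signal[:start] + signal[start + gap:]
```

Applying these to complete benchmark records yields the RL/URL gradient groups of Table 2 style experiments without needing genuinely corrupted acquisitions.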
4.3 Comparative Experiment Between Migration Features and Non-migration Features

Figure 4 compares side by side the diagnostic results of migration and non-migration features under random and non-random deletion. The most severe accuracy fluctuation occurs for the non-migration features under random missing data, with a fluctuation range of 31.6%. The diagnostic accuracy of migration features is high and fluctuates little as the proportion of missing data changes, while that of non-migration features is low and fluctuates widely. It can be concluded that the diagnostic accuracy of the sample reaches a high level and is strongly robust to the multi-dimensional characteristics of sample migration.

Fig. 4 Diagnostic results of migration and non-migration features
Table 3 Comparison of results with other methods

| Groups | DT | GNB | SVM | KNN | Bagged-tree | SDC | EDC | DNN |
| Complete | 80.0% | 100.0% | 21.7% | 16.7% | 100.0% | 96.7% | 68.3% | 16.7% |
| RL10 | 83.3% | 95.0% | 11.7% | 16.7% | 98.3% | 95.0% | 43.3% | 16.7% |
| RL20 | 78.3% | 91.7% | 11.7% | 16.7% | 98.3% | 95.0% | 35.0% | 16.7% |
| RL30 | 75.0% | 100% | 96.7% | 80.0% | 100.0% | 98.3% | 93.3% | 81.7% |
| RL40 | 76.7% | 100% | 13.3% | 16.7% | 98.3% | 98.3% | 48.3% | 16.7% |
| RL50 | 80.0% | 95.0% | 23.3% | 16.7% | 96.7% | 96.7% | 41.7% | 16.7% |
| URL10 | 81.7% | 98.3% | 25.0% | 16.7% | 100.0% | 100.0% | 56.7% | 16.7% |
| URL20 | 80.0% | 98.3% | 25.0% | 16.7% | 100.0% | 100.0% | 56.7% | 16.7% |
| URL30 | 80.0% | 98.3% | 98.3% | 100.0% | 100.0% | 100.0% | 90.0% | 95.0% |
| URL40 | 80.0% | 98.3% | 21.7% | 16.7% | 93.3% | 91.7% | 46.7% | 16.7% |
| URL50 | 80.0% | 86.7% | 23.3% | 16.7% | 98.3% | 98.3% | 81.7% | 16.7% |
4.4 Multi-model Comparative Experiments

To compare diagnostic performance across classifiers under the same diagnostic conditions, the Bagged-tree ensemble classifier is compared with Decision Trees (DT), Gaussian Naïve Bayes (GNB), Support Vector Machines (SVM), K-Nearest Neighbor classifiers (KNN), Subspace Discriminant Classifiers (SDC), Ensemble Discriminant Classifiers (EDC), and Deep Neural Networks (DNN) at the same random deletion ratios; the results are shown in Table 3. Based on the analysis of Table 3, the principles of the support vector machine and the nearest-neighbor classifier rest on linear distance relationships in the feature space, so their diagnostic effect varies with the data-missing conditions, resulting in poor diagnostic efficacy. Compared with the other classification models such as Bayes, decision trees, and neural networks, the Bagged-tree ensemble classifier is more stable, with more stable overall accuracy and a higher degree of optimization.
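A comparison like Table 3 reduces to fitting each model on the same training set and scoring it on the same held-out set. The harness below is generic; the two toy baselines (a majority-class predictor and 1-nearest-neighbor) merely stand in for the real classifiers, which would plug into the same `fit(X, y) -> predict(x)` interface.

```python
def compare(models, train, test):
    # models: name -> fit function; fit(X, y) returns a predict(x) callable.
    Xtr, ytr = train
    Xte, yte = test
    scores = {}
    for name, fit in models.items():
        predict = fit(Xtr, ytr)
        scores[name] = sum(predict(x) == label for x, label in zip(Xte, yte)) / len(yte)
    return scores

def majority_fit(X, y):
    # Baseline: always predict the most frequent training label.
    label = max(set(y), key=y.count)
    return lambda x: label

def nn1_fit(X, y):
    # Baseline: predict the label of the nearest training sample.
    def predict(x):
        i = min(range(len(X)),
                key=lambda j: sum((a - b) ** 2 for a, b in zip(X[j], x)))
        return y[i]
    return predict
```

Keeping the train/test split fixed across all entries is what makes the per-classifier accuracies in a table like Table 3 directly comparable.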
4.5 Comparative Test Under Variable Load

The experimental background of Sects. 4.2–4.4 is the ideal no-load condition, which is almost never met when the method is applied in actual production. Therefore, to verify the generalization ability of the method, experimental data with different loading conditions (1–3 horsepower) were used. For the random missing and non-random missing data signals introduced above, the preceding experiments showed no obvious difference between the two. In
Table 4 Experimental results under variable loading

| HP | 0 | 1 | 2 | 3 | Migration feature source |
| complete | 100% | 96.7% | 100% | 100% | NO |
| RL5 | 98.3% | 96.7% | 98.3% | 100% | complete → RL5 |
| RL10 | 98.3% | 93.3% | 100% | 100% | complete → RL10 |
| RL15 | 98.5% | 98.3% | 100% | 100% | complete → RL15 |
| RL20 | 97.0% | 90% | 100% | 98.3% | complete → RL20 |
| RL25 | 98.3% | 93.3% | 100% | 93.3% | complete → RL25 |
| RL30 | 100% | 98.3% | 100% | 90% | complete → RL30 |
| RL35 | 100% | 90% | 98.3% | 90% | complete → RL35 |
| RL40 | 98.3% | 85% | 96.7% | 90% | complete → RL40 |
| RL45 | 98.3% | 85% | 90% | 85% | complete → RL45 |
| RL50 | 100% | 88.3% | 96.7% | 85% | complete → RL50 |
| Mean | 98.81% | 92.26% | 98.18% | 93.78% | 95.76% |
the experiments in this section, the non-random deletion phenomenon is treated as a special case of the random deletion phenomenon and analyzed jointly; the results are shown in Table 4. As Table 4 shows, the average accuracy of the method under 0–3 HP load is 95.76%; the 0 HP result is the best at 98.81%, which is 6.55% higher than the worst case, the 1 HP load. Under 2 HP load the diagnostic results fluctuate least and are most stable, while the largest fluctuation occurs under 3 HP load, with an accuracy fluctuation range of 0–15%. Comparing the four load conditions, the overall diagnosis rate shows a downward trend over the 0–50% random deletion range, with an overall cut-off at a 30% missing value: within 0–30% missing values, all load conditions maintain good diagnostic efficiency with no obvious downward trend, while in the 30%–50% stage the diagnostic efficiency decreases significantly and tends to fall monotonically. It can be concluded that under variable load, with 2000 sets of data as samples, the target diagnostic effect can be achieved when the missing data stay within 30%.
5 Conclusion

Based on the experimental comparison and analysis on the dataset, the Bagged-Tree integrated diagnosis after migrating the multi-dimensional feature space can maintain a high accuracy rate under both random and non-random data missing, and the features of the corresponding working conditions correlate well on the scatter
plot. Satisfactory accuracy was also achieved under the comparison conditions of different gradient missing data in the follow-up experiments, indicating that the method has good robustness. Compared with other methods, the integrated diagnostics achieve a high diagnostic rate under low-sample-data conditions. Under variable load, the accuracy threshold of the method is a 30% missing value, beyond which the diagnostic ability shows a downward trend.
References 1. Ouma, Y.O., Cheruyot, R., Wachera, A.N.: Rainfall and runoff time-series trend analysis using LSTM recurrent neural network and wavelet neural network with satellite-based meteorological data: case study of Nzoia hydrologic basin. Complex Intell. Syst. 8, 213–236 (2022) 2. Hoque, N., Singh, M., Bhattacharyya, D.K.: EFS-MI: an ensemble feature selection method for classification. Complex Intell. Syst. 4, 105–118 (2018) 3. Fernández, A., del Río, S., Chawla, N.V., Herrera, F.: An insight into imbalanced big data classification: outcomes and challenges. Complex Intell. Syst. 3(2), 105–120 (2017) 4. Kashyap, H., Ahmed, H.A., Hoque, N., Roy, S., Bhattacharyya, D.K.: Big data analytics in bioinformatics: a machine learning perspective. arXiv preprint arXiv:1506.05101 (2015) 5. Shafieian, S., Zulkernine, M.: Multi-layer stacking ensemble learners for low footprint network intrusion detection. Complex Intell. Syst. (2022) 6. Shao, M., Kit, D., Fu, Y.: Generalized transfer subspace learning through low-rank constraint. Int. J. Comput. Vis. 109, 74–93 (2014) 7. Hsu, H.H., Hsieh, C.W., Lu, M.D.: Hybrid feature selection by combining filters and wrappers. Expert Syst. Appl. 38(7), 8144–8150 (2011) 8. Coble, J.: Incorporating data sources into predicting remaining useful lives – an automated method for identifying prognostic parameters. Doctoral dissertation. University of Tennessee, Knoxville, TN (2010) 9. Lei, Y.: Smart faults Diagnostics and remaining service life prediction of rotating machinery. Xi’an Jiaotong University Press, Xi’an, China (2017)
A Performance Comparison Among Intelligent Algorithms for Solving Capacitated Vehicle Routing Problem Jingchen Li and Jie Tang
Abstract To better solve the vehicle routing problem (VRP), the solution processes of three algorithms, ant colony optimization, particle swarm optimization and simulated annealing, are introduced and their performance is compared. Three customer sizes from the Solomon data set are selected to evaluate the solution performance of these algorithms. Results show that particle swarm optimization performs unsatisfactorily on the capacitated vehicle routing problem (CVRP) at all scales; simulated annealing can obtain the optimal solution on small and medium-sized instances; and ant colony optimization has the highest comprehensive evaluation for solving CVRP of all sizes. The results provide a reference for selecting algorithms to solve VRP with capacity constraints. Keywords Capacitated vehicle routing problem · Intelligent algorithms · Solution performance
1 Introduction The Vehicle Routing Problem (VRP) is a mathematical model abstracted from vehicle distribution. In 1959, Dantzig and Ramser proposed a mathematical model of vehicle scheduling based on the routing problem of a fleet of trucks delivering gasoline to service stations [1]. Clarke and Wright then extended this model to logistics and transportation problems [2]. Optimizing logistics and transportation can effectively reduce enterprise costs: Toth and Vigo pointed out that computerized transportation planning can save up to 20% of total expenditure [3]. The VRP is an NP-hard problem, and intelligent optimization algorithms are the most widely applied methods for solving it [4]. Braekers et al. point out that 71.25% of related research between 2009 and 2015 used intelligent optimization algorithms to solve VRP problems [5].
J. Li (B) · J. Tang, Beijing University of Technology, 100 Pingleyuan, Beijing, China. e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023. Y. S. Shmaliy and A. Nayyar (eds.), 7th International Conference on Computing, Control and Industrial Engineering (CCIE 2023), Lecture Notes in Electrical Engineering 1047, https://doi.org/10.1007/978-981-99-2730-2_83
The solving
process of an intelligent optimization algorithm is stochastic. As a result, the algorithm usually obtains an approximate rather than an optimal solution, and the quality of the result depends on the algorithm adopted. The objective of this paper is to describe the calculation processes of three algorithms for VRP: particle swarm optimization, simulated annealing and ant colony optimization. To evaluate the performance of the three algorithms, Solomon test sets are selected as samples. The results provide a reference for selecting among the three algorithms when applied to VRP.
2 Capacitated Vehicle Routing Problem The capacitated vehicle routing problem (CVRP) is the basic type of VRP, and research on CVRP also benefits its other variants [6]. The constraints are as follows: a. Each vehicle starts from a single depot and finally drives back to the depot; b. Vehicles must visit all customers, and each customer is visited exactly once; c. The load of each vehicle must not exceed its capacity. Solving the CVRP means planning paths that optimize the objective, typically minimizing total travel distance.
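The three constraints above can be sketched as a feasibility check. This is a hedged illustration, not code from the paper; the function name, the depot index 0 and the demand values are invented for the example.

```python
# A minimal sketch of checking the CVRP constraints: every route starts and
# ends at the depot (node 0), no route exceeds the vehicle capacity, and
# every customer is visited exactly once.
def is_feasible(routes, demand, capacity, n_customers):
    visited = []
    for route in routes:
        if route[0] != 0 or route[-1] != 0:       # constraint (a): depot start/end
            return False
        load = sum(demand[c] for c in route[1:-1])
        if load > capacity:                        # constraint (c): capacity limit
            return False
        visited.extend(route[1:-1])
    # constraint (b): each customer visited exactly once
    return sorted(visited) == list(range(1, n_customers + 1))

# Two routes serving customers 1..4 from depot 0
demand = {1: 10, 2: 20, 3: 15, 4: 5}
print(is_feasible([[0, 1, 2, 0], [0, 3, 4, 0]], demand, 30, 4))      # True
print(is_feasible([[0, 1, 2, 3, 0], [0, 4, 0]], demand, 30, 4))      # False: load 45 > 30
```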
3 Intelligent Optimization Algorithms 3.1 Particle Swarm Optimization Particle swarm optimization (PSO) was first proposed by Kennedy and Eberhart in 1995. PSO is a population intelligence algorithm [7] inspired by the cooperative behavior of bird flocks seeking food. The algorithm first initializes a group of particles randomly distributed in the search space; the population size is named 'popsize'. The position update of each particle depends on the best solution found by the particle itself and the best solution found by the whole swarm. In each iteration, every particle moves toward better regions known to the group to improve group fitness [8]. The dimension of the particle is denoted m, and the maximum number of iterations is named 'maximum'. The flight speed and position of each particle in the kth generation are vk and xk. The individual best and population best solutions of the kth generation are pbestk and gbestk. The updating rules are given as Eqs. (1) and (2):
vk+1 = ω·vk + c1·rand1·(pbestk − xk) + c2·rand2·(gbestk − xk)    (1)

xk+1 = xk + vk+1    (2)
where ω is the inertia weight coefficient, which controls how much speed is retained between iterations, and c1 and c2 are the learning factors of the algorithm. The acceleration weights rand1 and rand2 are random numbers uniformly distributed in [0, 1]. Particles have a maximum flight speed, Vmax, to limit the distance change in each iteration.
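The update rules of Eqs. (1) and (2) can be sketched as below. This is an illustrative sketch, not the authors' implementation; the parameter values (omega, c1, c2, vmax) are assumed, not taken from the paper.

```python
import random

# One PSO step: Eq. (1) updates the velocity from the inertia term and the
# attraction toward pbest and gbest; Eq. (2) moves the particle.
def pso_step(x, v, pbest, gbest, omega=0.7, c1=1.5, c2=1.5, vmax=4.0):
    new_x, new_v = [], []
    for xi, vi, pi, gi in zip(x, v, pbest, gbest):
        r1, r2 = random.random(), random.random()            # rand1, rand2 in [0, 1]
        vi = omega * vi + c1 * r1 * (pi - xi) + c2 * r2 * (gi - xi)  # Eq. (1)
        vi = max(-vmax, min(vmax, vi))                        # clamp to Vmax
        new_v.append(vi)
        new_x.append(xi + vi)                                 # Eq. (2)
    return new_x, new_v
```

For a routing problem such as CVRP, the continuous positions are then typically decoded into customer sequences, for example by ranking the position components.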
3.2 Simulated Annealing Algorithm Simulated annealing (SA) is an algorithm that simulates the metal annealing process; Metropolis et al. first described the method in mathematical language [9]. SA is a stochastic optimization algorithm that can accept a worse solution with a probability related to temperature. The main parameters of SA include the initial temperature T0, the termination temperature Te, and the annealing factor α, which affects the convergence speed of the algorithm. The Metropolis criterion governs whether a new solution is inherited. If the previous state is xk, the system is disturbed and the state becomes xk+1; accordingly, the system energy changes from fk to fk+1. Eq. (3) is the Metropolis criterion:

P = 1,                     if fk+1 < fk
P = e^((fk − fk+1)/T),     if fk+1 ≥ fk    (3)

where P is the probability of accepting the transition from state xk to xk+1 and T is the current temperature. As the temperature decreases, the probability of the second case slowly decreases until a worse solution is almost never accepted.
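The acceptance rule of Eq. (3) and the geometric cooling schedule can be sketched as follows; the temperature values and annealing factor are illustrative assumptions, not the paper's settings.

```python
import math
import random

# Metropolis acceptance of Eq. (3): always accept an improvement, accept a
# worse solution with probability exp((f_k - f_k1) / T).
def accept(f_k, f_k1, T):
    if f_k1 < f_k:
        return True
    return random.random() < math.exp((f_k - f_k1) / T)

# Cooling schedule: T is multiplied by the annealing factor alpha each step
# until it falls below the termination temperature Te (illustrative values).
T0, Te, alpha = 1000.0, 1e-3, 0.95
```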
3.3 Ant Colony Optimization Algorithm Ant colony optimization (ACO) was proposed by Dorigo in 1996, inspired by the foraging behavior of ant colonies. When an ant passes along a path, it releases chemicals called pheromones. Ants prefer paths with high pheromone concentrations and in turn deposit pheromone on the paths they choose; this is a positive feedback process, so after some time the ants concentrate on the shortest path they have found. The number of ants in the colony is n. The probability pij of an ant moving from point i to point j is calculated according to Eq. (4) [10]:
912
J. Li and J. Tang
pij = τij(k)^α · ηij(k)^β / Σ_{s∈allowed} τis(k)^α · ηis(k)^β,   if j ∉ tabu
pij = 0,                                                        if j ∈ tabu    (4)
where α is the pheromone heuristic factor, which affects the randomness of the ants' path finding, and β is the expectation heuristic factor, which affects the weight of deterministic factors in the ants' path searching. τ is the pheromone concentration on the path. tabu is the set of nodes the ant has already visited: when an ant passes a customer point, it puts this point into the tabu table to avoid repeated visits. η is the heuristic information, usually set to the reciprocal of the distance between two points. The pheromone on each path is updated after the ant colony completes a path search, governed by the pheromone volatilization factor ρ.
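The transition rule of Eq. (4) can be sketched as below. Roulette-wheel sampling is a common way to draw from pij, assumed here rather than stated in the paper; the function name and data layout are likewise illustrative.

```python
import random

# Eq. (4): the probability of moving from node i to node j weighs pheromone
# tau^alpha against heuristic eta^beta (eta = 1/distance) over all nodes not
# yet in the tabu list, i.e. the `unvisited` candidates.
def next_node(i, unvisited, tau, dist, alpha=1.0, beta=2.0):
    weights = [(tau[i][j] ** alpha) * ((1.0 / dist[i][j]) ** beta)
               for j in unvisited]
    total = sum(weights)
    # roulette-wheel selection proportional to p_ij
    r, acc = random.random() * total, 0.0
    for j, w in zip(unvisited, weights):
        acc += w
        if acc >= r:
            return j
    return unvisited[-1]
```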
4 Performance Tests To compare the solution performance of the algorithms, the optimal solution and the average of all solutions obtained by each algorithm in the same running time are selected as evaluation indicators. Three scales of VRP instances are solved in the experiment: the first 25, the first 50 and all 100 customer points of five examples from the Solomon data set. The time window constraint of the data set is not considered in the tests, and the distance between two points is the Euclidean distance.
4.1 Experimental Data The computer configuration is as follows: Intel(R) Core(TM) i7-7500U CPU @ 2.70 GHz, Windows 10 operating system, 8 GB memory. Python (in PyCharm Community Edition 2020.3.3) is used for the simulation. The optimal solutions (Best) and average solutions (Average) obtained by the three algorithms are shown in Tables 1, 2 and 3. The running time of each algorithm is limited to 10 s, 25 s or 100 s depending on the size of the data set.
4.2 Results Analysis Figures 1 and 2 show the optimal solutions and the average solutions of the results respectively. It can be seen that for small and medium scale CVRP, the solution performance of SA is basically the same as that of ACO. The results of particle swarm optimization are worse than those of the other two algorithms
Table 1 Experimental results of 25 customer sites

Data set | ACO Best | ACO Average | SA Best | SA Average | PSO Best | PSO Average
C101 | 293.11 | 297.25 | 288.24 | 292.25 | 293.66 | 311.56
C201 | 372.02 | 377.05 | 359.85 | 368.16 | 387.47 | 409.74
R101 | 408.02 | 414.44 | 397.72 | 413.00 | 508.70 | 558.45
R201 | 405.40 | 418.93 | 398.52 | 412.40 | 494.10 | 539.63
RC101 | 508.16 | 509.63 | 506.58 | 507.91 | 508.23 | 530.93
Table 2 Experimental results of 50 customer sites

Data set | ACO Best | ACO Average | SA Best | SA Average | PSO Best | PSO Average
C101 | 600.07 | 607.14 | 588.06 | 630.75 | 645.73 | 672.24
C201 | 727.72 | 730.37 | 704.31 | 733.86 | 810.67 | 844.84
R101 | 771.18 | 785.08 | 720.54 | 775.12 | 1085.53 | 1193.79
R201 | 760.39 | 787.29 | 737.75 | 766.66 | 1096.94 | 1177.87
RC101 | 928.67 | 938.63 | 913.14 | 943.04 | 1219.69 | 1279.64
Table 3 Experimental results of 100 customer sites

Data set | ACO Best | ACO Average | SA Best | SA Average | PSO Best | PSO Average
C101 | 1453.64 | 1485.06 | 1499.89 | 1637.33 | 1542.29 | 1614.30
C201 | 1555.9 | 1597.07 | 1535.8 | 1702.84 | 1768.81 | 1884.92
R101 | 1300.39 | 1347.02 | 1331.20 | 1442.64 | 2067.46 | 2366.25
R201 | 1316.54 | 1344.54 | 1334.83 | 1432.17 | 2064.14 | 2201.02
RC101 | 1709.62 | 1742.66 | 1821.17 | 1968.57 | 2484.57 | 2703.44
since the optimal and average solutions obtained by PSO have higher objective function values. In large-scale CVRP, however, the optimal and average solutions of the ant colony algorithm on each data set are significantly smaller than those of the other two algorithms. The solution performance of PSO on C101 and C201 is better than on R101, R201 and RC101. Comparing the optimal and average solutions on each data set, ACO obtains good solutions on small and medium-sized CVRP and has a strong ability to solve large-scale CVRP, so ACO has the best overall performance among the three algorithms.
Fig. 1 Optimal solutions of experimental results obtained by three algorithms
Fig. 2 Average solutions of experimental results obtained by three algorithms
5 Conclusion This paper selects Solomon data sets with different sizes and distributions to compare the performance of the ACO, SA and PSO algorithms in solving the CVRP. By comparing the optimal and average solutions of the results, it can be found that the performances of the three algorithms differ, and
the size of the problem affects algorithm performance. The conclusions are as follows: 1) PSO is not satisfactory for solving CVRP at any scale; 2) SA has a simple structure and a better ability to find the optimal solution at small and medium scales; 3) ACO has higher algorithmic complexity but a strong ability to solve CVRP of all scales with high accuracy, and its comprehensive evaluation is the highest among the three algorithms.
References
1. Dantzig, G.B., Ramser, J.H.: The truck dispatching problem. Manage. Sci. 6(1), 80–91 (1959)
2. Clarke, G., Wright, J.W.: Scheduling of vehicles from a central depot to a number of delivery points. Oper. Res. 12(4), 568–581 (1964)
3. Toth, P., Vigo, D.: The Vehicle Routing Problem. SIAM Monographs on Discrete Mathematics and Applications. Society for Industrial and Applied Mathematics (2001)
4. Tan, S.Y., Yeh, W.C.: The vehicle routing problem: the state of the art classification and review. Appl. Sci. 11(21), 10295 (2021)
5. Braekers, K., Ramaekers, K., Van Nieuwenhuyse, I.: The vehicle routing problem: state of the art classification and review. Comput. Ind. Eng. 99, 300–313 (2016)
6. Jia, Y.H., Mei, Y., Zhang, M.: A bilevel ant colony optimization algorithm for capacitated electric vehicle routing problem. IEEE Trans. Cybern. 4, 1–14 (2021)
7. Kennedy, J., Eberhart, R.: Particle swarm optimization. In: Proceedings of ICNN'95 - International Conference on Neural Networks, vol. 4, pp. 1942–1948. IEEE (1995)
8. Zhang, L.Y.: Application of particle swarm optimization in vehicle routing problem. Shanghai Jiaotong University, Shanghai (2006). (in Chinese)
9. Metropolis, N., Rosenbluth, A.W., Rosenbluth, M.N., et al.: Equation of state calculations by fast computing machines. J. Chem. Phys. 21(6), 1087–1092 (1953)
10. Dorigo, M., Maniezzo, V., Colorni, A.: Ant system: optimization by a colony of cooperating agents. IEEE Trans. Syst. Man Cybern. Part B 26(1), 29–41 (1996)
Life Evaluation and Optimization of Equipment in the Production Systems Peng Yu
Abstract In current practice, equipment life determination is usually simple and rough, and a practice that does not reflect the real life of the equipment leads to inappropriate decisions. Sometimes only technical conditions are considered when deciding whether a piece of equipment should be retired. However, equipment plays its role within a production system: it cannot fulfill the task by itself, and its life cannot be decided by its own condition alone. Equipment in a production system has various forms of life, such as natural life, technical life, economic life and system life, and the relationships between these forms should be considered when calculating the life. In particular, the contribution to the system is an important factor in life determination: only when the equipment's contribution becomes an obstacle to the progress of the production system should it be retired. Therefore a life evaluation model that maximizes the operational effectiveness of the production system is proposed, and a case is used to validate the model. Based on the results, this study presents a strategy for optimizing the management of the production system. Keywords Life Evaluation · Operational Effectiveness · Marginal Contribution
1 Introduction In current equipment management practice, life determination is simple and rough. The empirical approach is the main method of life calculation; however, it is linear, passive and isolated, and lacks the support of a complete theoretical system and a refined management system. Usually, only the technical condition of the individual equipment itself is considered: when the equipment fails or is about to fail, if it is not worth maintaining, it is considered to have reached the end of its life.
P. Yu (B), Naval University of Engineering, Jiefang Avenue 717, Wuhan, China. e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023. Y. S. Shmaliy and A. Nayyar (eds.), 7th International Conference on Computing, Control and Industrial Engineering (CCIE 2023), Lecture Notes in Electrical Engineering 1047, https://doi.org/10.1007/978-981-99-2730-2_84
Then replacement becomes the only
choice, and the span of time the equipment has been used is its life, which is called natural life. As long as the equipment has restorative value, it can continue to be used, which means its life is not over. In fact, some equipment can still be repaired and used, but as more advanced equipment becomes available it may be taken out of service; in this case the life of the equipment is shorter than the natural life and is called technical life. Sometimes the equipment can be repaired and used but is not worth maintaining, so replacement with new equipment is more cost-effective; the life in use is then called economic life. The generic method of determining when to update equipment applies the concepts of natural life, technical life or economic life in isolation. However, equipment performs its function within the whole production system, and its contribution to the production system must be considered when determining its life. Some new equipment may be more advanced but not match the production system, making it difficult to achieve its maximum effectiveness and even hindering the realization of the production system's function. Life determination therefore cannot be a static, one-sided evaluation of a single piece of equipment; it should be considered within the whole system. From the perspective of the timeline, it should cover the total life cycle rather than analyzing each stage of acquisition, use and retirement separately. The life that considers both the relationship between the equipment and the production system and the total life cycle is called system life.
Given this situation, life determination must build on the traditional natural life, technical life and economic life, explore the connotation of system life from the perspective of the whole production system, and then dynamically adjust the life value of the equipment by considering the relationships among the four kinds of life, so as to promote coordination between the equipment and the whole production system and improve the efficiency of the equipment. This practice can be named comprehensive life determination. From the current literature and related research, we can see that the problem of equipment life determination has long received attention, and it is often associated with equipment renewal. Derman raised the issue of equipment renewal in the paper "A Renewal Decision Problem", published in Management Science in 1978, but he only considered the retirement arrangement and replenishment with the original or a new kind of equipment, not the development of the whole production system. This differs from the traditional equipment renewal problem, and the difference can be analyzed from two perspectives. One is to study the retirement decision problem from the perspective of individual equipment. LV Wei proposed the shortest circuit analysis method for the equipment's optimal renewal period in 2006. Martins proposed using machine learning to explore patterns of multi-criteria decision aids in 2020; this method can effectively reduce the number of criteria, and the simplified model can be used to predict the remaining life of equipment and the choice of alternatives. LI Yan-dong analyzed the main factors affecting equipment retirement and used the grey clustering decision method to build an equipment retirement decision model in 2007. According to the characteristics
Given this situation, life determination must be based on the traditional natural life, technical life and economic life, explore the connotation of the system life from the perspective of the whole production system, then dynamically adjust the life value of the equipment by considering the relationship between four kinds of life, so as to promote the coordination between the equipment and the whole production system, and improve the efficiency of the equipment. This is a practice can be named comprehensive life determination. Based on the experience and understanding of the current literature and related research, we can know that the problem of equipment life determination is popular and has always been concerned. Equipment life determination is often associated with equipment renewal. Derman raised the issue of equipment renewal in the paper of A Renewal Decision Problem, which is published by Management Science in 1978 . But he only considered the retirement arrangement and the replenishment with the original or new kind equipment, not the development of the whole production system. This is different from the traditional equipment renewal problem, and the difference can be analyzed from the following two perspectives. One is to study retirement decision-making problem from the perspective of individual equipment. LV Wei proposed the shortest circuit analysis method of equipment’s . Martins proposed using optimal renewal period in 2006 machine learning to explore patterns of multi-criteria decision AIDS in 2020, and this method can effectively reduce the amount of criteria. The simplified model can be used to predict the remaining life of equipment and the choice of alternatives . LI Yan-dong analyzed the main factors affecting equipment retirement and use the grey clustering decision method to build the equipment retire. According to the characteristics ment decision model in 2007
Moreover, the findings of this study offer an opportunity to augment academic discussion about how to choose new equipment to update the production system so that the new system operates in the most efficient manner.
2 Principle Model of Equipment Life Determination According to the previous analysis of the four kinds of life, the life determination of specific equipment needs to follow the rules below. (1) Natural life is the rigid constraint of comprehensive life determination. Equipment must reach a specific technical state to ensure the completion of a specific task, which requires the stability of its basic physical form. Therefore, the final life of the equipment must be determined within the range of the natural life, and subsequent analysis and optimization must obey this rigid constraint. If the technical state cannot meet the basic requirements, the equipment must be retired even if it has excellent technical advancement, economy and a high contribution rate to the whole system. (2) Technical life is an important means of optimization. The technical life of each piece of equipment is different, and in many cases equipment maintenance requires the shutdown of the whole production system. The equipment's replacement must always
Fig. 1 The scheme of comprehensive life determination (natural life analysis → technical life analysis → economic life analysis → system life analysis, with iterative adjustment of the maintenance structure until the performance requirement is met)
be done together with planned maintenance. The technical life of the equipment and the system maintenance structure therefore interact, and the technical life of the equipment affects how well the technical performance of the system is realized. Disunity in the technical lives of individual equipment affects the equipment's contribution rate to system efficiency. Therefore, technical life plays an important role in comprehensive life determination, and the life of the whole production system can be optimized by adjusting it. (3) Economic life analysis is the basis of comprehensive life determination. Economic life has always been an important direction in traditional life evaluation; it means the lowest cost at a given performance level. Because follow-up maintenance is costly, it is very important to arrange reasonable maintenance strategies, and the calculated economic life may vary greatly under different maintenance structures. (4) System life is the core content of comprehensive life determination. From the perspective of the whole production system and its operational performance, both the equipment's effectiveness and its cost are considered. System life is a comprehensive index more in line with the real, complex environment, considering
Fig. 2 The fundamental structure of WSEIAC model (operational effectiveness decomposed into availability, credibility and capability)
the influence of the equipment's contribution rate to the production system. All decision-making about life determination must include the system evaluation. The comprehensive life analysis of equipment is therefore not a linear process: through several iterations, a final comprehensive balance and optimization can be achieved. The general analysis scheme is shown in Fig. 1.
3 The Evaluation Model of System Life 3.1 WSEIAC Model The WSEIAC model, established by the Weapon System Effectiveness Industry Advisory Committee of U.S. industry, is often adopted for operational effectiveness assessment. It is a quantitative model that has been proven through long-term practice and has been widely recognized and applied around the world. The fundamental structure of the WSEIAC model is shown in Fig. 2 and can be described by the following formula:

SE = A × D × C    (1)
where SE is the operational effectiveness, A is the availability, D is the credibility, and C is the capability. The model above is the basic model for a single piece of equipment; to evaluate the operational effectiveness of a production system, it should be modified according to the system's own characteristics.
3.2 Operational Effectiveness Evaluation Based on Marginal Contribution to the Production System In the model of operational effectiveness, the calculation of the capability value is the most important and basic work. As a system of systems, the application of the information system plays a multiplier role in the capability of the production system. Therefore, the capability of the production system can be calculated by the following formula:
SP = NP × SPI = NP × (1 + Σ_{i=1}^{N} Wi·Vi) × (1 + Σ_{i=1}^{N} Si·Vi)    (2)
SP and NP are the indices reflecting the production system's capability with and without the multiplier effect of the information system, and SPI represents the multiplier of the information system. N is the number of pieces of equipment or subsystems. Wi reflects the importance contribution of subsystem i, 0 ≤ Wi ≤ 1, Σ Wi = 1. Si is the index reflecting the contribution to the collaborative operation of the production system, 0 ≤ Si ≤ 1, Σ Si = 1. Vi represents the standardized capability index of subsystem i, 0 ≤ Vi ≤ 1. This index can be calculated directly from the capability assessment of each subsystem, or AHP or ANP can be used for in-depth analysis, depending on the situation; in this paper, ANP is adopted to calculate Vi. The key of the capability index evaluation is to determine the contribution of each equipment capability characteristic to the production system capability with the mixture effect. Assume that the kth capability index of the production system is Ik; then NP can be expressed as:

NP = Π_{k=1}^{n} Ik^wk    (3)
The supporting effect of the information system on the production system is not considered when calculating NP, and wk is the weight of the kth capability index. The equipment has different operational scenarios and operational intensities; assume two operational scenarios and three intensities. Since the capability of the equipment is affected by the scenarios and intensities, the capability value should be corrected:

γk = Σ_{i=1}^{3} Σ_{j=1}^{2} Qij·λ^k_ij    (4)

αs = Σ_{i=1}^{3} Σ_{j=1}^{2} Qij·λ^s_ij    (5)
γk represents the adjustment factor of the kth capability index over the operational scenarios, and αs represents the adjustment factor of the capability index of subsystem s. Qij represents the probability of occurrence of the ith operational intensity together with the jth scenario. λ^k_ij represents the adjustment factor of the kth capability index of the production system when the system is under the ith operational intensity and the jth operational scenario, and λ^s_ij represents the adjustment factor of the capability index Vs of the information subsystem. On this basis, SE can be obtained.
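The capability calculation of Eqs. (2)–(5) can be sketched as below. This is a minimal illustration, not the paper's implementation; all function names and the numeric values in the test are assumptions.

```python
# Eq. (3): NP as a weighted geometric product of the capability indices I_k.
def weighted_geometric(I, w):
    NP = 1.0
    for Ik, wk in zip(I, w):
        NP *= Ik ** wk
    return NP

# Eqs. (4)/(5): the adjustment factor is an expectation of the lambda factors
# over the (intensity i, scenario j) combinations with probabilities Q_ij.
def adjustment(Q, lam):
    return sum(Q[i][j] * lam[i][j]
               for i in range(len(Q)) for j in range(len(Q[0])))

# The information-system multiplier SPI in Eq. (2).
def multiplier(W, S, V):
    return (1 + sum(w * v for w, v in zip(W, V))) \
         * (1 + sum(s * v for s, v in zip(S, V)))
```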
SE = [Π_{k=1}^{n} (γk × Ik)^wk] × A × D × [1 + Σ_{i=1}^{N} Wi·(αi × Vi)] × [1 + Σ_{i=1}^{N} Si·(αi × Vi)]    (6)
With the extension of the operational time of the equipment, its performance follows a basic damping law: when the equipment is just put into use, all performance indicators are in their best state, but with the passage of time the performance of different types of equipment inevitably declines to different degrees (related to the characteristics of the equipment). The attenuation rate is uneven, though performance tends to decay steadily over time. When the equipment is repaired, its performance is greatly improved, but never beyond the initial maximum designed performance, after which it continues to attenuate according to the previous law. Generally speaking, this attenuation law applies to all kinds of equipment. The microscopic capability attenuation of each piece of equipment is superimposed at the macro level into the capability attenuation of the production system; in other words, the production system also conforms to this monotonic attenuation law, so its attenuation function must be monotonic. This law is illustrated in Fig. 3. Therefore, the capability of the production system considering the attenuation law can be evaluated as:

NP(t) = Π_{k=1}^{m} [Σ_{i=1}^{n} (Ni(t)/Ni0)·Iik]^wk    (7)
Ni(t) is the remaining capability value of the ith equipment after t years of normal attenuation, and Ni0 is its initial capability value.
Fig. 3 Attenuation law of the production system (the performance curve decays from the designed performance between repairs, minor and top overhauls restore it, and the repair workload trend rises until retirement at the minimum performance level)
Similarly, the production system's operational effectiveness evaluation model considering the attenuation law can be described as:

SE = Π_{k=1}^{m} [Σ_{i=1}^{n} (Ni(t)/Ni0)·Iik]^wk × A × D × [1 + Σ_{i=1}^{N} Wi·(αi × Vi)] × [1 + Σ_{i=1}^{N} Si·(αi × Vi)]    (8)
The purpose of improving equipment performance is to enhance the operational effectiveness of the whole production system, and the role of the equipment in the system is ultimately reflected in this improvement. Therefore, the improvement of operational effectiveness by each piece of equipment should be measured by its contribution rate to system effectiveness, and the marginal contribution is introduced here to describe the operational effectiveness of the equipment. The marginal contribution MC(t)i refers to the degree to which equipment i increases the operational effectiveness of the production system when it is put into use. The contribution of equipment i to the production system through the total life cycle can be described as:

LCMCi = Σ_{t=1}^{T} MC(t)i = Σ_{t=1}^{T} [SE(t)i − SE'(t)i] × d(t)i    (9)
LCMCi is the summed marginal effectiveness of equipment i through the total life cycle, and d(t)i is the availability of the production system with equipment i in the tth year. SE(t)i and SE'(t)i are the operational effectiveness of the production system with and without equipment i in the tth year. The cumulative marginal effectiveness contribution rate SMCR(T) can be described as:

SMCR(T) = Σ_{t=1}^{T} MC(t) / [C0 + Σ_{t=1}^{T} C(t) − ST]    (10)
C_0 is the acquisition cost, C(t) is the maintenance charge of each year, and ST is the salvage value of the equipment. Finally, the basic model for determining the life can be established as

T = f(C_0, C(t), MC(t), ST)

subject to

\max SMCR(T) = \sum_{t=1}^{T} MC(t) \Big/ \left[ C_0 + \sum_{t=1}^{T} C(t) - ST \right].
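As a quick numerical sketch of this model (the MC(t) and C(t) figures below are made up for illustration, not the paper's data), SMCR(T) can be evaluated for each candidate horizon and the system life T* taken as the maximizer:

```python
def smcr(T, mc, c0, c, st):
    """Cumulative marginal effectiveness contribution rate, Eq. (10):
    the sum of marginal contributions over years 1..T divided by net cost."""
    total_mc = sum(mc[:T])            # sum_{t=1}^{T} MC(t)
    total_cost = c0 + sum(c[:T]) - st # C0 + sum_{t=1}^{T} C(t) - ST
    return total_mc / total_cost

# Hypothetical data: MC(t) and C(t) for years 1..10, acquisition cost, salvage.
mc = [2.3, 2.1, 1.9, 1.8, 1.6, 1.4, 1.0, 0.7, 0.4, 0.2]
c = [150, 150, 170, 180, 175, 200, 210, 210, 240, 250]
c0, st = 1000, 0

# The system life T* maximizes SMCR(T).
t_star = max(range(1, len(mc) + 1), key=lambda T: smcr(T, mc, c0, c, st))
```

With these made-up figures SMCR(T) rises while early marginal contributions outweigh the accumulating costs, then declines, reproducing the "first rising then declining" shape discussed in the conclusion.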
The result of the model can be explained by the following Fig. 4. As we can see from this graph, the solution of the model can be simplified as:
Life Evaluation and Optimization of Equipment in the Production Systems
925
Fig. 4 The solution of the issue of life determining (MCR(t) and SMCR(T) plotted against time; the system life T* corresponds to the maximum of SMCR(T))

MCR(T^*) = SMCR(T^*)    (11)

Fig. 5 The transformation mode of capability from equipment to system (the capability characteristics S1–S8 of equipment 1 map to the system capabilities F1–F8)
4 A Case Study

Assume that the production system consists of 8 pieces of equipment. Take equipment 1 as an example: it has 8 capability characteristics, S1 to S8. Through the effect of the information system, these characteristics are transformed into the system capabilities F1 to F8, and the transformation mode is shown in Fig. 5. Since the salvage value changes little after a period of operation, and holding the equipment for its salvage value is not the goal, its impact on the system life is small and can be ignored here. Therefore, without considering the salvage value, the cumulative marginal effectiveness contribution rate can be calculated as shown in Table 1. According to the table, the production system life decision diagram can be worked out, as shown in Fig. 6.
Table 1 The cumulative marginal effectiveness contribution rate of the production system

| T | LCMC | C | SMCR(T) × 1000 |
| 0 | 2.54 | 1044.16 | 2.44 |
| 1 | 4.83 | 152.09 | 4.04 |
| 2 | 6.89 | 148.68 | 5.12 |
| 3 | 8.78 | 169.59 | 5.80 |
| 4 | 10.73 | 179.19 | 6.34 |
| 5 | 12.48 | 175.36 | 6.68 |
| 6 | 13.90 | 200.22 | 6.72 |
| 7 | 14.87 | 211.62 | 6.52 |
| 8 | 16.63 | 207.12 | 6.68 |
| 9 | 18.20 | 236.38 | 6.68 |
| 10 | 19.48 | 249.77 | 6.55 |
| 11 | 20.939 | 244.29 | 6.50 |
Fig. 6 The cumulative marginal effectiveness contribution rate changing by years
The combination of the figure and table above shows that the equipment’s system life is 7 years. Overall, the equipment plays a weak role in the system.
5 Conclusion

The analysis results above show that, for every piece of equipment, the cumulative marginal effectiveness contribution rate first rises and then declines. Therefore, in general, the system life of the equipment corresponds to the position of the maximum cumulative marginal effectiveness contribution rate. In addition, it is worth noting that the system life, natural life, economic life and general life are generally not equal, and the system life is usually the smallest. As time goes on, the technical status of equipment will continue to decline, new technologies will appear, and the system will place higher and higher requirements on the equipment. All these will lead to the equipment reaching its
system life too quickly. Generally speaking, the faster the technology develops and the higher the new requirements are, the more the system life is shortened. To achieve a seamless transition between the retirement of old equipment and the commissioning of equipment meeting the new requirements, the development planning of the equipment should be managed so that neither a capability shortfall nor idle new equipment occurs, keeping the production system always in its best state.
References
1. Derman, C., Lieberman, G.J.: A renewal decision problem. Manag. Sci. (6), 554–561 (1978)
2. Lv, W., Lou, S.C., Li, T.Z.: The analysis discussion of equipment renewal about technology and economy. Tactical Missile Technol. (3), 62–64 (2006)
3. Martins, I.D., Bahiense, L., Infante, C.E.D., et al.: Dimensionality reduction for multi-criteria problems: an application to the decommissioning of oil and gas installations. Expert Syst. Appl. (148), 113236 (2020)
4. Li, Y.D., Chen, Y.G.: Study on retirement model for ground-to-air missile. J. Projectiles Rockets Missiles Guidance (2), 1193–1195 (2006)
5. Xia, Z.A., Zhao, Y.J.: Model of equipment part replacement cycle optimization based on GA. Armament Autom. (8), 15–18 (2008)
6. Mei, G.J., Zhang, X.B., Zhou, X.: Operation model of aged equipment based on total system and total life theory. J. Acad. Equipment Command Technol. (4), 113–115 (2009)
Intelligent Fault Diagnosis of Rotating Machinery Based on Improved ResNet and BiGRU Zeyu Ye, Xiaoyang Zheng, and Chengyou Luo
Abstract Rotating machinery, as an important part of industry, often operates in complex environments. The collected raw signal data therefore contains noise interference, which directly affects the identifiability of fault features. To solve this problem, this paper proposes a deep learning network based on the residual network (ResNet), the channel attention mechanism, and the bidirectional gated recurrent unit (BiGRU). First, the channel attention mechanism is introduced into ResNet to establish the relationship between feature extraction channels and adaptively adjust their importance. Then, the features of the time series are extracted through BiGRU, and finally the classification is completed using softmax. To verify the performance of the proposed model, vibration signals under complex environments are simulated by adding Gaussian white noise, and the model is compared with existing models. The experimental results show that the proposed model has higher accuracy and excellent anti-noise ability.

Keywords Fault Diagnosis · Rotating Machinery · ResNet · Channel Attention Mechanism · BiGRU
1 Introduction

In modern industrial production, rotating machinery has become an indispensable component of various mechanical equipment. Once a failure occurs, it can lead to serious safety accidents and economic losses. Correctly detecting the faults of rotating machinery under complex working conditions and maintaining the machinery in time is an effective way to ensure the safety of industrial production. Therefore, it is significant to develop an accurate fault diagnosis method for rotating machinery.
The rapid development of machine learning has greatly promoted the progress of intelligent fault diagnosis methods and has produced many excellent achievements [1,
Z. Ye (B) · X. Zheng · C. Luo Chongqing University of Technology, Chongqing 400000, China e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 Y. S. Shmaliy and A. Nayyar (eds.), 7th International Conference on Computing, Control and Industrial Engineering (CCIE 2023), Lecture Notes in Electrical Engineering 1047, https://doi.org/10.1007/978-981-99-2730-2_85
929
930
Z. Ye et al.
2]. You et al. [3] optimized the parameters of the support vector machine (SVM) through the modified shuffled frog-leaping algorithm to achieve a fault diagnosis method with faster convergence. Singh et al. [4] added a fuzzy rule base to the decision tree and realized multi-class decisions at the terminal nodes, overcoming the accuracy loss the decision tree suffers from excessive classification. Long et al. [5] improved AdaBoost, added one-hot vectors, and combined them with an attention mechanism to realize fault diagnosis on multiple types of signal data. Although machine learning-based methods perform well, they rely heavily on expert experience and knowledge for feature extraction and feature selection. The emergence of deep learning has solved this problem; it also has strong nonlinear feature extraction ability and can handle larger and more complex data [6, 7]. Zheng et al. [8] combined the convolutional neural network (CNN) with BiGRU, which made up for the CNN's inability to extract temporal characteristics, and realized gearbox fault diagnosis under multiple working conditions. Pei et al. [9] proposed a model based on Transformer and convolution, which has powerful parallel computing ability and achieves high-precision fault diagnosis. In addition, deep learning models such as the deep belief network [10], bidirectional long short-term memory (BiLSTM) [11], deep autoencoder [12], generative adversarial network [13], and graph neural network [14] have also been applied to fault diagnosis. Although deep learning-based methods have greatly improved feature extraction, the above methods do not consider the noise problem of rotating machinery. Zhang et al. [15] proposed a deep convolutional neural network with wide first-layer kernels (WDCN), which captures low-frequency features and suppresses noise through sufficiently wide convolution kernels.
Chen et al. [16] introduced transfer learning based on CNN to adapt to small-sample data while handling noise interference. Jiang et al. [17] added a multiscale coarse-grained layer to the traditional CNN to obtain richer diagnostic information from signals of different scales, which can effectively resist noise interference. The above methods are all realized by improving the CNN. However, a CNN with too many layers suffers from gradient disappearance, which makes training difficult [18]. Therefore, this paper proposes a method based on ResNet, BiGRU, and the channel attention mechanism, which not only solves the gradient problem of the CNN but also realizes accurate fault diagnosis on noisy signals. The main contributions of this paper are summarized below:
1. A novel fault diagnosis method for the gearbox combining the advantages of ResNet, BiGRU, and the channel attention mechanism.
2. The proposed model can directly take the original signal as input without prior knowledge.
The remainder of this paper is organized as follows. The relevant theoretical background is introduced in Sect. 2. The proposed model is detailed in Sect. 3. Comparative experiments with existing methods are demonstrated in Sect. 4. The conclusion is in Sect. 5.
Intelligent Fault Diagnosis of Rotating Machinery Based on Improved …
931
2 Theoretical Background

2.1 ResNet

ResNet is an improved model based on the CNN, proposed by He et al. [19]. By introducing the residual block (RB), it solves the degradation problem in deep learning whereby increasing the number of layers of a neural network decreases its performance. The specific structure of the RB is shown in Fig. 1. The RB consists of a shortcut connection and a main connection containing multiple operations. The main connection includes three operations: batch normalization (BN), the ReLU activation function, and convolution. The shortcut connection adds the input X in the final operation, and its calculation is as follows:

H(X) = F(X) + X
(1)
where X is the input of the RB and the output of the shortcut connection, F(X) is the output of the main connection, and H(X) is the output of the RB. The shortcut connection changes the learning target from the original identity mapping to the residual mapping, reducing the difficulty of optimization and improving network performance.
Fig. 1 The structure of the RB
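As an illustrative sketch (NumPy only; a simplified per-channel normalization stands in for trained batch normalization, and the weights are random rather than learned), a pre-activation residual block for 1-D signals computes H(X) = F(X) + X:

```python
import numpy as np

def conv1d_same(x, w):
    """'Same'-padded 1-D convolution.
    x: (channels_in, length), w: (channels_out, channels_in, kernel)."""
    c_out, c_in, k = w.shape
    pad = k // 2
    xp = np.pad(x, ((0, 0), (pad, pad)))
    out = np.zeros((c_out, x.shape[1]))
    for o in range(c_out):
        for i in range(c_in):
            for t in range(x.shape[1]):
                out[o, t] += np.dot(xp[i, t:t + k], w[o, i])
    return out

def residual_block(x, w1, w2):
    """H(X) = F(X) + X, with main connection F = conv(relu(norm(conv(relu(norm(x))))))."""
    def norm(z):  # simplified stand-in for batch normalization
        return (z - z.mean(axis=1, keepdims=True)) / (z.std(axis=1, keepdims=True) + 1e-5)
    f = conv1d_same(np.maximum(norm(x), 0), w1)
    f = conv1d_same(np.maximum(norm(f), 0), w2)
    return f + x  # shortcut connection, Eq. (1)

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 32))           # 4 channels, length-32 signal
w1 = rng.standard_normal((4, 4, 3)) * 0.1  # two 3-tap conv layers, channels preserved
w2 = rng.standard_normal((4, 4, 3)) * 0.1
h = residual_block(x, w1, w2)
```

Note how the shortcut makes the block learn only the residual F(X): if the main-connection weights are all zero, the block reduces exactly to the identity mapping.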
2.2 Channel Attention Mechanism

The channel attention mechanism was proposed by Hu et al. [20]. Its core consists of two operations: squeeze and excitation. The squeeze operation pools the input features globally, compressing each feature into a single real number with a global receptive field. Its calculation is as follows:

F_{sq}(x_i) = \frac{1}{W} \sum_{k=1}^{W} x_i(k)    (2)

where x_i is the ith feature vector with length W. The excitation operation is mainly composed of two fully connected layers and two activation functions, which help capture channel correlation and generate the corresponding channel weights. Its calculation is as follows:

F_{ex}(F_{sq}(x_i), \omega) = \sigma(\omega_2 \, \delta(\omega_1 F_{sq}(x_i)))    (3)

where \omega_1 and \omega_2 are the weights of the two fully connected layers, \delta is the ReLU activation function, and \sigma is the sigmoid activation function.
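A minimal NumPy sketch of squeeze-and-excitation for 1-D feature maps (random weights stand in for the two trained fully connected layers; the channel reduction ratio of 2 is our assumption, not stated in the paper):

```python
import numpy as np

def squeeze_excite(x, w1, w2):
    """x: (C, W) feature map. Returns channel-reweighted features.
    Squeeze (Eq. 2): global average pool per channel.
    Excitation (Eq. 3): FC -> ReLU (delta) -> FC -> sigmoid (sigma) -> scale."""
    s = x.mean(axis=1)                         # squeeze: (C,)
    z = np.maximum(w1 @ s, 0)                  # first FC + ReLU
    weights = 1.0 / (1.0 + np.exp(-(w2 @ z)))  # second FC + sigmoid, in (0, 1)
    return x * weights[:, None]                # reweight each channel

rng = np.random.default_rng(1)
C, W, r = 8, 64, 2
x = rng.standard_normal((C, W))
w1 = rng.standard_normal((C // r, C)) * 0.1  # reduce channels C -> C/r
w2 = rng.standard_normal((C, C // r)) * 0.1  # restore C/r -> C
y = squeeze_excite(x, w1, w2)
```

Channels whose learned weight approaches 1 pass through nearly unchanged, while channels carrying noise-dominated information can be suppressed toward 0.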
3 The Proposed Model

3.1 The Improved ResNet

This paper introduces the channel attention mechanism into ResNet to improve its feature extraction ability. The structure of the improved RB is shown in Fig. 2.
Fig. 2 The structure of the improved RB
Fig. 3 The structure of the proposed model
In the training process of a CNN, the convolution operation generates multiple channels, and each channel contains much redundant information unrelated to fault diagnosis. Adding the channel attention mechanism models the correlation between channels: by assigning channel weights, the model can learn the channel characteristics closely related to fault diagnosis and thereby reduce the impact of noise.
3.2 The Structure of the Proposed Model

The proposed model based on the improved ResNet and BiGRU is shown in Fig. 3. The noisy signal data first enters a convolution operation, which extracts local features. Then a maximum pooling operation downsamples the input while retaining the extracted features. Next, the data enters the feature extraction layer composed of 32 improved RB modules, which extracts effective features while avoiding the gradient explosion and gradient disappearance problems. Finally, BiGRU extracts the time series features and softmax completes the classification.
4 Experiment Validation

4.1 Data Sets

This paper used two data sets for experimental verification. The first is the gearbox data set from Southeast University (SU). It contains the vibration signals of a gear and a rolling bearing, each with four fault states and one normal state. The data of the
gearbox are collected under two speed and load configurations: 20 Hz-0 V and 30 Hz-2 V. The second data set is the rolling bearing data set from Paderborn University (PU). It contains three fault states: inner ring fault, outer ring fault, and combined inner and outer ring fault. Each fault state includes two levels of damage. All data are collected under four working conditions with different rotational speed, load torque, and radial force settings.
4.2 Experiment

To verify the effectiveness of the proposed model, the above data sets were used for experiments, and each group of experiments was evaluated using tenfold cross-validation. The SU data set has nine states. Each state contains 300 samples; 200 samples are randomly selected as the training set and the remaining samples are used as the test set. The PU data set has four states in total. 300 samples are randomly selected from each state, of which 200 are randomly selected as the training set and the remainder as the test set. For all data, noisy vibration signals are simulated by adding Gaussian white noise. The experimental results on the SU data set are shown in Table 2: the proposed model has higher fault diagnosis accuracy than the existing models under noise. The experimental results on the PU data set are shown in Table 1: the accuracy of the proposed model is slightly lower than LEFE-Net only when SNR = −4 dB, and the accuracy is improved in all other cases.
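The noise-injection step can be sketched as follows (a common recipe for this kind of experiment, not code from the paper): scale white Gaussian noise so that the signal-to-noise ratio matches a target value in dB.

```python
import numpy as np

def add_awgn(signal, snr_db, rng=None):
    """Add white Gaussian noise so that 10*log10(P_signal / P_noise) = snr_db."""
    rng = rng or np.random.default_rng()
    p_signal = np.mean(signal ** 2)
    p_noise = p_signal / (10 ** (snr_db / 10))  # noise power from the target SNR
    noise = rng.normal(0.0, np.sqrt(p_noise), size=signal.shape)
    return signal + noise

rng = np.random.default_rng(42)
clean = np.sin(2 * np.pi * 50 * np.linspace(0, 1, 2048))  # stand-in vibration signal
noisy = add_awgn(clean, snr_db=-4, rng=rng)
```

Negative SNR values (e.g. −4 dB as in Tables 1 and 2) mean the injected noise power exceeds the signal power.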
Table 1 Average diagnostic accuracy (%) for different SNR values on the PU data set

| Method | −8 dB | −4 dB | 0 dB | 4 dB | 8 dB |
| SVM [21] | 70.53 ± 1.62 | 82.97 ± 1.43 | 83.38 ± 1.11 | 75.73 ± 1.31 | 73.26 ± 1.27 |
| LEFE-Net [21] | 81.58 ± 1.32 | 95.62 ± 0.86 | 99.29 ± 0.31 | 99.77 ± 0.20 | 99.38 ± 0.23 |
| The proposed method | 82.37 ± 0.71 | 95.60 ± 0.27 | 99.41 ± 0.53 | 99.80 ± 0.45 | 99.93 ± 0.32 |
Table 2 Average diagnostic accuracy (%) for different SNR values on the SU data set

| Method | −6 dB | −4 dB | 0 dB | 4 dB | 6 dB |
| WDCN [15] | 61.07 ± 8.30 | 72.99 ± 6.31 | 87.21 ± 5.99 | 94.85 ± 3.08 | 97.41 ± 1.98 |
| TCNN [16] | 69.54 ± 9.18 | 79.13 ± 8.15 | 91.62 ± 5.26 | 97.37 ± 2.02 | 98.43 ± 1.49 |
| MSCNN [17] | 82.57 ± 1.41 | 88.48 ± 1.48 | 95.14 ± 0.56 | 98.15 ± 0.30 | 98.57 ± 0.29 |
| The proposed method | 90.17 ± 0.56 | 92.44 ± 0.48 | 99.05 ± 0.11 | 99.47 ± 0.23 | 99.73 ± 0.26 |
5 Conclusion

To realize highly accurate fault diagnosis of rotating machinery under noisy signals, this paper proposes a model based on the improved ResNet and BiGRU. The model introduces the channel attention mechanism into ResNet, adding a feature filtering mechanism that filters out less important features and reduces the impact of noise, and finally extracts high-level features through BiGRU to complete accurate pattern recognition. The results show that the proposed model has higher fault diagnosis ability, better anti-noise ability, and robustness. However, the sample size of signal data collected in real environments is usually very small, so the future direction of this study is fault diagnosis with small samples.
References
1. Higami, Y., Yamauchi, T., Inamoto, T., Wang, S., Takahashi, H., Saluja, K.K.: Machine learning based fault diagnosis for stuck-at faults and bridging faults. In: 2022 37th International Technical Conference on Circuits/Systems, Computers and Communications (ITC-CSCC), pp. 477–480 (2022)
2. Liang, T., Chen, C., Wang, T., Zhang, A., Qin, J.: A machine learning-based approach for elevator door system fault diagnosis. In: 2022 IEEE 18th International Conference on Automation Science and Engineering (CASE), pp. 28–33 (2022)
3. You, L., Fan, W., Li, Z., Liang, Y., Fang, M., Wang, J.: A fault diagnosis model for rotating machinery using VWC and MSFLA-SVM based on vibration signal analysis. Shock Vibr. (2019)
4. Singh, A.K., Singh, R., Kumar, G., Soni, S.: Power system fault diagnosis using fuzzy decision tree. In: 2022 IEEE Students Conference on Engineering and Systems (SCES), pp. 1–5 (2022)
5. Long, Z., et al.: Motor fault diagnosis using attention mechanism and improved adaboost driven by multi-sensor information. Measurement 170, 108718 (2021)
6. Jiang, T., et al.: Research on power grid fault diagnosis technology based on deep learning. In: 2022 Power System and Green Energy Conference (PSGEC), pp. 533–542 (2022)
7. Tan, X., Liu, M., Lv, R.: Rolling bearing fault diagnosis technology based on deep learning. In: 2022 International Conference on Computation, Big-Data and Engineering (ICCBE), pp. 214–216 (2022)
8. Zheng, X., Ye, Z., Wu, J.: A CNN-ABiGRU method for gearbox fault diagnosis. Int. J. Circuits Syst. Signal Process. 440–446 (2022)
9. Pei, X., Zheng, X., Wu, J.: Rotating machinery fault diagnosis through a transformer convolution network subjected to transfer learning. IEEE Trans. Instrum. Meas. 70, 1–11 (2021)
10. Ren, Z.H., Yu, T.Z., Ding, D., Zhou, S.H.: Fault diagnosis method of rolling bearing based on VMD-DBN. J. Northeastern Univ. (Nat. Sci.) 42(8), 1105 (2021)
11. Zheng, X., Wu, J., Ye, Z.: An end-to-end CNN-BiLSTM attention model for gearbox fault diagnosis. In: 2020 IEEE International Conference on Progress in Informatics and Computing (PIC), pp. 386–390 (2020)
12. Pei, X., Zheng, X., Wu, J.: Intelligent bearing fault diagnosis based on Teager energy operator demodulation and multiscale compressed sensing deep autoencoder. Measurement 179, 109452 (2021)
13. He, W., Chen, J., Zhou, Y., Liu, X., Chen, B., Guo, B.: An intelligent machinery fault diagnosis method based on GAN and transfer learning under variable working conditions. Sensors 22(23), 9175 (2022)
14. Chen, W., Zai, H., He, H., Zhang, K., Xi, R., Fu, F.: Research on fault diagnosis method of power transformer based on graph neural network. In: 2021 IEEE 5th Conference on Energy Internet and Energy System Integration, pp. 4289–4294 (2021)
15. Zhang, W., Peng, G., Li, C., Chen, Y., Zhang, Z.: A new deep learning model for fault diagnosis with good anti-noise and domain adaptation ability on raw vibration signals. Sensors 17(2), 425 (2017)
16. Chen, Z., Gryllias, K., Li, W.: Intelligent fault diagnosis for rotary machinery using transferable convolutional neural network. IEEE Trans. Industr. Inf. 16(1), 339–349 (2019)
17. Jiang, G., He, H., Yan, J., Xie, P.: Multiscale convolutional neural networks for fault diagnosis of wind turbine gearbox. IEEE Trans. Industr. Electron. 66(4), 3196–3207 (2018)
18. Deng, F., Ding, H., Hao, R.: Fault diagnosis of rotating machinery based on residual neural network with multi-scale feature fusion. J. Vibr. Shock 40(24), 22–28 (2021)
19. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016)
20. Hu, J., Shen, L., Sun, G.: Squeeze-and-excitation networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7132–7141 (2018)
21. Fang, H., Deng, J., Zhao, B., Shi, Y., Zhou, J., Shao, S.: LEFE-Net: a lightweight efficient feature extraction network with strong robustness for bearing fault diagnosis. IEEE Trans. Instrum. Meas. 70, 1–11 (2021)
Research on Micromouse Maze Search Algorithm Based on Centripetal Algorithm Liang Lu, Xiaoxi Liu, Pengyu You, and Zhe Zhang
Abstract Aiming at the problem that a micromouse guided by the centripetal search algorithm may reach a maze cell adjacent to the target area yet be unable to enter it immediately, an improved centripetal search algorithm is proposed. The maze is modeled in two dimensions, and the micromouse's operation, coordinate memory, and wall data storage are realized through the conversion between relative and absolute directions. Based on the centripetal search algorithm, the eight maze cells adjacent to the target area are filled with local algorithms. Test results on 10 mazes show that, compared with the previous algorithm, the improved algorithm reduces the number of search steps and increases the stability of the maze search. When the micromouse is located in an adjacent cell with access to the target area, it enters the target area directly 100% of the time. Unnecessary searching is thus reduced, making this an efficient fused maze search algorithm.

Keywords Micromouse · Centripetal algorithm · Maze · Solving · Modeling
1 Introduction The micromouse is a product of many disciplines, including mechanical engineering, electronic engineering, automatic control, artificial intelligence, program design, sensing, and testing technology. It has the characteristics of accurate positioning ability, fast walking ability, and excellent obstacle avoidance ability. It’s a typical L. Lu (B) Unicloud Tech. Co., Ltd., Tianjin, China e-mail: [email protected] X. Liu · Z. Zhang State Grid of China Technology College, Jinan, Shandong, China P. You Jinan Thomas School, Jinan, Shandong, China © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 Y. S. Shmaliy and A. Nayyar (eds.), 7th International Conference on Computing, Control and Industrial Engineering (CCIE 2023), Lecture Notes in Electrical Engineering 1047, https://doi.org/10.1007/978-981-99-2730-2_86
937
938
L. Lu et al.
artificial intelligence microrobot. Maze search is a classic problem in many fields such as data structures, graph theory, and graphics. Searching a maze and planning its optimal path is an important part of the field of intelligent robot path planning [1–5].
The centripetal search algorithm is composed of the center-left, center-right, left-hand, and right-hand search algorithms, which fill the maze area [6–8]. Because straight travel has priority in some directions, the micromouse cannot enter the target area immediately even though it has searched an adjacent maze cell from which the target area can be entered directly. For this reason, this paper proposes filling the 8 maze cells adjacent to the target area with local algorithms to correct the existing filling deficiencies, thus overcoming the above shortcoming and improving search efficiency.
2 Establishment of the Maze Mathematical Model

The site used by the IEEE micromouse to walk the maze is composed of 16 × 16 squares, each 18 cm × 18 cm. To facilitate processing of the maze cells the micromouse passes through, so that it can identify its running direction and record wall data, the 256 maze cells can be represented by two-dimensional coordinates [9, 10].
2.1 Maze Coordinate Establishment

The four corners of the maze can serve as the starting point of the micromouse, and each corner allows two starting directions, so the micromouse has 8 starting states, as shown in Fig. 1. To simplify coordinate processing, no matter what state the micromouse is in, its front, rear, left, and right at the starting point correspond to the positive Y direction, negative Y direction, negative X direction, and positive X direction, respectively. The coordinates of each cell can be numbered with a two-dimensional array, defined as GucMapBlock[X][Y], where X corresponds to the X-axis coordinate, Y corresponds to the Y-axis coordinate, and X and Y satisfy Eq. (1):
0 ≤ X ≤ F, 0 ≤ Y ≤ F    (1)

Note: X, Y ∈ Z.
Because the values of X and Y are integers from 0 to F, and they are affected by the 8 initial states of the micromouse, to ensure that the coordinate records of the micromouse still
Research on Micromouse Maze Search Algorithm Based on Centripetal …
939
Fig. 1 The 8 starting states of the micromouse (Directions 1–8, two at each corner of the maze)
meet Eq. (1) when searching the maze, the coordinates of the corresponding starting states must be converted. No matter which starting state the micromouse is in, the starting point is assumed to be the (0,0) point, with the front of the micromouse at the starting point taken as the positive Y direction. When the micromouse first detects a path on its left side, the maze lies to the left of the micromouse; in this case the established coordinates do not meet Eq. (1) (negative values would appear). To keep Eq. (1) valid, the maze coordinates must be converted so that the starting point becomes the (F,0) point. As shown in Fig. 2(a), the micromouse starts from 2 and continuously detects the surrounding wall data; the first intersection is on its left, so the maze lies to the left. The condition for Eq. (1) to remain valid is to convert the starting point to (F,0) and set 1 as the (0,0) point of the maze. When the micromouse first detects an intersection on the right side of the vehicle body, the maze lies to the right; the coordinates established with the starting point as (0,0) already meet Eq. (1), so no conversion is needed, as shown in Fig. 2(b), where the starting point 3 is the (0,0) point.
X
2
(F,F)
(a)The starting point is (F, 0) point
Fig. 2 Example of starting point
Y (F,F)
3
X
(b)The starting point is (0, 0) point
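This conversion can be sketched as follows (a hypothetical helper of our own, not the paper's code): if the first detected opening is on the left, the X axis is mirrored so that the start maps to (F, 0) and all subsequent coordinates stay within the range of Eq. (1).

```python
F = 0xF  # maze cells are indexed 0..F (15) on each axis

def convert_coord(x, y, maze_on_left):
    """Map raw coordinates (start assumed at (0, 0), front = +Y) into the
    0..F range of Eq. (1). If the maze lies to the left, raw X values run
    0, -1, -2, ..., so the X axis is mirrored and the start becomes (F, 0)."""
    if maze_on_left:
        return (F + x, y)
    return (x, y)
```

For example, the cell one step to the left of the start maps to (0xE, 0), and the start cell itself maps to (0xF, 0) when the maze is on the left.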
2.2 Conversion Between Relative and Absolute Directions

Although the micromouse establishes the maze coordinates, its search direction changes continuously with the presence or absence of maze walls. It must not only determine its current position, but also know how to enter the next maze cell. The positions and directions of the micromouse's infrared sensors are fixed on its body, but relative to the maze they change continuously during the search; this requires the conversion between relative and absolute directions. Since the maze itself is fixed, the absolute direction is established on the maze, as shown in Fig. 3: up, right, down, and left of the maze are represented by 0, 1, 2, and 3. The direction and position of the micromouse change constantly during the search, so the relative direction is established on the micromouse, as shown in Fig. 4: front, right, rear, and left of the micromouse are represented by 0, 1, 2, and 3. The relative direction takes the current travel direction of the micromouse as its reference; the absolute direction refers to the absolute coordinate plane of the maze. Because the absolute coordinates of the maze are fixed, they are well suited to recording the maze positions and wall information found during the search, so the relative direction information obtained by the micromouse
0 up
3
left
maze
right 1
down 2
Fig. 4 Relative direction of the micromouse (0 = front, 1 = right, 2 = rear, 3 = left)
Table 1 Conversion from relative direction to absolute direction

| Relative direction | Absolute direction |
| In front of the micromouse | Dir |
| Right side of the micromouse | (Dir + 1) % 4 |
| Rear of the micromouse | (Dir + 2) % 4 |
| Left side of the micromouse | (Dir + 3) % 4 |
search is converted into the absolute direction information of the maze. Conversely, the direction the micromouse must take before entering the next maze cell is obtained by converting the absolute direction of the maze into the relative direction of the micromouse. The micromouse stores the maze coordinates and wall data obtained during the search in the absolute direction of the maze; the conversion from relative direction to absolute direction is shown in Table 1. The movement of the micromouse at any position in the maze is determined by itself, so the absolute direction of the maze must be converted into a relative direction the micromouse can recognize, which is obtained from Eq. (2):

ΔDir = (Dir_dst − Dir + 4) % 4    (2)
where ΔDir is the direction deviation value, Dir_dst is the absolute direction of the turning target, and Dir is the current absolute direction of the micromouse. The relative direction required for the micromouse to turn can then be determined from the deviation value according to Table 2.

Table 2 Conversion from absolute direction to the relative direction

| Deviation value ΔDir | Relative direction |
| 0 | In front of the micromouse |
| 1 | Right side of the micromouse |
| 2 | Rear of the micromouse |
| 3 | Left side of the micromouse |

To illustrate the conversion process of Eq. (2) and Table 2, Fig. 5 shows an example. The micromouse is at Point M and needs to reach Point N. From Fig. 3, the absolute direction of the target (from Point M toward Point N) is Dir_dst = 2, and the current absolute direction of the micromouse's head is Dir = 1. Eq. (2) then gives the deviation value ΔDir = 1, which Table 2 maps to the right side of the micromouse, so the micromouse turns right to move from Point M to Point N.
Fig. 5 Conversion from absolute direction to relative direction (the micromouse at Point M, heading Dir = 1, reaches Point N in absolute direction Dir_dst = 2)
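The two conversions can be sketched in Python (the function names are ours, not the paper's):

```python
def rel_to_abs(dir_abs, rel):
    """Table 1: relative direction (0 front, 1 right, 2 rear, 3 left) of a
    micromouse with current absolute heading dir_abs -> absolute direction."""
    return (dir_abs + rel) % 4

def abs_to_rel(dir_abs, dir_dst):
    """Eq. (2): deviation value = relative turn needed to face dir_dst."""
    return (dir_dst - dir_abs + 4) % 4

# Worked example from Fig. 5: heading Dir = 1, target direction Dir_dst = 2
# -> deviation 1, i.e. the target lies on the micromouse's right.
delta = abs_to_rel(1, 2)
```

Adding 4 before taking the modulo keeps the result in 0..3 even when dir_dst is numerically smaller than dir_abs.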
3 Centripetal Search Algorithm

The left-hand/right-hand search algorithms and the center-left/center-right search algorithms are depth-first search algorithms; they contain no heuristic information and are blind search processes. The centripetal search algorithm introduces heuristic information about the target area and, combined with judgment of the running direction, switches among the left-hand/right-hand and center-left/center-right search algorithms so that the micromouse can find the target area more intelligently.
3.1 Algorithm Introduction
Since the maze is a 16 × 16 grid of 18 cm × 18 cm squares, its target area lies in the middle of the maze. The centripetal search algorithm divides the maze into four areas according to the symmetry of the target area. Based on the running direction of the micromouse and the characteristics of the left-hand/right-hand and center-left/center-right search algorithms, the algorithm fills each area so that, when several directions are available, the direction closer to the target area is preferred [11–13]; the maze area division and algorithm filling are shown in Fig. 6. In Fig. 6, R denotes the right-hand search algorithm, L the left-hand search algorithm, CR the center-right search algorithm, and CL the center-left search algorithm. At a fork, the four filling algorithms behave as follows:
(1) Left-hand search algorithm: turn left first, then go straight, and finally turn right;
(2) Right-hand search algorithm: turn right first, then go straight, and finally turn left;
(3) Center-left search algorithm: go straight first, then turn left, and finally turn right;
(4) Center-right search algorithm: go straight first, then turn right, and finally turn left.
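The four filling algorithms differ only in the preference order of relative directions, so they can be sketched as priority lists over the Table 2 direction codes (names and the dead-end fallback are illustrative assumptions):

```python
# Preference order of the four filling algorithms over relative directions
# (Table 2 codes: 0 front, 1 right, 3 left).
PRIORITIES = {
    "L":  [3, 0, 1],  # left-hand: left, then straight, then right
    "R":  [1, 0, 3],  # right-hand: right, then straight, then left
    "CL": [0, 3, 1],  # center-left: straight, then left, then right
    "CR": [0, 1, 3],  # center-right: straight, then right, then left
}

def choose_direction(algorithm: str, open_dirs: set) -> int:
    """Pick the first open relative direction in the algorithm's preference
    order; at a dead end, fall back to turning back (code 2)."""
    for d in PRIORITIES[algorithm]:
        if d in open_dirs:
            return d
    return 2

print(choose_direction("CL", {0, 3}))  # → 0 (center-left goes straight first)
```

Switching between the four algorithms then amounts to swapping the priority list, which is exactly what the centripetal strategy does per maze area.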
Research on Micromouse Maze Search Algorithm Based on Centripetal …
943
Fig. 6 Centripetal search algorithm guidance diagram
The centripetal search algorithm increases directivity toward the target area and improves the efficiency of the maze search. The algorithm is realized by judging the running direction of the micromouse and performing the corresponding algorithm conversion; no heavy calculation is required, which reduces the computational burden on the CPU. The algorithm flow chart is shown in Fig. 7.
3.2 Existing Problems and Optimization
The centripetal search algorithm first determines the maze area in which the micromouse is located and then, combined with the micromouse's running direction, selects the matching algorithm. This can produce the behavior shown in Fig. 8. When the micromouse reaches A, its running direction is up, so the algorithm is selected according to area ➂ and the center-left search algorithm is chosen. Under the center-left algorithm, at a fork the micromouse goes straight first, then turns left, and finally turns right [14–16]. Because there is a passage ahead at A, the micromouse goes straight and does not enter the target area G directly, which increases the maze search time. In response to this problem, He et al. [12] filled the maze cells adjacent to the target area with a direction-specific algorithm to achieve local optimization, but they optimized for only a few directions: when the micromouse runs opposite to a filled direction, the problem still occurs, so their method does not solve it completely. Because of these shortcomings of the centripetal search algorithm, this paper locally optimizes the 8 maze cells adjacent to the target region. As shown in Fig. 9, the coordinates (7,7), (7,8), (8,7), and (8,8) constitute the target area, and the adjacent maze cells from which the target area can be entered directly are (7,6), (8,6), (6,7), (6,8), (9,7), (9,8), (7,9), and (8,9). Since these eight maze cells are directly connected to
Fig. 7 Flow chart of the centripetal search algorithm
the target area and allow direct entry, they are re-filled with the algorithm: when the micromouse reaches one of these cells and an entrance exists, it enters the target area directly.
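The local optimization can be sketched with the coordinates given above; the wall-data interface (`open_neighbors`) is an assumed abstraction over the micromouse's stored maze data:

```python
# Target area and its eight adjacent cells (coordinates from Fig. 9).
TARGET = {(7, 7), (7, 8), (8, 7), (8, 8)}
ADJACENT = {(7, 6), (8, 6), (6, 7), (6, 8), (9, 7), (9, 8), (7, 9), (8, 9)}

def entry_step(cell, open_neighbors):
    """If the micromouse stands in an adjacent cell and a passage into the
    target area is open, return the target cell to enter; otherwise return
    None and fall back to the normal centripetal search. `open_neighbors`
    is the set of cells reachable from `cell` given the known wall data."""
    if cell in ADJACENT:
        for nxt in open_neighbors:
            if nxt in TARGET:
                return nxt
    return None

print(entry_step((7, 6), {(7, 5), (7, 7)}))  # → (7, 7): enter directly
```

This check overrides the area-based algorithm only in the eight re-filled cells, so the basic filling principles of the centripetal search are preserved everywhere else.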
Fig. 8 Search path of the centripetal search algorithm
Fig. 9 Improved centripetal search algorithm guidance diagram
Table 3 Test results before and after algorithm improvement

Maze number   Search steps before improvement   Search steps after improvement   Search efficiency improvement
1             75                                75                               0.0%
2             216                               160                              25.9%
3             179                               179                              0.0%
4             168                               106                              36.9%
5             205                               170                              17.1%
6             136                               136                              0.0%
7             173                               173                              0.0%
8             152                               137                              9.9%
9             115                               115                              0.0%
10            208                               208                              0.0%
3.3 Analysis of Experimental Results
To verify whether the improved centripetal search algorithm raises maze search efficiency, 10 maze charts used in domestic and international competitions were randomly selected, a standard IEEE micromouse was used to traverse them, and the number of search steps in each chart was counted before and after the improvement of the algorithm. The test results are shown in Table 3; the step count is measured up to the first arrival at the target area. Among the 10 randomly selected maze charts, the improved algorithm raised search efficiency on 4 of them, in one case by more than one-third. The algorithm analysis and experimental verification show that the improved algorithm never reduces search efficiency compared with the original, and it completely solves the problem that the micromouse fails to enter the target area when located in a maze cell from which direct entry is possible. The improved algorithm therefore achieves a clear gain in maze search efficiency.
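The "search efficiency improvement" column of Table 3 can be reproduced directly from the step counts as (before − after) / before:

```python
# Step counts from Table 3, before and after the algorithm improvement.
before = [75, 216, 179, 168, 205, 136, 173, 152, 115, 208]
after  = [75, 160, 179, 106, 170, 136, 173, 137, 115, 208]

# Relative reduction in search steps, in percent, rounded to one decimal.
improvements = [round((b - a) / b * 100, 1) for b, a in zip(before, after)]
print(improvements)  # maze 4 (index 3) improves by 36.9%
```

The computed values match the table: 25.9% for maze 2, 36.9% for maze 4, 17.1% for maze 5, and 9.9% for maze 8, with 0.0% elsewhere.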
4 Conclusion
To address the shortcomings of the centripetal search algorithm, and based on the characteristics of the eight adjacent maze cells from which the target area can be entered while preserving the basic filling principles of the centripetal search algorithm, this paper re-filled those eight maze cells, completely solving the problem that the micromouse cannot enter the target area when it is located in a maze cell connected
with the target area. The improved algorithm was verified on randomly selected domestic and foreign competition mazes: the number of search steps is reduced and the search efficiency improved. However, the algorithm adds locally enhanced heuristic information only to the eight maze cells adjacent to the target region, while the remaining maze cells still carry weak heuristic information about the target. Future work will therefore improve search efficiency further by strengthening the heuristic information of the remaining maze cells.
References
1. Xue, Y.: Research on the design and control method of computer mouse based on suction cup. Manag. Technol. SME 632(12), 185–186 (2020)
2. Long, H., Du, Q., Wang, N., et al.: An artificial intelligence maze of shuttle micromouse. Mod. Inf. Technol. 5(01), 168–170+174 (2021)
3. Liu, X., Gong, D.X.: A simulation study of A-star algorithms and depth-first search algorithm for search in mazes. Manufact. Autom. 33(11), 101–104 (2011)
4. Su, J.H., Huang, H.H., Lee, C.S.: Behavior model simulations of micromouse and its application in intelligent mobile robot education. In: 2013 CACS International Automatic Control Conference (CACS). IEEE (2013)
5. Su, J.H., Cai, X.H., Lee, C.S., et al.: The development of a half-size micromouse and its application in mobile robot education. In: 2016 International Conference on Advanced Robotics and Intelligent Systems (ARIS). IEEE (2016)
6. Wang, Z.Y., Xu, C., Yin, S.: Dynamic system modeling of micromouse using low-cost DC motor. Mech. Electr. Eng. Technol. 50(07), 97–100 (2021)
7. Zhu, Z.K., Han, Y.J., Gu, H.B.: Research and design of micromouse based on STM32 and DFS algorithm. Electron. Eng. Prod. World 29(06), 64–68 (2022)
8. Wang, L., Dong, S., Pan, H.Y.: Search algorithm based on probability distance of micromouse maze. Sci. Technol. Innov. Herald 13(03), 93–95 (2016)
9. Wang, Y.N., Jiang, H., Wang, B., et al.: Intelligent algorithm research and optimization of micromouse maze. Sci. Technol. Innov. Herald 12(32), 129–130+132 (2015)
10. Yuan, C.H., Lu, L., Wang, S., et al.: Research on micromouse walking maze fusion algorithm based on probability distance. Comput. Eng. 44(09), 9–14 (2018)
11. Fu, X.W., Zou, G.: Simulation design of maze-solving algorithm micromouse. J. Jilin Inst. Chem. Technol. 30(11), 85–88 (2013)
12. He, S.B., Sun, K.H.: Design and optimization of micro-mouse solving the maze algorithm based on central method. Comput. Syst. Appl. 21(09), 79–82 (2012)
13. Yuan, C.H., Liu, Q., Liu, X.M.: An oblique sprint algorithm with eight directions for micromouse contests. Electron. Electr. Eng. Technol. 118–121 (2018)
14. Ryu, H., Chung, W.K.: Local map-based exploration using a breadth-first search algorithm for mobile robots. Int. J. Precis. Eng. Manuf. 16(10), 2073–2080 (2015)
15. Xia, Y.: Analyzing and researching on maze-running micromouse algorithm. Coal Technol. 30(01), 194–196 (2011)
16. Kang, B., Liang, Y.L.: The optimization of central algorithm based on robot maze searching. J. Changchun Normal Univ. 30(08), 25–29 (2011)
Optimal Design of Distributed Fault Diagnosis for Electro-Hydraulic Control System of Mountain Micro Pile Drilling Rig Lei Jiang, Yongxing Zou, Baiyi Zhu, and Ruijun Liu
Abstract A mountainous micropile drilling rig is indispensable engineering equipment for constructing ultra-high-voltage transmission lines in mountainous areas. Under complex and changeable working conditions, fault diagnosis of its electro-hydraulic control system becomes more difficult and less accurate, which degrades the stability of the entire electro-hydraulic control system and the efficiency of equipment maintenance. This paper proposes a method to improve the fault diagnosis accuracy of the electro-hydraulic control system of micropile drilling rigs in mountainous areas. First, the structure of the mountain micropile drill and its electro-hydraulic control system is described. Then, a distributed fault diagnosis topology is constructed, and a fault diagnosis algorithm with segmented diagnosis decisions is designed. Finally, the reliability of the distributed fault diagnosis method is verified by actual fault injection experiments. Keywords Mountain micro pile drilling rig · Electro hydraulic control system · Fault diagnosis · Equipment maintenance · Distributed · Topological structure
L. Jiang State Grid Hunan Electric Power Company, Hunan, China Y. Zou Hunan Power Transmission and Transformation Engineering Co., Ltd., Hunan, China B. Zhu State Grid Hunan Province Construction Company, Hunan, China R. Liu (B) Jiangxi Dongrui Machinery Co., Ltd., Jiangxi, China e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 Y. S. Shmaliy and A. Nayyar (eds.), 7th International Conference on Computing, Control and Industrial Engineering (CCIE 2023), Lecture Notes in Electrical Engineering 1047, https://doi.org/10.1007/978-981-99-2730-2_87
949
950
L. Jiang et al.
1 Introduction
With the rapid development of construction machinery and electro-hydraulic technology [1–5], these technologies are widely used in electro-hydraulic robots, industrial production, mining machinery, and other fields [6–10], and they place high performance requirements on the electro-hydraulic control system of the mountain micropile drill [11, 12]. Mountain mini-pile drilling rigs are mainly used in extra-high-voltage transmission projects in mountainous areas, where they perform difficult, high-load tasks such as positioning drilling and transportation piling. Because these rigs often work in harsh mountainous environments, with frequent starting and stopping and repeated drilling and piling actions, the electro-hydraulic control system at their core is very prone to failure, so accurate and reliable fault diagnosis is critical for controlling and maintaining it. At present, the electro-hydraulic control system of the mountain mini-pile drilling rig is a nonlinear, strongly coupled multi-component control system: signals are collected through various sensors or signal acquisition circuits and input to the electronic control unit for logic judgment, and the execution signals or diagnosis results are output to other systems. Experts and scholars at home and abroad have conducted extensive research on fault diagnosis of the components of electro-hydraulic control systems. For example, Ref. [13] optimized the performance of the basis functions in the network through an improved particle swarm algorithm, improving the efficiency and accuracy of fault diagnosis for hydraulic drilling rigs. Ref.
[14] proposed a data-driven approach toward condition monitoring of a rock drill, robustly detecting very small changes in behavior with a minimum number of sensors. In the electro-hydraulic control system of the mountain micropile drill, the nonlinear coupling of multiple components, the complex internal structure, and the interaction of various diagnostic information reduce the accuracy and efficiency of fault diagnosis. To solve this problem, this paper first decouples the diagnosis components of the electro-hydraulic control system and divides the diagnosis units by component; second, it builds the diagnosis topology from the divided diagnosis units; finally, it uses fault injection to verify the designed distributed fault diagnosis topology, layering the diagnosis functions of components and reducing the coupling between them. The experimental results verify the effectiveness of the proposed fault diagnosis method.
Fig. 1 Structure of Electro Hydraulic Control System
2 Electro-Hydraulic Control System Fault Diagnosis
Fault diagnosis of the electro-hydraulic control system of a mountain mini-pile drilling rig requires fault monitoring of the key system components, in particular sensor circuit, rationality, and over-range diagnosis; actuator circuit and functional diagnosis; and electronic control unit communication diagnosis. Starting from the structure of the electro-hydraulic control system, this paper constructs a distributed fault diagnosis topology to realize fault diagnosis of the electro-hydraulic control system of a mountain mini-pile drilling rig.
2.1 Structure of Electro Hydraulic Control System
The electro-hydraulic control system of the mountain micro pile drill is mainly composed of electro-hydraulic components such as valve bodies, smart liquid level sensors, travel switches, pressure sensors, displacement sensors, mechanical arms, hydraulic pumps, hydraulic cylinders, and variable displacement pumps, as shown in Fig. 1. After receiving and logically interpreting the operating instructions from the driver in the cab, the entire electro-hydraulic control system drives the actuating mechanisms of the mountain micro pile drill. Fault diagnosis of the system's components covers the sensor circuits, the actuator functions, and the performance and communication of the electronic control unit.
2.2 Fault Diagnosis of Electro Hydraulic Control System The fault diagnosis of the electro-hydraulic control system of mountain micropile drill is a process of various fault detection for a highly coupled electro-hydraulic control system with multiple components. In the fault detection design, layered fault
Fig. 2 Distributed fault diagnosis topology
diagnosis is required. By assigning different fault diagnosis units to different fault diagnosis tasks, the efficiency of fault diagnosis can be improved rapidly and the completeness of fault diagnosis for each component ensured. This paper adopts the distributed fault diagnosis topology shown in Fig. 2, which is divided into a master fault diagnosis unit, primary fault diagnosis units, secondary fault diagnosis units, and smart devices. The master fault diagnosis unit arbitrates the diagnosis results of the other units, lights the fault indicator, and stores key fault information and fault codes. The primary fault diagnosis unit is responsible for fault diagnosis of the main unit components, storing fault codes, and freezing fault information. The secondary fault diagnosis unit is mainly responsible for fault monitoring of the secondary components, freezing fault information, and reporting the fault codes of its own unit and of the smart devices. Smart devices are components with small fault diagnosis tasks and little input and output information to monitor, such as smart sensors or actuators carrying an electronic control unit chip. Smart devices only monitor faults of their own components and do not take part in diagnostic service tasks such as storing fault information or lighting the fault indicator; their fault codes and frozen fault information are reported by the primary or secondary fault diagnosis unit.
When the master, primary, and secondary fault diagnosis units and the smart devices monitor their components through the input information, they output a fault-confirmed or fault-recovered state after debounce (anti-shake) processing and pass the fault level or fault severity to the fault reaction module for handling. In this paper, five fault levels are defined for the electro-hydraulic control system of the mountain micropile drill, and a different fault reaction mechanism is implemented for each fault level or severity so that the entire electro-hydraulic control system operates in a safe and stable state. The fault reaction measures corresponding to each fault level are shown in Table 1.
2.3 Sensor/Actuator Circuit Diagnosis The sensor of the electro-hydraulic control system of the mountain micropile drill is mainly responsible for collecting the pressure, displacement, angle, flow, and other
Table 1 Failure response mechanism

Fault Level   Fault Severity Level   Fault Reaction Mechanism
FL1           SL1                    Just set DTC
FL2           SL2                    Limit system output power
FL3           SL3                    Shut off the system switch with a delay of 75 s
FL4           SL4                    Shut off the system switch with a delay of 15 s
FL5           SL5                    Immediately shut off the system switch
key information of the whole system, and is therefore a very critical component. A sensor failure easily causes the entire electro-hydraulic control system to receive wrong or invalid signal values, to output wrong driving instructions, or to stop working normally [15]. The actuator is mainly responsible for driving the drill arm and other devices of the micropile drill; a circuit failure of the actuator prevents the drill arm device from working. The circuit diagnosis of sensors and actuators adopts a threshold segmentation strategy, which avoids false alarms between short-to-ground/short-to-power faults, out-of-range faults, and rationality faults. When the collected voltage signal lies within a configured fault threshold band, the fault is confirmed after the corresponding fault filtering time, and the fault state and fault level are output. Circuit fault monitoring must also meet its enabling conditions: when they are met, monitoring enters the filtering and debounce stage; when they are not, no monitoring is carried out. Fault recovery likewise requires the enabling conditions to be met: if, while they hold, the transient fault stays in the fault-free state for the fault recovery time, the circuit fault recovers to the fault-free state, as shown in Fig. 3. Fig. 3 Sensor circuit fault detection threshold
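The threshold segmentation with debounce filtering described above can be sketched as follows; the voltage bands and the sample-count filter are calibratable placeholders, not values from the paper:

```python
# Non-overlapping voltage bands of the sensor signal range (illustrative values).
BANDS = [
    (0.0, 0.3, "short_to_ground"),
    (0.3, 0.5, "out_of_range_low"),
    (4.5, 4.7, "out_of_range_high"),
    (4.7, 5.0, "short_to_power"),
]

def classify(voltage: float):
    """Map a sampled voltage to a circuit fault band, or None if valid."""
    for lo, hi, name in BANDS:
        if lo <= voltage < hi:
            return name
    return None

def confirm(samples, filter_count=3):
    """Confirm a fault only after the same band is seen for `filter_count`
    consecutive samples — the filtering/debounce (anti-shake) stage."""
    run, last = 0, None
    for v in samples:
        fault = classify(v)
        run = run + 1 if (fault is not None and fault == last) else (1 if fault else 0)
        last = fault
        if run >= filter_count:
            return fault
    return None

print(confirm([0.1, 0.1, 0.1]))  # → short_to_ground
```

Because the bands are disjoint, a reading can only confirm one fault type at a time, which is how the segmentation avoids false alarms between short-circuit, out-of-range, and rationality faults.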
Fig. 4 Actuator functional fault detection threshold
2.4 Actuator Functional Diagnosis
In the electro-hydraulic control system of the mountain micropile drill, the actuator must operate stably; functional diagnosis responds to actuator failures to protect the actuator and keep the entire electro-hydraulic control system running stably. Since the mountain micropile drill often works under complex and changeable conditions, actuator functional diagnosis is performed in stages, mainly self-test fault diagnosis in the actuator initialization stage and functional feedback verification in the operation stage. The initial self-test injects a PWM signal capable of driving the actuator, reads back the actuator's feedback signal, and compares it with a calibratable fault threshold; if the feedback exceeds or falls below the threshold band, the fault is confirmed after the fault confirmation time. Functional feedback verification in the operation stage continuously monitors the speed, current, or pressure signals fed back by the actuator; once the enabling conditions for this monitoring are met, the feedback is compared with the actuator-stuck fault threshold band and then filtered and debounced. If an actuator functional fault occurs, the fault reaction is carried out promptly according to the configured fault level, as shown in Fig. 4.
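The two diagnosis stages can be sketched as two independent checks; the duty-cycle band and tolerance are illustrative calibration values, and the feedback interface is an assumption:

```python
# Stage 1: initialization self-test. Inject a test PWM wave and verify that
# the read-back feedback duty cycle stays inside a calibratable band.
def self_test(feedback_duty: float, lo: float = 0.2, hi: float = 0.8) -> bool:
    return lo <= feedback_duty <= hi

# Stage 2: runtime functional feedback verification. The actuator is judged
# stuck if the fed-back speed/current/pressure signal never follows the
# commanded value within the tolerance over the observation window.
def stuck_check(feedback_values, command: float, tol: float = 0.05) -> bool:
    return all(abs(v - command) > tol for v in feedback_values)

print(self_test(0.5), stuck_check([0.0, 0.01], command=0.5))  # → True True
```

A positive `stuck_check` result would then pass through the same filtering and debounce stage as the circuit faults before a fault level is assigned.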
3 Fault Diagnosis Experiment of Electro Hydraulic Control System
To verify that the designed distributed fault diagnosis method can rapidly improve the fault diagnosis efficiency and accuracy of the electro-hydraulic control system of the mountain micropile drill, an experimental environment for fault injection into the electro-hydraulic control system was constructed. Using a break-out box (BOB), an arbitrary waveform generator, a stabilized voltage power supply, a sliding rheostat, and other fault
Table 2 E-H control system fault injection
Fault Event               Fault Confirmed Time   Fault Healing Time   Fault Reaction Time
Pressure Sensor S2G       500 ms                 500 ms               10.3 ms
Pressure Sensor S2P       500 ms                 500 ms               10 ms
Pressure Sensor OC        750 ms                 750 ms               100 ms
Pressure Sensor ORH       1000 ms                1000 ms              100 ms
Pressure Sensor ORL       1000 ms                1000 ms              100 ms
Pressure Sensor RAH       500 ms                 500 ms               20 ms
Hydraulic Pump Selftest   100 ms                 100 ms               10 ms
Hydraulic Pump Stuck      200 ms                 500 ms               10 ms
Control Unit Bus Off      50 ms                  100 ms               5 ms
signal simulation devices, different fault signals are injected, and the fault information and fault reactions fed back by the electro-hydraulic control system of the mountain micropile drill are observed. The test results are shown in Table 2.
4 Conclusion
The fault diagnosis of the electro-hydraulic control system of the mountain micro pile drill is essentially fault monitoring of each coupled component after decoupling. The designed distributed fault diagnosis topology clarifies the diagnosis tasks of the different components and improves the fault diagnosis efficiency of the electro-hydraulic control system. The structure of the electro-hydraulic control system is divided into a master fault diagnosis unit, primary fault diagnosis units, secondary fault diagnosis units, and smart devices, which achieve circuit fault monitoring for sensors/actuators, functional monitoring for actuators, and communication fault monitoring for electronic control units, finally realizing complete fault diagnosis of the entire electro-hydraulic control system.
Acknowledgements Relevant Project Contract No: SGTYHT/21-JS-226. Thanks for the support of Hunan Power Transmission & Transformation Engineering Co., Ltd.
References
1. Wang, S., Chen, Z., Li, J., et al.: Flexible motion framework of the six wheel-legged robot: experimental results. IEEE/ASME Trans. Mechatron. 27(4), 2246–2257 (2021)
2. Li, J., Wang, J., Peng, H., et al.: Neural fuzzy approximation enhanced autonomous tracking control of the wheel-legged robot under uncertain physical interaction. Neurocomputing 410, 342–353 (2020)
3. Chen, Z., Wang, S., Wang, J., et al.: Control strategy of stable walking for a hexapod wheel-legged robot. ISA Trans. 108, 367–380 (2021)
4. Li, J., Wang, J., Peng, H., et al.: Fuzzy-torque approximation-enhanced sliding mode control for lateral stability of mobile robot. IEEE Trans. Syst. Man Cybern. Syst. 52(4), 2491–2500 (2021)
5. Chen, Z., Wang, S., Wang, J., et al.: Attitude stability control for multi-agent six wheel-legged robot. IFAC-PapersOnLine 53(2), 9636–9641 (2020)
6. Li, J., Qin, H., Wang, J., et al.: OpenStreetMap-based autonomous navigation for the four wheel-legged robot via 3D-Lidar and CCD camera. IEEE Trans. Industr. Electron. 69(3), 2708–2717 (2021)
7. Chen, Z., Li, J., Wang, J., et al.: Towards hybrid gait obstacle avoidance for a six wheel-legged robot with payload transportation. J. Intell. Rob. Syst. 102(3), 1–21 (2021)
8. Xue, J., Wang, S., Wang, J., et al.: Stewart-inspired vibration isolation control for a wheel-legged robot via variable target force impedance control. J. Intell. Rob. Syst. (2022). https://doi.org/10.1007/s10846-022-01757-3
9. Chen, Z., Li, J., Wang, S., et al.: Flexible gait transition for six wheel-legged robot with unstructured terrains. Robot. Auton. Syst. 150, 103989 (2022)
10. Li, J., Wang, J., Wang, S., et al.: Parallel structure of six wheel-legged robot trajectory tracking control with heavy payload under uncertain physical interaction. Assembly Automation (2020)
11. Li, Q.Y., Sun, H., Chen, X.: Reliability analysis of hydraulic system of anchor drilling rigs based on fuzzy fault tree. J. Donghua Univ. (English Edition) 36(03), 16–22 (2019)
12. Wang, Z., Wei, P.: The feasibility of applying hydraulic energy-storing service rigs to offshore platform. China Offshore Platform (2000)
13. Sun, L.F., Feng, L.: Application of improved PSO-based RBF neural network for fault diagnosis of hydraulic drilling rig. Chinese Hydraulics & Pneumatics (2014)
14. Jakobsson, E., Frisk, E., Krysander, M., et al.: Fault identification in hydraulic rock drills from indirect measurement during operation. IFAC-PapersOnLine (2021)
15. Rodrigo, P.B., Leonel, A.G., Frank, S.T., et al.: Architectures of bulk built-in current sensors for detection of transient faults in integrated circuits. Microelectron. J. 71 (2018)
The Crawler Strategy Based on Adaptive Immune Optimization Yang Liu and Zhaochun Sun
Abstract To solve the problem that a link priority determined by many factors through fixed weights is difficult to adapt to ever-changing web pages, a crawler strategy based on adaptive immune optimization is proposed. By combining an improved ant colony algorithm with an adaptive immune optimization algorithm, this method achieves comprehensive and efficient crawling of the target topic. First, a retrospective ant colony algorithm combining pheromone updating with a positive feedback mechanism is used to crawl web pages to a limited depth in a single cycle. Second, the factors that determine link priority are abstracted as objective functions, and the adaptive immune optimization method is applied to optimize the objective functions related to the link's topic, obtaining an optimal solution with better convergence and distribution. Finally, the algorithm is verified on the PyCharm experiment platform, and the results show that it achieves high crawling accuracy and efficiency. Keywords Crawler strategy · Adaptive immune optimization · Ant colony algorithm
1 Introduction
With the explosive growth of information on the Internet, obtaining effective information is becoming ever more difficult. Search engines have become an important entrance for obtaining information because of their efficient retrieval ability, and web crawlers have become the data source of search engines thanks to their efficient information traversal and fast retrieval. However, with the development of
Y. Liu (B) China Communications Information Technology Group Co., Ltd., Beijing 101399, China e-mail: [email protected]
Z. Sun China Communications Construction, Beijing 100088, China e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 Y. S. Shmaliy and A. Nayyar (eds.), 7th International Conference on Computing, Control and Industrial Engineering (CCIE 2023), Lecture Notes in Electrical Engineering 1047, https://doi.org/10.1007/978-981-99-2730-2_88
957
958
Y. Liu and Z. Sun
personalized user requirements, how to satisfy users' real-time and personalized information needs has become a difficult problem to solve. Many scholars have therefore studied crawler strategies. Among strategies based on judging text content, De et al. [1] proposed the Fish-Search algorithm, which simulates fish foraging: relevance is computed by binary classification, and when a topic-related web page is found the fish (crawler) follows the link, otherwise it dies. The Shark-Search algorithm [2] improves the relevance calculation of Fish-Search: based on the topic similarity between the text near a link and the parent web page, a continuous function computes relevance, so the textual relevance of links is obtained more accurately. Menczer et al. [3] proposed the Best-First algorithm, which computes vector-space relevance and puts the highest-priority page into the crawl queue, greatly reducing computation and improving efficiency. However, these text-content-based crawling strategies consider only topic relevance and ignore the impact of links between web pages on the focused crawler, which easily causes "myopia" in crawling. For these problems, web crawling strategies based on the hyperlink graph have received wide attention. Among them, Page et al. proposed the PageRank algorithm, which holds that a web page linked by many pages is more important, and that a page's importance increases when it is linked by pages of higher importance.
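The Best-First strategy described above amounts to a relevance-keyed priority queue; a minimal sketch (the relevance function is a stand-in for the vector-space similarity):

```python
import heapq

def crawl(seed_links, relevance, limit=3):
    """Best-First frontier: always pop the link with the highest relevance.
    heapq is a min-heap, so relevance scores are negated as priorities."""
    frontier = [(-relevance(u), u) for u in seed_links]
    heapq.heapify(frontier)
    visited = []
    while frontier and len(visited) < limit:
        _, url = heapq.heappop(frontier)  # most relevant link first
        visited.append(url)
    return visited

scores = {"a": 0.9, "b": 0.2, "c": 0.7}
print(crawl(scores, scores.get))  # → ['a', 'c', 'b']
```

A real crawler would also push each fetched page's outgoing links back into the frontier; the sketch only shows the ordering that gives Best-First its efficiency.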
In addition, the HITS algorithm [4] assigns hub and authority attributes to web pages to jointly measure their importance; the two attributes measure the quality of the site and of its substantive content, respectively. However, HITS pays too much attention to page authority and not enough to topic relevance, so "topic drift" easily appears, many irrelevant pages are crawled, and crawler efficiency suffers. To address this, Chen Qi et al. [5] combined HITS with Shark-Search, jointly using web-content evaluation and link-relationship analysis to judge the quality of downloaded URLs. Qiu Lei et al. [6] combined Shark-Search with PageRank: Shark-Search scores the pages while PageRank weights the URL links between pages to define page importance, compensating for the defects of both algorithms at once. In addition, Liu Shaotao et al. analyzed the Best-First algorithm, introduced HITS to reflect the value of links, and proposed a new link-selection strategy that outperforms the plain Best-First algorithm. However, these methods treat link evaluation as a single-objective problem, which adapts poorly to the diversity of web pages, has weak global search ability, and easily falls into local optima. Intelligent algorithms, with their better global search and convergence for specific targets, provide a new direction for improving the efficiency of focused crawlers. Xiao Jingjie et al. [6] designed a topic-crawler strategy based on the grey wolf algorithm, where a solution to the
priority problem of the crawler is proposed for global crawling, with good results. Wei Yan et al. [7] proposed a focused crawler strategy based on an improved genetic algorithm, which optimizes user browsing behavior, expands the crawler's search scope and improves the crawling effect. However, the above strategies still treat link priority as a single-objective problem: priority depends heavily on fixed weights, which are often ill-suited to the diversity of web pages, so global search ability is weak and crawling drifts off topic. Therefore, multi-objective optimization algorithms have received extensive attention. Liu Chengjun [8] proposed a topic-crawling optimization strategy based on an improved ant colony algorithm and the improved non-dominated sorting genetic algorithm (NSGA-II) to achieve accurate, comprehensive and efficient crawling of the target topic. However, as a random search algorithm it suffers from repeated runs, slow convergence and poorly distributed solutions [9]. For this reason, this paper applies an adaptive hybrid evolutionary immune optimization algorithm (AHEIA), which has better distribution and convergence, to optimize the factors that affect link topic relevance and to obtain higher crawler accuracy and efficiency.
2 The Fundamentals of Focused Crawler

A focused crawler is an intelligent crawler aimed at a specific topic; its overall process is shown in Fig. 1. Starting from seed pages, it crawls page content and evaluates the crawling priority of links by calculating topic relevance. A topic-relevance threshold is set: if a page's topic relevance is greater than or equal to the threshold, the page is stored; otherwise the page is judged irrelevant to the topic [8]. Four main factors determine the topic relevance of a link: (1) the topic relevance of the linked page; (2) the topic relevance of the link's anchor text; (3) the topic relevance of the anchor-text context; (4) the position of the link within the web page. The first three are topic similarities between texts. For the fourth, the position in the page is divided into the title section, block-title section, body-text section and other sections, with factor values 0.2, 0.15, 0.1 and 0.05, respectively.

Fig. 1 The flowchart of the focused crawler

A multi-objective optimization model is established over these four factors affecting link topic relevance, with the objective function defined as follows:

$$\max F(l) = [\, f_1(l), f_2(l), f_3(l), f_4(l) \,]^T \quad (1)$$

$$f_1(l) = B_1(l) \quad (2)$$

$$f_2(l) = B_2(l) \quad (3)$$

$$f_3(l) = B_3(l) \quad (4)$$

$$f_4(l) = B_4(l) \quad (5)$$

$$\text{s.t.}\; B_2(l) > \theta \quad (6)$$

Here max denotes seeking the optimal solutions of the multi-objective problem, l is a link, F(l) is the objective vector, and f_1(l), f_2(l), f_3(l), f_4(l) are the four objective functions; B_1(l), B_2(l), B_3(l), B_4(l) are the topic relevance of the linked page, the topic relevance of the anchor text, the topic relevance of the anchor-text context, and the location factor determined by the link's position in the page, respectively. θ is the threshold on the anchor-text relevance.
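A minimal sketch of how the objective vector of Eqs. (1)-(6) could be evaluated for a candidate link. The link fields, the toy word-overlap relevance function and the threshold value are illustrative assumptions, not the paper's exact implementation.

```python
# Position factors as given in the text: title, block title, body text, other.
POSITION_FACTOR = {"title": 0.2, "block_title": 0.15, "body": 0.1, "other": 0.05}

def objective_vector(link, topic_sim, theta=0.3):
    """Return F(l) = [f1, f2, f3, f4] per Eqs. (1)-(5), or None if the
    anchor-text constraint B2(l) > theta of Eq. (6) is violated.
    topic_sim(text) is any text-to-topic relevance function in [0, 1]."""
    b1 = topic_sim(link["page_text"])        # f1: relevance of the linked page
    b2 = topic_sim(link["anchor_text"])      # f2: relevance of the anchor text
    b3 = topic_sim(link["anchor_context"])   # f3: relevance of the anchor context
    b4 = POSITION_FACTOR[link["position"]]   # f4: position factor
    if b2 <= theta:                          # Eq. (6): anchor text must pass theta
        return None
    return [b1, b2, b3, b4]

# Toy relevance function for illustration: fraction of topic words present.
TOPIC = {"crawler", "search", "web"}
sim = lambda text: len(TOPIC & set(text.lower().split())) / len(TOPIC)

link = {"page_text": "a web crawler for search engines",
        "anchor_text": "focused web crawler",
        "anchor_context": "this crawler uses search heuristics",
        "position": "body"}
fv = objective_vector(link, sim)
```

A link whose anchor text fails the threshold of Eq. (6) is simply discarded rather than scored.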
3 The Retroactive Updating Method of Pheromone

In this paper an ant colony algorithm [10] is used to crawl web-page content. To avoid the "premature" phenomenon of the ant colony that can be caused by cycles of unlimited depth, a depth-limited crawling method is used within a single round, and the immune multi-objective optimization algorithm [11] provides the optimal solution for the next round after the current one ends. To solve the "myopia" problem and improve convergence and diversity, a method combining pheromone updating with a positive-feedback mechanism is adopted, named Backtracking-ACO (BT-ACO) [8]. In this algorithm, the topic relevance of the web page reached in the current cycle determines a gain or penalty pheromone applied along all edges of the traversed path: the closer an edge is to the target page, the higher the legacy pheromone, and vice versa. The path of an ant in a single cycle is shown in Fig. 2, where pages n1 and n2 lie on the forward traceback path and k is the number of hops from page m back to n1. Given the topic relevance of web page m, the backtracking pheromone update on the edge between n1 and n2 is as follows:
Fig. 2 The single round path of the ant cycle
$$\Delta\tau^{d}_{n_1 n_2}(t, m) = \begin{cases} \dfrac{P_m}{k \times \alpha}, & \text{case 1} \\[6pt] \dfrac{(1-\sigma)(P_m - \theta)}{\sigma \times \theta} \times \dfrac{\tau_{n_1 n_2}(t)}{k \times \beta}, & \text{case 2} \\[6pt] 0, & \text{case 3} \end{cases} \quad (7)$$

$$\begin{aligned} \text{case 1}&: (n_1, n_2) \in R_d(t) \wedge P_m \ge \theta \\ \text{case 2}&: (n_1, n_2) \in R_d(t) \wedge P_m < \theta \\ \text{case 3}&: \text{otherwise} \end{aligned} \quad (8)$$
where P_m is the topic relevance of web page m, R_d(t) is the path of ant d, θ is the topic-relevance threshold, α and β are constants, and σ is the control parameter for the degree of pheromone attenuation. When P_m < θ, that is, when the target page is irrelevant to the topic, the corresponding update amount in Eqs. (7) and (8) ensures that the pheromone remains positive while weakening the forward traceback path more strongly than normal attenuation.
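Eqs. (7)-(8) can be sketched as a single update function for one edge. The default values chosen below for α, β, σ and θ are illustrative assumptions; the paper does not state its constants.

```python
def backtracking_pheromone_update(tau, p_m, k, on_path, alpha=1.0, beta=1.0,
                                  sigma=0.5, theta=0.3):
    """Pheromone increment for edge (n1, n2) per Eqs. (7)-(8).

    tau     : current pheromone on the edge, tau_{n1 n2}(t)
    p_m     : topic relevance P_m of the target page m
    k       : hop count from page m back to n1
    on_path : True if the edge lies on ant d's traceback path R_d(t)
    """
    if not on_path:
        return 0.0                       # case 3: edge not on the path
    if p_m >= theta:
        return p_m / (k * alpha)         # case 1: relevant page, gain pheromone
    # case 2: irrelevant page, penalty pheromone; (P_m - theta) < 0 gives a
    # negative increment whose magnitude grows as p_m falls below theta.
    return (1 - sigma) * (p_m - theta) / (sigma * theta) * tau / (k * beta)
```

Note that the case-2 increment is negative (a penalty) while the existing pheromone tau remains positive, matching the text's description.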
4 The Design of Immune Optimization Algorithm

The hybrid immune optimization algorithm based on adaptive uniform distribution [11] is applied in this paper; it is composed of clustering, mapping, distribution-judgment and distribution-enhancement modules, and a local mutation strategy is used to supplement insufficient individuals. The algorithm takes the four link topic-relevance factors as the optimization objectives, with the objective function

$$\min\, [\, f_1(M(t)), f_2(M(t)), f_3(M(t)), f_4(M(t)) \,] \quad (9)$$
where the corresponding decision space is M(t) = [M_1(t), M_2(t), M_3(t), M_4(t)]; M_1(t) = [M_11(t), M_12(t), ..., M_1Np(t)], M_2(t) = [M_21(t), M_22(t), ..., M_2Np(t)], M_3(t) = [M_31(t), M_32(t), ..., M_3Np(t)] and M_4(t) = [M_41(t), M_42(t), ..., M_4Np(t)] are the topic relevance of the linked page, the topic relevance of the anchor text, the topic relevance of the anchor-text context, and the position parameter of the link in the page, respectively; N_p is the population size, and the initial value of M(t) is randomly generated. Moreover, the optimal solution selected from the decision space is given
by M*(t) = [M_1*(t), M_2*(t), M_3*(t), M_4*(t)]^T, where M_1*(t), M_2*(t), M_3*(t), M_4*(t) are the selected values. Since the individuals of the population are unevenly distributed during iteration, it is difficult to generate distinct Pareto solutions in different regions of the Pareto front, and population diversity drops significantly; judging and improving the distribution of individuals during iteration is therefore crucial. First, individuals are mapped to the corresponding hyperplane in the objective space [12]. Second, distribution-judgment and distribution-enhancement modules are designed. Before the enhancement module is activated, the distribution must be judged during the iterative process, so a distribution-judgment module is added: the objective space, that is, the population P_t' mapped onto the hyperplane H, is divided evenly, and the same number of individuals from different clusters is selected from each interval. The distribution is then judged by the number of clusters in each uniform interval; when the number of clusters is less than the threshold, the distribution in that interval is unsatisfactory:
$$\begin{cases} r_i(t) < \theta(t), & \text{case 1} \\ r_i(t) \ge \theta(t), & \text{case 2} \end{cases} \quad (10)$$

Case 1: the distribution-enhancement module is activated. Case 2: the individuals in the interval are sorted by crowding distance from large to small and the first θ(t) individuals are selected. Here r_i(t) is the number of clusters in the ith interval D_i at the tth iteration, and θ(t) is the threshold at the tth iteration. Since the distribution of individuals changes continuously during iteration, the threshold is adapted using the spread measure

$$P_S(t+1) = \sqrt{\frac{1}{N-1} \sum_{i=1}^{N} \left( \bar{Q}(t+1) - Q_i(t+1) \right)^2} \quad (11)$$

where P_S(t+1) is the distribution information of the population at the (t+1)th iteration, N is the population size, Q_i(t+1) is the minimum Manhattan distance between the ith antibody and the other individuals, and \bar{Q}(t+1) is the average of the minimum Manhattan distances of all antibodies. The threshold is adjusted as follows:

$$\varphi(t+1) = \begin{cases} \varphi(t)+1, & P_S(t+1) < P_S(t) \\ \varphi(t)-1, & P_S(t+1) > P_S(t) \\ \varphi(t), & P_S(t+1) = P_S(t) \end{cases} \quad (12)$$

$$\varphi(1) = \mathrm{ceil}\!\left(\frac{N}{K}\right) \quad (13)$$

$$\min \varphi(t) = \max r(t) \quad (14)$$
where φ(t+1) is the threshold at the (t+1)th iteration, φ(1) is the initial threshold, min φ(t) is the lower limit of φ(t), and K is the number of clusters. Because the distribution is poor at the beginning, and to improve convergence speed, the quotient of the population size and the number of clusters, rounded up, is taken as the initial threshold. In addition, to ensure that individuals can be selected from every cluster, the maximum per-interval number of clusters is used as the lower limit of the threshold. However, in some intervals the number of clusters is smaller than in others, and some intervals are even empty, which makes individual selection difficult. Therefore, a distribution-enhancement module [11] is designed so that the individuals maintain a good distribution and the same number of individuals, belonging to different clusters, is selected in each interval:

$$\begin{cases} NP_{ij} \ge \varphi(t), & \text{Case 1} \\ 0 < NP_{ij} < \varphi(t), & \text{Case 2} \\ NP_{ij} = 0, & \text{Case 3} \end{cases} \quad (15)$$

Case 1: the first φ(t) individuals are selected by crowding distance from large to small. Case 2: all the individuals are selected, and the remaining φ(t) − NP_ij individuals are obtained by mutating the individual with the largest crowding distance. Case 3: the individuals are obtained by mutating the two individuals closest to the empty interval. To handle intervals with insufficient or no individuals, the extremal-optimization mutation strategy [13] is adopted in this paper: when individuals are insufficient, the individual with the largest crowding distance in the interval is mutated; when the interval is empty, the two individuals closest to it are selected, and two local mutation strategies are applied. The first strategy mutates a single decision variable; it can only search in a small range and therefore has strong local adjustment ability. The second mutates every decision variable, which avoids falling into local optima and improves search speed.
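The distribution judgment of Eqs. (11)-(14) can be sketched directly from the formulas: compute the spread P_S over minimum Manhattan distances, then raise or lower the threshold accordingly. The example population below is an illustrative assumption.

```python
import math

def distribution_spread(objs):
    """P_S of Eq. (11): standard deviation of each antibody's minimum
    Manhattan distance to the rest of the population.
    objs: list of objective vectors (one per antibody)."""
    n = len(objs)
    q = [min(sum(abs(a - b) for a, b in zip(objs[i], objs[j]))
             for j in range(n) if j != i)
         for i in range(n)]                       # Q_i: min Manhattan distances
    q_bar = sum(q) / n                            # average of the Q_i
    return math.sqrt(sum((q_bar - qi) ** 2 for qi in q) / (n - 1))

def update_threshold(phi, ps_new, ps_old, max_clusters):
    """Eq. (12) with the lower bound of Eq. (14): raise phi when the spread
    shrinks, lower it when the spread grows, and never drop below the
    largest per-interval cluster count."""
    if ps_new < ps_old:
        phi += 1
    elif ps_new > ps_old:
        phi -= 1
    return max(phi, max_clusters)
```

The initial threshold of Eq. (13) would simply be `math.ceil(population_size / num_clusters)`.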
5 Simulation and Result Analysis

5.1 Experimental Design

All experiments are carried out on the PyCharm platform, programmed in Python 2.7 under Microsoft Windows 8, on a computer with a 3.6 GHz processor and 8 GB of RAM. Simulated binary crossover and polynomial mutation are used; the crossover parameter η_c and the mutation parameter η_m are both set to 20, and the crossover and mutation probabilities are 0.9 and 1/N_d, respectively.
Table 1 The accuracy rate of different strategies when N web pages are downloaded

Strategy                           N = 10,000   N = 20,000   N = 30,000   N = 40,000   N = 50,000
ACO-MOIA                           0.675        0.728        0.751        0.769        0.793
Shark-Search+PageRank [14]         0.646        0.639        0.617        0.627        0.619
Best-First+HITS [15]               0.687        0.681        0.691        0.686        0.679
IGA [7]                            0.663        0.692        0.706        0.727        0.742
ACO+NSGA-II [8]                    0.657        0.688        0.701        0.723        0.734
BT-ACO+NSGA-II [8]                 0.664        0.708        0.723        0.746        0.771
ACO+the improved NSGA-II [8]       0.659        0.696        0.713        0.737        0.756
BT-ACO+the improved NSGA-II [8]    0.668        0.715        0.734        0.759        0.779
Here N_d is the number of decision variables, the shape parameter N_q is set to 11, the number of web pages per query result is N_et = 20, the number of tagged keywords is M = 3 (the keywords do not all belong to the topic itself), and the maximum number of iterations is 5. In addition, the precision (Eq. (16)) and the crawling time (Eq. (17)) are used as the basis for proving effectiveness.
$$P_r = \frac{|A|}{|B|} \quad (16)$$

$$T = t_{|A|=N} \quad (17)$$

where P_r is the ratio of the number |A| of topic-relevant pages to the number |B| of all pages processed by the crawler, and T is the time required to obtain N web pages judged topic-relevant, N denoting the number set at the detection points.
5.2 Experimental Result

To verify effectiveness, seven algorithms are compared in this paper: the Shark-Search algorithm combined with PageRank (Shark-Search+PageRank) [14], the combination of Best-First and HITS (Best-First+HITS) [15], the improved genetic algorithm (IGA) [7], the ant colony algorithm with NSGA-II (ACO+NSGA-II) [8], the improved ant colony algorithm with NSGA-II (BT-ACO+NSGA-II) [8], the ant colony algorithm with the improved NSGA-II (ACO+improved NSGA-II) [8], and the improved ant colony algorithm with the improved NSGA-II (BT-ACO+improved NSGA-II) [8]. The subject of this experiment is
“the chip, mobile phone, Huawei, Ren Zhengfei”. Five runs were averaged for each strategy. With the number of downloaded web pages N set to 10,000, 20,000, 30,000, 40,000 and 50,000, precision experiments were carried out on the proposed ACO-MOIA algorithm [8], and the results for the seven compared algorithms are shown in Table 1. ACO-MOIA achieves the highest precision for every number of web pages, reaching 0.793 at 50,000 pages, and its precision keeps improving as the number of pages grows. To compare the precision of the strategies visually, line charts of the eight algorithms are shown in Fig. 3: the proposed ACO-MOIA strategy has the highest precision, and, except for Best-First+HITS [15] and Shark-Search+PageRank [14], the precision of every strategy increases with the number of pages. In addition, to verify the computational efficiency of ACO-MOIA, the average time consumption (in seconds) of the eight strategies is reported in Table 2. The time consumption of the proposed ACO-MOIA algorithm is close to Best-First+HITS at N = 10,000 and far less than the other comparison strategies in the remaining cases, so it has high computational efficiency.
Fig. 3 Accuracy rates of crawlers under the different strategies
Table 2 The average time consumption (s) of different strategies when downloading N web pages

Strategy                           N = 10,000   N = 20,000   N = 30,000   N = 40,000   N = 50,000
ACO-MOIA                           13,867.6     26,352.2     40,011       51,826.7     62,539.8
Shark-Search+PageRank [14]         13,956.2     30,338.2     47,454       64,785.8     82,239
Best-First+HITS [15]               13,522.4     30,173       46,474.2     63,144       80,625
IGA [7]                            15,035       29,894       43,917.4     58,994       70,136.2
ACO+NSGA-II [8]                    14,721.2     29,305.6     43,778       59,055.2     71,572
BT-ACO+NSGA-II [8]                 15,574       30,474.6     44,152.4     57,179.2     68,411
ACO+the improved NSGA-II [8]       14,956.8     29,172.2     43,125       58,622       70,903.2
BT-ACO+the improved NSGA-II [8]    15,723       30,578.4     44,057       56,726.8     66,377.2
6 Conclusion

To improve crawler accuracy and efficiency, a focused crawler algorithm based on immune optimization is proposed that combines the ant colony algorithm with an immune optimization algorithm. The experiments show that the strategy effectively improves precision and crawling efficiency, and the conclusions are as follows: (1) Pheromone updating and a positive-feedback mechanism are combined, a penalty pheromone is introduced alongside the gain pheromone, and a retrospective pheromone-update mechanism combining the two is proposed; this effectively solves the "myopia" problem of the algorithm and avoids the possible "premature" convergence of the ant colony. (2) An adaptive immune multi-objective optimization method with distribution-judgment and distribution-enhancement modules is applied, which effectively improves the diversity of the algorithm and yields optimal solutions for the multiple optimization objectives. (3) A depth-limited crawling method is used within each single round, and the immune multi-objective optimization algorithm [11] provides the optimal solution for the next round after the current one ends, avoiding unlimited-depth cycles and effectively preventing the algorithm from falling into local optima.
References

1. Bifulco, I., Cirillo, S., Esposito, C., et al.: An intelligent system for focused crawling from Big Data sources. Expert Syst. Appl. 184, 115560 (2021)
2. Baldassarre, G., Giudice, P.L., Musarella, L., et al.: The MIoT paradigm: main features and an "ad-hoc" crawler. Future Gener. Comput. Syst. 92, 29–42 (2019)
3. Menczer, F., Pant, G., Srinivasan, P.: Topical web crawlers: evaluating adaptive algorithms. ACM Trans. Internet Technol. (TOIT) 4(4), 378–419 (2004)
4. Yeh, C.Y., Chou, S.C., Huang, H.W., et al.: Tube-crawling soft robots driven by multistable buckling mechanics. Extreme Mech. Lett. 26, 61–68 (2019)
5. Lo, L.B., Qi, C., Wu, Q.X., et al.: Research on theme crawler based on Shark-Search and HITS algorithm. Comput. Technol. Dev. 20(11), 76–79 (2010)
6. Lei, Q., Yuansheng, L., Ming, C.: Research on theme crawler based on Shark-Search and PageRank algorithm. In: Proceedings of the International Conference on Cloud Computing and Intelligence Systems, San Francisco, pp. 268–271. IEEE Press (2016)
7. Yan, W., Pan, L.: Designing focused crawler based on improved genetic algorithm. In: 2018 Tenth International Conference on Advanced Computational Intelligence (ICACI), pp. 319–323. IEEE (2018)
8. Chengjun, L.: Research and Implementation of Focused Crawler System Based on Query Expansion and Multi-Objective Optimization. Beijing University of Posts and Telecommunications (2020)
9. Junfei, Q., Fei, L., Cuili, Y.: An NSGA-II algorithm based on uniform distribution strategy. Acta Autom. Sinica 45(7), 1325–1334 (2019)
10. Dorigo, M., Maniezzo, V., Colorni, A.: Ant system: optimization by a colony of cooperating agents. IEEE Trans. Syst. Man Cybern. B Cybern. 26(1), 29–41 (1996)
11. Qiao, J.F., Li, F., Yang, S.X., et al.: An adaptive hybrid evolutionary immune multi-objective algorithm based on uniform distribution selection. Inf. Sci. 512, 446–470 (2020)
12. Sindhya, K., Miettinen, K., Deb, K.: A hybrid framework for evolutionary multi-objective optimization. IEEE Trans. Evol. Comput. 17(4), 495–511 (2013)
13. Sanyi, L., Wenjing, L., Junfei, Q.: A local search NSGA-II algorithm based on density. Control Decis. 1, 60–66 (2018)
14. Lei, Q., Yuansheng, L., Ming, C.: Research on theme crawler based on Shark-Search and PageRank algorithm. In: Proceedings of the International Conference on Cloud Computing and Intelligence Systems, San Francisco, pp. 268–271. IEEE Press (2016)
15. Shaotao, L., Hongsheng, L.: The theme crawler algorithm based on fusing link structure. J. Huaqiao Univ. (Nat. Sci. Ed.) 38(2), 195–200 (2017)
Test Design of Sea Based Intelligent Missile Networking in the Proving Ground Ao Sun, Yan Wang, and Zejian Zhang
Abstract Based on an analysis of the development status of American intelligent missiles, and in the context of the transformation of future naval warfare from informatized to intelligent, missile networking operations will become a disruptive mode of high-end naval warfare. This paper proposes concepts for new operational test styles, such as missile-UAV coordination in the proving ground and cluster attack. It also gives the core idea of the missile networking operation test in the proving ground, which is based on close verification of the long-range precision-strike "kill chain", so as to promote preparation for high-end sea battle and realize, as soon as possible, sea-based intelligent missile operations enabled by intelligent technology. Keywords Intelligence · Proving ground · Missile networking · Operational test
1 Introduction

In response to the development of anti-access/area denial (A2/AD) capabilities by China and Russia, the United States has carried out research and test verification of new operational concepts such as "combat cloud", "mosaic warfare", "distributed lethality" and "penetrating attack" [1-3]. After Pelosi crossed the Taiwan Strait, the situation in the Strait has become increasingly tense. To cope with the constant provocations of strong adversaries and to achieve strategic breakthrough, the proving ground can provide strong support for winning high-end naval battles only by stepping up the development and testing of new-quality, new-domain weapons with greater power, faster speed, longer range and stronger penetration, and by countering the distributed, coordinated operations of strong adversaries.
A. Sun · Y. Wang (B) · Z. Zhang 91550 Troop, Dalian 116041, Liaoning, China e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 Y. S. Shmaliy and A. Nayyar (eds.), 7th International Conference on Computing, Control and Industrial Engineering (CCIE 2023), Lecture Notes in Electrical Engineering 1047, https://doi.org/10.1007/978-981-99-2730-2_89
Table 1 Characteristics of intelligent missile networking operational capability

Battlefield awareness: Through the missile-borne data link, the missile needs no additional reconnaissance payload; it can use its own seeker to conduct full-range, continuous, real-time battlefield surveillance, providing a basis for rapid decision-making. At the same time, it can conduct orbiting reconnaissance of sensitive areas, realizing the integration of observation and strike.

Precision strike capability: The cruise missile adopts scene-matching terminal guidance, which requires several scene-matching areas satisfying the matching constraints within a certain range around the target; "man-in-the-loop" terminal guidance through the missile-borne data link can effectively overcome this bottleneck.

Attacking target adaptability: The mission planning and remote control center can quickly update flight-path or target data and send them to the missile, enabling attacks on time-sensitive targets, especially slow-moving targets.

Effective penetration capability: Through the missile-borne data link, the flight path can be changed according to the latest threat information to avoid enemy air-defense areas.
2 Brief Introduction to Intelligent Missile Network Operation

2.1 Intelligent Missile Networking Operation Definition

Missile networking and cooperative operation uses networked communication, man-in-the-loop and other technologies to add mission planning, operational telemetry, missile-borne data link and other systems to traditional missiles. Through functional integration it supports real-time missile-to-ground, missile-to-satellite and missile-to-missile communication, achieving full online connectivity, planned control, networked attack and autonomous cooperation of missiles. It realizes control of the missile however far it flies, innovates the operational mode, and improves operational effectiveness and flexibility of operational application [4, 5], so that low-cost missiles can suppress and consume the enemy's high-value strike assets and support consumptive warfare such as the Russia-Ukraine military conflict.
2.2 Operational Advantages of Intelligent Missile Networking The operational application based on intelligent missile networking will greatly improve the operational capabilities of missiles such as real-time reconnaissance,
precise strike, striking time sensitive targets, and effective penetration [6]. The specific capability improvement of intelligent missile networking is shown in Table 1.
3 Status Quo of Intelligence of US Navy Missiles

With the intelligent development of warfare, missiles face operational requirements of full network access, full-time controllability and cluster coordination. The US military is accelerating the construction and improvement of missile networking, collaboration and autonomy, so that different types of missiles can interact in real time and cooperate autonomously, improving missile decision-making efficiency and operational effectiveness. Among these, the over-the-horizon anti-ship missile and networked engagement capability are the key factors [7-9].
3.1 Intelligence Status of "Tomahawk" Missile

"Tomahawk" Block4+ is a new type of "Tomahawk" missile developed by the US Navy on the basis of "Tomahawk" Block4, using network-centric warfare technology and experience. It adopts a two-way data link, can quickly receive the latest information from the command and control system and change course to attack new or more important targets, and has better situation awareness and battle-damage assessment capabilities [10]. Since the "Tomahawk" Block4+ missile can retarget and attack moving targets in flight, the US Navy plans to use it in coordination with small unmanned aerial vehicles.
3.2 Intelligence Status of "Long-Range Anti-Ship Missile"

To meet the US Navy's demand for next-generation anti-ship combat capability, DARPA and the Office of Naval Research launched the "Long-Range Anti-Ship Missile" (LRASM) project in 2009. The missile has intelligent operational characteristics such as autonomous threat perception, autonomous online track planning, multi-missile coordination, differentiation of target value levels, accurate target detection and recognition, target location selection, electronic-spectrum monitoring and positioning, and discrimination of different enemy radar signals. Through information fusion, the battlefield situation awareness of the command and control center can be improved, supporting real-time remote control of coordinated missile-group attacks.
3.3 Status Quo of Intelligence of "Standard-6" Missile

The "Standard-6" missile is a new generation of the "Standard" series developed by Raytheon to deal with growing air threats. On the basis of its semi-active radar guidance capability, the active radar seeker of the AIM-120 air-to-air missile enables it to actively search for and identify targets, attack accurately beyond the line of sight, and intercept long-range, low-altitude sea-skimming targets. In January 2016 the US Navy successfully completed the first anti-ship test of the "Standard-6" missile, verifying its beyond-line-of-sight anti-ship combat capability. To sum up, the US military mainly improves missiles' situation awareness, target planning and operational evaluation capabilities by installing advanced data links, sensors and anti-jamming seekers, and even by integrating artificial intelligence technology.
4 Analysis on Key Technologies of Missile Network Operation

Improving the overall operational efficiency of the missile cluster through system coordination in missile networking operations is the general trend of future intelligent warfare. Attention should be paid both to the development of missile networking operations and to the solution of the relevant technical problems, ultimately improving missile networking capabilities. Based on the V model of systems engineering, this paper presents a dual-V model of the intelligent missile networking system, as shown in Fig. 1.
Fig. 1 Dual V model of intelligent missile networking system. (Legend: R1 design of missile cluster system / A1 missile cluster model design; R2 single missile system design / A2 single missile model design; R3 subsystem level design / A3 subsystem model design; R4 single machine level design / A4 single machine model design; R5 actual product development / A5 artificial system development; R6 single machine test verification / A6 single machine calculation test; R7 subsystem test verification / A7 subsystem calculation test; R8 test verification of single missile system / A8 calculation test of single missile system; R9 test verification of missile cluster system / A9 calculation test of missile cluster system; R10 operational use of missile cluster system / A10 parallel operation of missile cluster system.)
4.1 Navigation Guidance Coordination Technology

Navigation coordination uses the range finder and satellite navigation receiver carried by each missile to share navigation information between missiles, correct inertial deviation, and, through a distributed transmit/receive mechanism, reduce the probability of detection while accurately attacking the target. Guidance and control coordination mainly determines the size of the missile cluster and the allocation of detectors and of targets to hit; it determines the distance, orientation and coordination mode between missiles according to missile flight speed and detector characteristics, and studies the in-flight scheduling management, strategy design and optimization of cooperative launch units.
4.2 Data Link Technology Between Missiles

Advanced data-link technology with a degree of stealth, fast response speed and strong anti-jamming capability in a contested environment is the core technology of missile networking operations. Through the data link, target, environmental and cooperative information can be transferred and exchanged between missiles. The missiles can also form a tactical data transmission/exchange and information-processing network with the command and control center and various combat platforms through the data link, realizing information sharing. At present, the multi-missile cooperative data link mainly needs to solve the problems of guidance-information network design, low-delay transmission, space-time unification, information interaction, intelligent networking, and rapid, accurate target positioning and tracking.
4.3 Intelligent Command and Decision-Making Technology

The specific tasks of intelligent missile cluster networking operations include formation flight and collision avoidance, system penetration and obstacle avoidance, cooperative search and identification, cooperative tracking and positioning, cooperative occupation and attack, and contingency management. Man-in-the-loop cooperative command and control of a missile formation mainly covers the cooperative control architecture, man–missile capability matching, man–missile cooperative situation awareness and understanding, information transmission and distribution, response to unexpected events and planning under uncertainty, and self-synchronization and self-learning of cooperative behavior, as shown in Fig. 2.
A. Sun et al.
Fig. 2 Nested structure diagram of the intelligent missile command ring (three nested closed loops of battle command)
4.4 Operational Effectiveness Evaluation Technology

The operational effectiveness of missile networking depends not only on the tactical and technical indicators of the missiles, but also on how the missiles are employed and on the specific target conditions. It should be evaluated under confrontation conditions as far as possible: based on the battle scenario, simulation results, a combination of quantitative and qualitative analysis, historical case analysis and real data, making full use of battle simulation methods and an effectiveness-evaluation decision-support system. Examples include comparing the operational effects of 8 networked missiles of one type against 8 missiles of the same type fired in a volley, or of a network of 4 A missiles plus 4 B missiles against a network of 8 A missiles. The requirement for high-speed command and control is not limited to the use of a single hypersonic weapon; the entire OODA process also needs to be shortened. The battle command process of the intelligent missile system can therefore be described by three closed loops with two layers of nesting, as shown in Fig. 1 above. Physically, the missile cluster intelligent command system can be divided into a command and control center, a TT&C communication system and several missile subsystems. As shown in Fig. 3, it mainly includes a command decision-making layer, a display and analysis layer, a TT&C communication link layer, a situation awareness layer, a cluster control layer and a task function execution layer.
Fig. 3 Topological structure diagram of intelligent missile command system
(Fig. 3 comprises six levels: command and decision-making level (command center), display and analysis level, TT&C communication link level, situation awareness level, cluster control level, and operational function execution level.)
5 Analysis and Design of the Missile Networking Operational Test Mode

The emergence of sea-based intelligent missiles, a new-quality capability, will give birth to new high-end naval warfare modes, such as cascaded fire deterrence and control, multi-area attack and network attack, and will change future anti-ship and land-attack naval warfare. It is necessary to focus on the deliberate design of intelligent missile combat styles, to ensure "full-time combat" and "silent combat", and to achieve multi-platform launch; short-, medium- and long-range connection; combined horizontal and top attack; combined subsonic/supersonic/hypersonic flight; combined ballistic/cruise trajectories; and multi-dimensional, three-dimensional fire coverage. The basic framework of the intelligent missile networking system is given in Fig. 4.
5.1 Operational Test Mode of a Same-Type Missile Cluster with the Same Guidance Mode

Taking the cooperative optical detection mode as an example: in this mode, multiple missiles use their respective optical seekers, connected by the missile-borne data link, to detect the target cooperatively. The positional advantage of the distributed missiles can be used to expand the target search area and improve the target discovery

Fig. 4 Basic framework of the intelligent missile networking system
(Fig. 4 shows the actual system and a manual/virtual system executed in parallel, with a model-driven and data-driven calculation test supporting management-and-control guidance intelligence, experiment-and-evaluation predictive intelligence, and learning-and-training description intelligence.)
probability. At the same time, multi-missile coordination can form an equivalent staring function against moving targets and better adapt to complex battlefield conditions such as geography, meteorology and confrontation.
5.2 Operational Test Mode of a Same-Type Missile Cluster with Different Guidance Modes

In this mode, each missile in the cluster carries detection equipment of a different kind (e.g., visible light, infrared, radar) and conducts cooperative guidance over the missile-borne data link. Its main feature is that the detection information of the different seekers can be shared, improving the reliability of the missile guidance system. Moreover, whereas traditional missile weapons release decoys according to a preset penetration procedure, with the missile-borne data link the time and mode of decoy release can be determined online according to the real-time battlefield environment.
5.3 Operational Test Mode of a Multi-missile Cluster Attacking the Same Platform

This test verifies the capability of multi-platform launch; short-, medium- and long-range connection; combined subsonic/supersonic/hypersonic flight; combined ballistic/cruise trajectories; and multi-dimensional, three-dimensional fire coverage. The main method is, after the first target is hit, to complete the positioning, identification, jamming, attack and other operational tasks against follow-on targets based on the data link, ensuring that targets can be hit at any time and hit in silence.
5.4 Integrated Operational Test Mode of an Intelligent Missile Cluster and Unmanned Platforms

Basic information on maritime targets provided by space-based reconnaissance satellites is combined with close-in detection and precise positioning by high-altitude UAVs to guide the intelligent missile cluster in a networked, coordinated attack. At the same time, the UAVs release decoys to attract the target's air-defense fire and suppress jamming, buying time for the intelligent missile attack.
6 Conclusion

In the future, intelligent networked missile operations will rely more on the comprehensive support of knowledge, intelligence and information. It is necessary to build the operational system, close the kill chain, and increase effectiveness through intelligent empowerment. From the perspective of the new round of military revolution and the evolution of future high-end naval warfare, the unmanned combat field is a key arena of competition with the United States, and the intelligent and hypersonic domains are the focus for overtaking on the curve. We should focus on the deliberate design of intelligent missile combat styles so as to design future wars precisely and win future high-end naval battles with greater confidence and probability.
Wastewater Quality Prediction Model Based on DBN-LSTM via Improved Particle Swarm Optimization Algorithm

Jian Yang, Qiumei Cong, Shuaishuai Yang, and Jian Song
Abstract Chemical Oxygen Demand (COD), a key water quality parameter of the wastewater treatment process, is difficult to monitor online. In this paper, a wastewater quality prediction model based on a Deep Belief Network–Long Short-Term Memory network optimized by an improved Particle Swarm Optimization algorithm (DBN-PSO-LSTM) is proposed. The DBN serves as an unsupervised learning framework that retains the original features of the input data as much as possible while reducing the input feature dimension. However, the learning accuracy of DBN for strongly fluctuating time series is not high, so the advantages of LSTM in sequence modeling are used to model and predict the output COD. Since inappropriate selection of DBN parameters leads to convergence to a local optimum, the improved PSO algorithm is used to optimize the DBN structure. The optimized DBN serves as the feature-extraction stage for the input variables, and the extracted features are fed into the LSTM network for prediction. The experimental results show that a COD soft-sensor model of the wastewater treatment process built with the proposed method has better prediction accuracy than other models.

Keywords Chemical oxygen demand · Wastewater treatment process · Deep belief network · Long short-term memory · Particle swarm optimization algorithm
1 Introduction

Because of their good nonlinear function-mapping ability, neural networks are often used for parameter soft sensing in complex systems with strong nonlinearity and time-varying characteristics, such as wastewater treatment systems. To address the difficulty of accurately measuring the key water quality parameter NH3-N online during sewage treatment, Qiao Junfei et al. [1]

J. Yang · Q. Cong (B) · S. Yang · J. Song
Liaoning Petrochemical University, Fushun, China
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
Y. S. Shmaliy and A. Nayyar (eds.), 7th International Conference on Computing, Control and Industrial Engineering (CCIE 2023), Lecture Notes in Electrical Engineering 1047, https://doi.org/10.1007/978-981-99-2730-2_90
J. Yang et al.
proposed a prediction method for effluent ammonia nitrogen based on an RBF neural network; the results show that the prediction error of this model is relatively small and the prediction accuracy high. To improve prediction accuracy further, Yang Zhuang et al. [2] proposed a COD prediction based on a GM-RBF neural network, using grey system theory to predict the development of the system's behavioral characteristics and combining it with the high-precision approximation ability of the radial basis network. Although traditional neural networks perform well in actual wastewater treatment prediction, they depend heavily on the input feature variables, fall easily into local optima, and converge slowly [3]. To overcome these shortcomings and improve prediction accuracy, many deep learning algorithms have been developed and applied [4]. Compared with shallow neural networks, the main advantages of deep learning algorithms are strong learning ability, good adaptability [5], and strong feature-extraction, self-learning and generalization ability, so they can be used effectively to model complex nonlinear systems. DBN is a typical deep learning network with strong reasoning ability and a sufficient description of the relationships between variables; it can accurately perceive the internal laws of an evolving object and has been widely used in many fields [6]. Qiao Junfei et al. [7] introduced an adaptive learning rate into the contrastive divergence (CD) algorithm of the DBN (adaptive-learning-rate DBN, ALRDBN), improving both the convergence speed of the DBN and the prediction accuracy of the network. Wang Gongming et al.
[8] proposed a depth-determination method for continuous DBNs based on reconstruction error. The method analyzes the correlation between the reconstruction error of the Continuous Restricted Boltzmann Machine (CRBM) and the network energy of the Continuous Deep Belief Network (CDBN), sets a reconstruction-error threshold, and designs a network-depth decision mechanism to realize self-organizing adjustment of the CDBN hidden layers. Experiments show that this method can determine the optimal number of hidden layers of a CDBN, effectively improving the efficiency of network-depth decisions. Although DBN has strong learning ability, it also has limitations [9]: the learning process is slow, inappropriate parameter selection causes learning to fall into a local optimum, a single DBN model can extract only part of the effective information in the data [10], and prediction accuracy is low for strongly fluctuating data. A prediction model based on a hybrid algorithm can extract the effective information more fully, reduce the prediction error caused by parameter choices, and improve prediction accuracy. Therefore, a hybrid DBN-PSO-LSTM prediction model is proposed in this paper. First, the improved PSO algorithm is applied to optimize the DBN network structure and further improve network performance; then the optimized structure is used to deeply
Wastewater Quality Prediction Model Based on DBN-LSTM ...
extract the input-variable features and import them into the LSTM network for prediction. The effectiveness of the hybrid model is verified by simulation, and it is used to predict the effluent COD in the wastewater treatment process, with good prediction results.
2 Online Modeling Strategy

The online soft-sensor modeling method in this paper mainly consists of a data preprocessing part, the DBN model and the LSTM model; its structure is shown in Fig. 1, where x(k) represents the process input variables, z(k) the intermediate process variables, y(k) the actual value of the output variable, ŷ(k) the estimated value of the output variable, and e(k) = y(k) − ŷ(k), k = 1, 2, …, N, the modeling error, with N the number of samples. The model consists of two parts, the DBN and the LSTM network; the optimization algorithm consists of two parts, the improved PSO algorithm and an adaptive learning algorithm. The main steps of the online modeling strategy are: extract features from the preprocessed data with the optimized DBN model, input the extracted features into the LSTM module for prediction, and compute the Mean Square Error (MSE) between the predicted results and the actual values. The MSE is used as the fitness function of the improved PSO algorithm [11]; it evaluates the closeness between an individual (the number of neurons in each layer of the DBN) and the optimal solution. The optimal individual has the lowest fitness value, i.e., the optimal number of neurons in each DBN layer.
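The MSE fitness used by the improved PSO can be sketched as follows (a minimal sketch; the function name is illustrative, not the authors' code):

```python
import numpy as np

def mse_fitness(y_true, y_pred):
    """Fitness of one PSO individual (a candidate set of DBN layer sizes):
    MSE between the hybrid model's prediction and the measured output.
    Lower fitness is better."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.mean((y_true - y_pred) ** 2))
```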
2.1 DBN Model Structure

DBN is a deep probabilistic directed graphical model whose network structure is similar to that of a fully connected feedforward neural network [12]. Nodes within a layer are not connected to each other, while nodes of two adjacent layers are fully connected. The bottom layer of the network holds the observable variables v, and the

Fig. 1 Model structure of the online soft sensor
Fig. 2 DBN model structure (a stack of RBMs: input layer v, hidden layers h, output layer)
nodes of the other layers are the hidden variables h. The connections between the top two layers are undirected; the connections between the other layers are directed. The building block of the DBN is the Restricted Boltzmann Machine (RBM), and each RBM can also be used individually as a classifier. An RBM has only two layers of neurons: the visible layer, composed of visible units, receives the training data [13]; the hidden layer, composed of hidden units, extracts features from the training data. The DBN is formed by stacking several RBMs sequentially, the output of each RBM serving as the training input of the next. The DBN structure is shown in Fig. 2.
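The stacking described above can be sketched in NumPy as greedy layer-wise pretraining of Bernoulli RBMs with one-step contrastive divergence (a simplified illustration, not the authors' implementation; class and function names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))

class RBM:
    """Bernoulli RBM trained with one step of contrastive divergence (CD-1)."""
    def __init__(self, n_visible, n_hidden, lr=0.1):
        self.W = rng.normal(0, 0.01, size=(n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)   # visible-unit biases
        self.b_h = np.zeros(n_hidden)    # hidden-unit biases
        self.lr = lr

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.b_h)

    def cd1_update(self, v0):
        h0 = self.hidden_probs(v0)
        h0_sample = (rng.random(h0.shape) < h0).astype(float)
        v1 = sigmoid(h0_sample @ self.W.T + self.b_v)   # reconstruction
        h1 = self.hidden_probs(v1)
        n = v0.shape[0]
        self.W += self.lr * (v0.T @ h0 - v1.T @ h1) / n
        self.b_v += self.lr * (v0 - v1).mean(axis=0)
        self.b_h += self.lr * (h0 - h1).mean(axis=0)
        return float(np.mean((v0 - v1) ** 2))           # reconstruction error

def pretrain_dbn(data, layer_sizes, epochs=5):
    """Greedy layer-wise pretraining: the hidden activations of each
    trained RBM become the training input of the next RBM."""
    rbms, x = [], data
    for n_hidden in layer_sizes:
        rbm = RBM(x.shape[1], n_hidden)
        for _ in range(epochs):
            rbm.cd1_update(x)
        rbms.append(rbm)
        x = rbm.hidden_probs(x)
    return rbms, x   # x: deep features to be fed to the LSTM
```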
2.2 LSTM Model Structure

LSTM is a variant of the recurrent neural network [14] that can effectively mitigate the exploding and vanishing gradient problems of the simple recurrent network. The mathematical model of the LSTM structure is given in formulas (1)–(6):

c_t = f_t ⊙ c_{t−1} + i_t ⊙ z_t  (1)

h_t = o_t ⊙ tanh(c_t)  (2)

i_t = σ(x_t W_t^{xi} + h_{t−1} W_{t−1}^{hi})  (3)

f_t = σ(x_t W_t^{xf} + h_{t−1} W_{t−1}^{hf})  (4)

o_t = σ(x_t W_t^{xo} + h_{t−1} W_{t−1}^{ho})  (5)

z_t = tanh(x_t W_t^{xz} + h_{t−1} W_{t−1}^{hz})  (6)
The state of the LSTM at time t consists of two vectors, c_t and h_t, where c_t is the memory component and h_t the hidden-state component. Three gate structures, i, f and o, control input, forgetting and output, respectively; z is the update candidate,
Fig. 3 LSTM model structure
and h_{t−1} is the output at time t − 1. W_t^{xi}, W_t^{xf}, W_t^{xo}, W_t^{xz} are the weight matrices of the input vector x_t at time t; W_{t−1}^{hi}, W_{t−1}^{hf}, W_{t−1}^{ho}, W_{t−1}^{hz} are the weight matrices of the hidden-layer vector h_{t−1} at time t − 1. The structure of the LSTM is shown in Fig. 3, where σ and tanh denote the sigmoid and hyperbolic tangent functions, respectively.
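Equations (1)–(6) correspond directly to one LSTM time step, which can be sketched as follows (row-vector convention; names are illustrative, and the paper's time-indexed weights are treated here as one shared set of matrices; the paper's formulas omit bias terms, so the sketch does too):

```python
import numpy as np

def lstm_step(x_t, h_prev, c_prev, W, U):
    """One LSTM step following Eqs. (1)-(6).
    W: input-to-gate matrices, U: hidden-to-gate matrices,
    both dicts keyed by gate name 'i', 'f', 'o', 'z'."""
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    i = sigmoid(x_t @ W['i'] + h_prev @ U['i'])   # Eq. (3) input gate
    f = sigmoid(x_t @ W['f'] + h_prev @ U['f'])   # Eq. (4) forget gate
    o = sigmoid(x_t @ W['o'] + h_prev @ U['o'])   # Eq. (5) output gate
    z = np.tanh(x_t @ W['z'] + h_prev @ U['z'])   # Eq. (6) update candidate
    c = f * c_prev + i * z                         # Eq. (1) memory cell
    h = o * np.tanh(c)                             # Eq. (2) hidden state
    return h, c
```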
2.3 Improved Particle Swarm Optimization Algorithm

The standard PSO algorithm is:

v_i(t + 1) = w v_i(t) + c_1 r_1 [pbest_i(t) − x_i(t)] + c_2 r_2 [gbest_i(t) − x_i(t)]  (7)

x_i(t + 1) = x_i(t) + v_i(t + 1)  (8)
where i = 1, 2, …, N, N is the total number of particles in the swarm, w is the inertia weight of the particle, v_i(t) is the velocity of particle i at the t-th iteration, x_i(t) is the position of particle i at the t-th iteration, c_1 and c_2 are learning factors, usually c_1 = c_2 = 2, and r_1 and r_2 are random numbers in (0, 1). pbest_i(t) is the best position found by particle i up to the t-th iteration, and gbest_i(t) is the best position found by any particle in the swarm up to the t-th iteration.

Inertia Weight Improvement. In the standard algorithm, the inertia weight w controls how much of the previous velocity update is carried into the current update. In the early stage of evolution, w should be large so that each particle can fly quickly and independently and search the space fully; in the later stage, w should be as small as possible so that particles learn from the rest of the swarm. The inertia weight is changed adaptively according to the current and maximum iteration numbers, as shown in formula (9):

w = w_start − (w_start − w_end) / (1 + e^{ger/(2·iter)})  (9)
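The standard update (7)–(8) with the adaptive inertia weight can be sketched as follows (illustrative names; the exponent of the schedule is reconstructed from a garbled source, so treat the exact curve as an assumption):

```python
import math
import numpy as np

rng = np.random.default_rng(0)

def inertia(iter_, ger, w_start=0.9, w_end=0.4):
    """Adaptive inertia weight in the spirit of Eq. (9):
    close to w_start early, decreasing toward the late iterations."""
    return w_start - (w_start - w_end) / (1.0 + math.exp(ger / (2.0 * max(iter_, 1))))

def pso_step(x, v, pbest, gbest, w, c1=2.0, c2=2.0):
    """Standard PSO velocity and position update, Eqs. (7)-(8)."""
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v, v
```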
where ger is the maximum number of PSO iterations, iter is the current iteration number, w_start is the initial inertia weight, and w_end is the inertia weight at the maximum iteration number [15].

Learning Factor Improvement. In the PSO algorithm, c_1 and c_2 are the individual and social learning factors respectively [16]; they determine how strongly the particle's own experience and the experience of the other particles influence its next move. To balance global and local search ability, c_1 should be larger in the early stage and c_2 larger in the later stage. The improved learning factors are given by Eqs. (10)–(11):

c_1 = c_1start − (c_1start − c_1end) / (1 + e^{ger/(2·iter)})  (10)

c_2 = c_2start + (c_2end − c_2start) / (1 + e^{ger/(2·iter)})  (11)
where c_1start and c_1end are the initial and final values of the individual learning factor, and c_2start and c_2end are the initial and final values of the social learning factor, respectively.

Random Number Improvement. In standard PSO, the individual- and social-learning terms weight their coefficients with plain random numbers, so the information carried by the particles cannot be fully used [17]. The concept of entropy weight is therefore introduced to replace the random numbers r_1 and r_2 of the standard algorithm, making full use of the information of the other particles to improve the search precision and convergence speed of the algorithm. The entropy weight method is computed as follows.

Standardization:

Y_ij = (X_ij − X_jmin) / (X_jmax − X_jmin)  (12)
where Y_ij is the normalized value of the j-th index of the i-th unit, X_ij is the original value of the j-th index of the i-th unit, and X_jmax and X_jmin are the maximum and minimum values of the j-th index. The values are then normalized to proportions:

Y_ij = Y_ij / Σ_{i=1}^{m} Y_ij  (13)
Next, the information entropy value e and the information utility value d are computed. The information entropy of the j-th index is:

E_j = −(1/ln m) Σ_{i=1}^{m} Y_ij ln Y_ij  (14)
The information utility value is:

d_j = 1 − E_j  (15)
The weight of the j-th index is then:

W_j = d_j / Σ_{j=1}^{m} d_j  (16)
where W_j represents the entropy weight of the j-th dimension of the particles' individual best positions at iteration t. The random numbers r_1 and r_2 are replaced by it, as shown in Eqs. (17)–(18):

r_1 = 1 − W_j  (17)

r_2 = W_j  (18)
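Equations (12)–(16) can be sketched as follows (a minimal NumPy version with illustrative names; a small `eps` is added for numerical safety, which the paper does not mention):

```python
import numpy as np

def entropy_weights(X, eps=1e-12):
    """Entropy weight method, Eqs. (12)-(16).
    X: (m units x n indices) matrix; returns one weight per index (column)."""
    X = np.asarray(X, dtype=float)
    m = X.shape[0]
    # Eq. (12): min-max standardization per index
    Y = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0) + eps)
    # Eq. (13): normalize each column to proportions
    P = Y / (Y.sum(axis=0) + eps)
    # Eq. (14): information entropy of each index
    E = -np.sum(P * np.log(P + eps), axis=0) / np.log(m)
    d = 1.0 - E            # Eq. (15): information utility value
    return d / d.sum()     # Eq. (16): entropy weights

# Eqs. (17)-(18): for dimension j, the standard PSO random numbers
# are replaced by r1 = 1 - W_j and r2 = W_j.
```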
Introduce Annealing Strategy. An annealing strategy is introduced to detect premature convergence of a particle and make it jump out of a local extremum in time. The temperature is controlled by the difference between the fitness of the current best particle and the average fitness of the particles, and the probabilistic "kick" feature of annealing is used to find the global optimum of the objective function. This method does not change the performance of the annealing algorithm, and the convergence of the PSO algorithm is largely preserved. The formulas are:

Δ = (f(t + 1) − f(t)) / (f_avg − f_max)  (19)

P = 1, if Δ ≤ 0; P = 1 / (1 + e^Δ), if Δ > 0  (20)

where f(t + 1) is the fitness after the particle position changes, f(t) is the fitness before the position change, Δ is the fitness change of the position before and after the particle iteration, and P is the acceptance probability of the particle.
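The acceptance rule (19)–(20) can be sketched as follows (illustrative names; the sign convention of the normalizing denominator follows the formula as reconstructed from the source):

```python
import math

def accept_probability(f_new, f_old, f_avg, f_max):
    """Annealing-style acceptance, Eqs. (19)-(20): always accept when the
    normalized fitness change delta <= 0, otherwise accept with
    probability 1 / (1 + e^delta)."""
    delta = (f_new - f_old) / (f_avg - f_max)
    return 1.0 if delta <= 0 else 1.0 / (1.0 + math.exp(delta))
```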
3 Simulation Experiments

3.1 Identification of a Nonlinear System

Take the nonlinear object

y(k) = [y(k − 1) y(k − 2) (y(k − 1) + 2.5)] / [1 + y²(k − 1) + y²(k − 2)] + u(k − 1)  (21)
as an example. The input signal is u(k) = sin(6.28k/25), and the network input vector of the model is X(k) = [y(k − 1), y(k − 2), u(k − 1)]. The DBN part uses a stacked structure of three RBM layers and optimizes the neuron numbers of each layer with the improved PSO. The weights from the visible nodes to the hidden nodes are initialized with the Xavier method: the initial RBM weights of each layer are uniformly distributed in [−r, r], where r is given by Eq. (22):

r = √(6 / (M_{i−1} + M_i))  (22)
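The Xavier initialization of Eq. (22) can be sketched as follows (function name is illustrative):

```python
import numpy as np

def xavier_uniform(n_visible, n_hidden, rng=np.random.default_rng(0)):
    """Xavier initialization per Eq. (22): weights drawn uniformly
    from [-r, r] with r = sqrt(6 / (M_{i-1} + M_i))."""
    r = np.sqrt(6.0 / (n_visible + n_hidden))
    return rng.uniform(-r, r, size=(n_visible, n_hidden))
```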
where M_{i−1} is the number of nodes in the visible layer and M_i the number of nodes in the hidden layer. The learning rate of the DBN weights is set to 0.8, the bias learning rate of both visible and hidden nodes to 0.1, and the learning rate of the LSTM module to 0.005, with Adam as the learning algorithm. After the network structure is determined, the model is optimized with the improved PSO algorithm; Fig. 4 shows the resulting change curve of the fitness value. The optimal fitness value decreases rapidly and finally converges, indicating that the model structure tends toward the optimum as the number of iterations increases. The simulation results for the nonlinear system are shown in Fig. 5: the real data generated by the nonlinear system are fitted well by the predicted values. In addition, Fig. 6 shows that the error between the predicted and real values is small and tends to be stable, which verifies the effectiveness of the proposed method.

Fig. 4 Change curve of the fitness value
Fig. 5 Simulation curve of nonlinear system
Fig. 6 Bars of error between predicted and true values
3.2 Soft Sensor Model for Water Quality

The wastewater treatment process is a class of complex industrial process with strong nonlinearity, and the hybrid model network proposed in this paper is used for its black-box identification. From 200 input/output data pairs of a wastewater treatment plant, the first 150 sets were used as training samples for offline modeling and the last 50 sets as test samples for online soft measurement of COD. The influent COD, suspended solids concentration SS, aeration oxygen, pH, total influent ammonia nitrogen concentration, and aeration basin redox potential ORP were used as model input variables [18], and the effluent COD value as the model output variable (all in mg/L). To evaluate the performance of the model, several metrics are used to analyze the developed DBN-PSO-LSTM hybrid model. To verify the superiority of the proposed model and check for overfitting, Table 1 also shows the prediction performance of each model on the training and test sets. As Table 1 shows, the prediction performance of these models on the training and test sets is almost consistent, so none of the models overfits. The results also show that the RMSE, MAPE and R² of the proposed model are 3.6365, 0.0524 and 0.8785, respectively, which are better than the indexes of the other models. The prediction performance of the proposed method for the effluent COD value is shown in Fig. 7, from which it can easily be seen that the predicted values are basically consistent with the real values. As Fig. 8 shows, the predicted values fit the actual values well and correlate closely with the targets, indicating that the model can predict the COD value in the wastewater treatment process effectively, accurately and quickly, improving the convergence rate and stability of the model.

Table 1 Comparison of prediction performance of various models

Fig. 7 Prediction results of different models

Fig. 8 Regression diagram of target value and actual value
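The RMSE, MAPE and R² reported in Table 1 can be computed as follows (standard definitions; the paper does not spell out its exact formulas, so treat these as assumptions):

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """Root mean square error, mean absolute percentage error, and
    coefficient of determination R^2 for a prediction."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    rmse = float(np.sqrt(np.mean((y_true - y_pred) ** 2)))
    mape = float(np.mean(np.abs((y_true - y_pred) / y_true)))
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = float(1.0 - ss_res / ss_tot)
    return rmse, mape, r2
```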
4 Conclusion

To address the difficulty of continuously detecting the effluent COD value online in the wastewater treatment process, this study proposes a new hybrid DBN-PSO-LSTM model suited to the dynamic characteristics of the process. The model can describe the complex wastewater treatment process with fewer variables or samples; the DBN performs deep feature extraction on the actual wastewater data, and the improved PSO algorithm optimizes the DBN network to further improve its performance while overcoming the slow convergence and tendency toward local optima of the traditional PSO algorithm. Finally, exploiting the advantage
of the LSTM network in predicting time-series data, the model predicts the wastewater treatment data well, and simulation experiments on the wastewater treatment process show the effectiveness of the proposed method. Some shortcomings remain: this paper does not address the slow training speed and low efficiency of the DBN pre-training stage, and the accuracy of the water quality prediction still needs improvement; these are problems for later research.
References

1. Qiao, J.F., An, R., Han, H.G.: Research on effluent ammonia nitrogen prediction based on RBF neural network. Control Eng. 23(09), 1301–1305 (2016)
2. Yang, Z., Wu, L., Qiao, J.F.: Sewage environment prediction based on GM-RBF neural network. Control Eng. 26(09), 1728–1732 (2019)
3. Lin, W., Yi, Z., Tao, C.: Back propagation neural network with adaptive differential evolution algorithm for time series forecasting. Expert Syst. Appl. 42(02), 855–863 (2015)
4. Niu, G., Yi, X., Chen, C., et al.: A novel effluent quality predicting model based on genetic-deep belief network algorithm for cleaner production in a full-scale paper-making wastewater treatment. J. Clean. Prod. 265, 121787 (2020)
5. Fan, Y., Hao, X., Hu, X.L.: Video associated cross-modal recommendation algorithm based on deep learning. Appl. Soft Comput. 82, 105597 (2019)
6. Cheng, F., Fu, X., Yang, Z.: A short-term building cooling load prediction method using deep learning algorithms. Appl. Energy 195, 222–233 (2017)
7. Wang, G.M., Li, W.J., Qiao, J.F.: Prediction of total phosphorus in effluent based on PLSR adaptive deep belief network. Chin. J. Chem. Ind. 68(05), 1987–1997 (2017)
8. Wang, G.M., Li, W.J., Qiao, J.F.: A depth determination method for continuous deep belief networks based on reconstruction error. In: Abstract Collection of the 27th China Process Control Conference (CPCC2016)
9. Hinton, G.E.: A practical guide to training restricted Boltzmann machines. In: Montavon, G., Orr, G.B., Müller, K.R. (eds.) Neural Networks: Tricks of the Trade. LNCS, vol. 7700, pp. 599–619. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-35289-8_32
10. Zhou, C.M., Liu, M.P., Wang, J.W.: Research on water quality prediction model based on CNN-LSTM. Hydropower Energy Sci. 39(03), 20–23 (2021)
11. Zhang, Y.J., Xu, W., Tang, L.B., et al.: Energy consumption modeling of communication base stations based on PSO-LSSVM with rolling time window. J. Hunan Univ. (Nat. Sci. Ed.) 44(02), 122–128 (2017)
12. Hinton, G.E., Osindero, S., Teh, Y.W.: A fast learning algorithm for deep belief nets. Neural Comput. 18(7), 1527–1554 (2006)
13. Wang, G.M.: Optimal Design Method and Application of Deep Belief Network Structure. Beijing University of Technology (2019)
14. Hochreiter, S., Schmidhuber, J.: Long short-term memory. Neural Comput. 9(8), 1735–1780 (1997)
15. Chen, G., Huo, D., He, Z.Q., et al.: Optimization method of crossed antenna array based on immune particle swarm algorithm. Sens. Microsyst. 40(07), 28–31 (2021)
16. Cao, P., Liu, M.: PMU optimal configuration method based on improved integer programming method combined with zero injection node. Power Syst. Prot. Control 49(16), 143–150 (2021)
17. Sun, Y., Xia, Q.H.: Analysis and improvement of particle swarm optimization under semi-supervised clustering objective. J. Beijing Univ. Posts Telecommun. 43(05), 21–26 (2020)
18. Lu, C., Yang, C.L., Qiao, J.F.: Ammonia nitrogen soft sensing method based on spike self-organized radial basis network. Inf. Control 46(06), 752–758 (2017)
Research on Autonomous Collision Avoidance Method of Typical General Aviation Aircraft Based on Cognitive System Jie Zhang, Xiyan Bao, and Hanlou Diao
Abstract For the autonomous collision avoidance process of typical general aviation aircraft, collision avoidance rules for autonomously avoiding fixed obstacles, random obstacles and other aircraft are established, and a general aviation aircraft autonomous collision avoidance algorithm based on a cognitive system is constructed. A complex low-altitude flight collision avoidance system built on Visual Studio 2013 is used to verify the behavior of typical general aviation aircraft in autonomously avoiding fixed and random obstacles. Based on the ideas of controlled variables and reinforcement learning, trend analysis and rule summarization are performed on the simulation data. The analysis yields the relationship between the choice of autonomous avoidance method and the obstacle distance, which verifies the effectiveness of the typical autonomous avoidance methods for general aviation aircraft proposed in this study. Keywords General aviation aircraft · Flight environment · Rules for collision avoidance · Autonomous collision avoidance methods
1 Introduction The strong development of the general aviation industry has made the contradiction between the increasing demand for general aviation and the relatively backward capacity of aviation safety and security increasingly prominent, and the poor J. Zhang Nanjing Highway Development Center, Nanjing 210014, Jiangsu, China X. Bao Jiangsu Provincial Department of Transportation, Nanjing 210001, Jiangsu, China H. Diao (B) China Design Group, Nanjing 210014, Jiangsu, China e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 Y. S. Shmaliy and A. Nayyar (eds.), 7th International Conference on Computing, Control and Industrial Engineering (CCIE 2023), Lecture Notes in Electrical Engineering 1047, https://doi.org/10.1007/978-981-99-2730-2_91
safety environment for general aviation has become increasingly evident. China's existing airspace is heavily restricted, the pressure on air traffic control keeps increasing, and general aviation safety accidents occur repeatedly. Compared with developed countries in Europe and America, China's general aviation operating safety environment still needs improvement, as does the autonomous collision avoidance capability of general aviation aircraft. Long-term, in-depth research on aircraft collision avoidance has been carried out both domestically and abroad, and considerable progress has been made on conflict detection and collision avoidance models and algorithms [1]. Among the common collision avoidance models, geometric models centered on the ownship avoid flight conflicts by adjusting its speed or changing its heading [2, 3]. There are also geometric models centered on the intruder that resolve conflicts by changing only the heading or only the speed, or by optimizing both heading and speed simultaneously [4, 5]. Research on aircraft collision risk falls into two main categories. The first targets route transport aircraft, e.g., the REICH model, the cross-route model and the stochastic analysis model, which compute collision risk mainly by analyzing airspace and aircraft flight characteristics [6]. The Event model and the stochastic differential equation approach further consider the impact of human factors, warning systems, and communication, navigation and surveillance system performance on collision risk [7, 8]. The second category targets general aviation aircraft, using the velocity vector method and mathematical flight trajectory models to analyze collision risk in uncontrolled airspace, and cognitive reliability models to analyze the reaction speed of general aviation pilots [9].
In the design of airborne collision avoidance logic, iterative equations for optimal collision avoidance logic have been derived from a Markov decision process (MDP) using dynamic programming, and a design process for optimal collision avoidance logic for general aviation aircraft has been given [10]. Through the study of typical autonomous avoidance methods for general aviation aircraft, this paper explores a cognitive-system-based autonomous avoidance algorithm, develops different avoidance rules for conflicts with fixed obstacles, random obstacles and other aircraft during flight, and, combined with simulation of the autonomous avoidance process, determines the applicable situations and the advantages and disadvantages of the different avoidance methods, so as to optimize the autonomous avoidance method for general aviation aircraft. This study has positive implications for ensuring the safety of general aviation operations and improving their safety environment.
2 Autonomous Collision Avoidance Rules and Algorithms 2.1 Algorithm Rules Adaptive behavior modeling with a cognitive system mainly includes: ➀ problem-space modeling, ➁ operator modeling, and ➂ learning modeling. In addition, a reinforcement learning mechanism enables the system to adopt better solutions and generally improves operating efficiency [11, 12]. In this study, when a general aviation aircraft encounters a flight conflict, three types of obstacle avoidance rules are adopted according to the conditions: adjusting speed, adjusting heading and adjusting altitude. (1) When the horizontal distance from random obstacles such as thunderstorm air masses, low clouds or other aircraft is above B km, the aircraft avoids the obstacle by adjusting speed: it accelerates to cross the conflict quickly or decelerates to avoid it. (2) When the horizontal distance from fixed obstacles such as high mountains is above A km, or the horizontal distance from random obstacles such as thunderstorm air masses, low clouds or other aircraft is between A km and B km, the aircraft avoids by adjusting heading: it changes its heading angle to pass to the left or right of the obstacle; two aircraft meeting head-on at the same altitude each turn to the right.
(3) When the horizontal distance from fixed obstacles such as high mountains, or from random obstacles such as thunderstorm air masses, low clouds or other aircraft, is below A km, the aircraft avoids by adjusting altitude: it climbs to clear mountains, clouds and other obstacles; to avoid another aircraft, one aircraft flies straight and level while the other changes its pitch angle to climb or descend, or the two aircraft climb and descend in opposite directions.
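The three rules above can be sketched as a simple threshold function. This is a hypothetical illustration, not the system's actual code; `select_maneuver` is an invented name, and the numeric values of the thresholds A and B are taken from the optimal values later reported in Sect. 4.

```python
# Hypothetical sketch of the three avoidance rules. A_KM and B_KM stand for the
# thresholds A and B in the rules above; the values below are the optima
# reported later in the results analysis (29.5 km and 54.5 km).
A_KM = 29.5   # below this: adjust altitude (rule 3)
B_KM = 54.5   # above this, random obstacles only: adjust speed (rule 1)

def select_maneuver(distance_km, obstacle_kind):
    """Return the avoidance maneuver for a detected obstacle.

    obstacle_kind: 'fixed' (e.g. mountains) or 'random' (thunderstorm air
    masses, low clouds, other aircraft).
    """
    if distance_km < A_KM:
        return "adjust_altitude"      # rule (3): climb or descend
    if obstacle_kind == "random" and distance_km > B_KM:
        return "adjust_speed"         # rule (1): accelerate or decelerate
    return "adjust_heading"           # rule (2): pass left/right; turn right when head-on

print(select_maneuver(10, "random"))   # adjust_altitude
print(select_maneuver(40, "random"))   # adjust_heading
print(select_maneuver(60, "random"))   # adjust_speed
print(select_maneuver(60, "fixed"))    # adjust_heading
```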
2.2 Reinforcement Learning There are two important concepts in reinforcement learning: the value function and the payoff function. In the cognitive system, the value function corresponds to the reinforcement learning rule, and the function value (Q-value) corresponds to the numerical preference of the operator [13]. While solving the problem, the Q-value is updated after each action according to the reward fed back by the environment; the update method used in this paper is the Sarsa method.
$$Q(s, o) \leftarrow Q(s, o) + \alpha\left[r + v\,Q(s', o') - Q(s, o)\right] \qquad (1)$$
where $s$ is the current state, $s'$ the next state, $o$ the current operator, $o'$ the operator in the next state, $r$ the reward obtained, $\alpha$ the learning rate, and $v$ the discount factor for future rewards. In the general aviation aircraft autonomous collision avoidance algorithm, $\alpha = 0.1$ and $v = 0.9$. The payoff function gives the reward fed back by the environment after each action. Ideally, each avoidance maneuver should move the aircraft away from the obstacle, so the payoff function is designed according to the distance between the aircraft and the obstacle after an avoidance measure: if the aircraft moves away from the obstacle after adjusting its speed, altitude or heading, a reward value is fed back; if it approaches the obstacle, a penalty value is fed back; if it collides with the obstacle, the maximum penalty value is fed back. The avoidance payoff function for general aviation aircraft is

$$r = \begin{cases} -10, & d_{i,t} = d_{\min} \\ -1, & d_{\min} < d_{i,t} < d_{\max}\ \text{and}\ d_{i,t} - d_{i,t-1} < 0 \\ +1, & d_{\min} < d_{i,t} < d_{\max}\ \text{and}\ d_{i,t} - d_{i,t-1} \ge 0 \\ 0, & d_{i,t} \ge d_{\max} \end{cases} \qquad (2)$$
where $r$ is the reward value fed back during obstacle avoidance, $d_{i,t}$ is the distance between the aircraft and the obstacle at time $t$, $d_{\min} = A$ km and $d_{\max} = B$ km. When the distance between the aircraft and the obstacle is greater than $d_{\max}$, no obstacle is present and the aircraft continues normal operation; when it is less than $d_{\min}$, the aircraft has collided with the obstacle; when it lies in between, an obstacle has been detected and must be avoided.
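A minimal sketch of the Sarsa update (1) and the payoff function (2), with α = 0.1 and v = 0.9 as in the text. The function names and the numeric values of d_min and d_max are assumptions for illustration (the thresholds A and B are taken from the optima reported in the results analysis); the collision case is triggered at or below d_min here, slightly generalizing the equality in (2).

```python
ALPHA, GAMMA = 0.1, 0.9        # learning rate α and discount v from the text
D_MIN, D_MAX = 29.5, 54.5      # d_min = A km, d_max = B km (assumed values)

def reward(d_t, d_prev):
    """Payoff function (2): reward moving away from the obstacle."""
    if d_t <= D_MIN:           # collision with the obstacle
        return -10
    if d_t >= D_MAX:           # no obstacle in range, normal operation
        return 0
    return 1 if d_t - d_prev >= 0 else -1   # moving away vs. approaching

def sarsa_update(Q, s, o, r, s_next, o_next):
    """Sarsa update (1): Q(s,o) += α [r + v Q(s',o') - Q(s,o)]."""
    q = Q.get((s, o), 0.0)
    q_next = Q.get((s_next, o_next), 0.0)
    Q[(s, o)] = q + ALPHA * (r + GAMMA * q_next - q)

Q = {}
sarsa_update(Q, "near_storm", "adjust_heading", reward(40.0, 38.0),
             "clearing", "adjust_heading")
print(Q)  # first update from Q=0 gives ALPHA * r = 0.1
```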
3 Simulation Scenario 3.1 Simulation Platform In this study, the flight conflict autonomous avoidance system for complex low-altitude airspace is used to validate and simulate the autonomous avoidance behavior of a typical general aviation aircraft. The system is built on Visual Studio 2013 and supports collision detection and avoidance between general aviation aircraft and fixed obstacles, between general aviation aircraft and random obstacles, and between general aviation aircraft themselves.
Fig. 1 General aviation aircraft autonomous avoidance of fixed obstacles
3.2 Experimental Design (1) Autonomous avoidance of fixed obstacles. The rules are: when the horizontal distance from a fixed obstacle is greater than A km, adjust the heading, or the speed and heading; when it is less than A km, adjust the altitude. In the fixed obstacle simulation, the flight speed of the general aviation aircraft is set to 260 km/h and the flight altitude to 1500–2100 m; the fixed obstacle is a high mountain at coordinates (710, 480) with an impact range of 130 units and a height of 1500, 1800, 2100 or 2400 m; the collision detection distance is the system default of 10 km. The simulation scenario is shown in Fig. 1. (2) Autonomous avoidance of random obstacles. The rules are: when the horizontal distance from a random obstacle is greater than B km, adjust the speed; when it is greater than A km and less than B km, adjust the heading, or the speed and heading; when it is less than A km, adjust the altitude.
In the random obstacle simulation scenario, the flight speed of the general aviation aircraft is set to 260 km/h and the flight altitude to 1500–2100 m; the random obstacle is a thunderstorm air mass with a diameter of 100 units and a protection zone of 30 units, starting at coordinates (500, 498) and ending at (1170, 399), moving at a speed of 3 units with a randomness degree of 10–30. The simulation scenario is shown in Fig. 2.
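The randomly drifting thunderstorm in the scenario above can be modeled as motion along the start–end segment with a random lateral deviation. This is a hypothetical Python sketch of one plausible interpretation of the scenario parameters (the actual simulator is built on Visual Studio 2013); the function name and the lateral-offset model are assumptions.

```python
import math
import random

# Scenario parameters from the text: the thunderstorm drifts from (500, 498)
# to (1170, 399) at 3 units per step, with a random deviation of 10-30 units
# (interpreted here as a lateral offset from the nominal track).
START, END = (500.0, 498.0), (1170.0, 399.0)
SPEED, DEV_MIN, DEV_MAX = 3.0, 10.0, 30.0

def thunderstorm_position(step):
    """Position of the thunderstorm center after `step` simulation steps."""
    dx, dy = END[0] - START[0], END[1] - START[1]
    length = math.hypot(dx, dy)
    t = min(step * SPEED / length, 1.0)          # fraction of path covered
    x, y = START[0] + t * dx, START[1] + t * dy  # point on the nominal track
    # random lateral offset along the unit normal of the track
    dev = random.uniform(DEV_MIN, DEV_MAX)
    nx, ny = -dy / length, dx / length
    side = random.choice((-1.0, 1.0))
    return x + side * dev * nx, y + side * dev * ny

print(thunderstorm_position(0))    # near the start point, offset 10-30 units
print(thunderstorm_position(500))  # clamped at the end point, offset 10-30 units
```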
Fig. 2 General aviation aircraft autonomous avoidance of random obstacles
4 Results Analysis According to Figs. 3, 4, 5 and 6, when the fixed obstacle height is 1500 m, the best distance at which to adjust altitude is 20 km; at 1800 m it is 22 km; at 2100 m it is 33.5 km; and at 2400 m it is 42 km. Hence, with other environmental conditions unchanged, the best altitude adjustment distance for a general aviation aircraft facing a fixed obstacle tends to increase with the height of the obstacle. Fig. 3 Adjustment law when the obstacle height is 1500 m
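The increasing trend in the four (height, best distance) pairs reported above can be checked with a simple least-squares line; this is an illustrative check, not part of the original study, and numpy is assumed available.

```python
import numpy as np

# (obstacle height in m, best altitude-adjustment distance in km)
# from the simulation results above
heights = np.array([1500.0, 1800.0, 2100.0, 2400.0])
distances = np.array([20.0, 22.0, 33.5, 42.0])

# degree-1 least-squares fit: distance ≈ slope * height + intercept
slope, intercept = np.polyfit(heights, distances, 1)
print(f"distance ≈ {slope:.4f} * height {intercept:+.1f}")
# A positive slope confirms the increasing trend noted in the text.
```

On these four points the fitted slope is about 0.026 km of adjustment distance per meter of obstacle height, consistent with the qualitative conclusion.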
Fig. 4 Adjustment law when the obstacle height is 1800 m
Fig. 5 Adjustment law when the obstacle height is 2100 m
Fig. 6 Adjustment law when the obstacle height is 2400 m
Fig. 7 The adjustment law under different detection distance
According to Fig. 7, after iterative optimization and summarization, the optimal altitude adjustment distance A for general aviation aircraft facing random obstacles is 29.5 km and the optimal heading adjustment distance B is 54.5 km.
5 Conclusion In this study, rules for the autonomous avoidance of fixed obstacles, random obstacles and other aircraft by general aviation aircraft are developed, an autonomous avoidance algorithm based on the cognitive system is established, and the relationship between the choice of autonomous avoidance method and the obstacle distance is derived through
the analysis of the simulation results, which demonstrates the effectiveness of the proposed typical autonomous avoidance methods for general aviation aircraft. In future research, different avoidance strategies for different general aviation aircraft types facing the same obstacle can be determined according to the characteristics of each aircraft type.
References 1. Kuchar, J.K., Yang, L.C.: A review of conflict detection and resolution modeling methods. IEEE Trans. Intell. Transp. Syst. 1(4), 179–189 (2000) 2. Soltani, M., Ahmadi, S., Akgunduz, A.: An eco-friendly aircraft taxiing approach with collision and conflict avoidance. Transp. Res. Part C Emerg. Technol. 121, 102872 (2020) 3. Jiang, W., Lyu, Y., Li, Y.: UAV path planning and collision avoidance in 3D environments based on POMDP and improved grey wolf optimizer. Aerosp. Sci. Technol. 121 (2022) 4. Bilimoria, K.: A geometric optimization approach to aircraft conflict resolution. J. Sci. Food Agric. 82(2), 192–202 (2013) 5. Kochenderfer, M.J., Chryssanthacopoulos, J.P.: Robust airborne collision avoidance through dynamic programming. Aviat. Saf. 22(3), 415–425 (2011) 6. Richards, A., How, J.P.: Aircraft trajectory planning with collision avoidance using mixed integer linear programming. In: American Control Conference, pp. 1936–1941. IEEE (2002) 7. Lin, C.E., Wu, Y.Y.: Collision avoidance solution for low-altitude flights. Proc. Inst. Mech. Eng. Part G J. Aerosp. Eng. 225(7), 779–790 (2011) 8. Chryssanthacopoulos, J.P., Kochenderfer, M.J.: Collision avoidance system optimization with probabilistic pilot response models. In: American Control Conference, pp. 2765–2770. IEEE (2011) 9. Menon, P.K., Sweriduk, G.D., Sridhar, B.: Optimal strategies for free-flight air traffic conflict resolution. J. Guid. Control Dyn. 33(2), 202–211 (1999) 10. Li, F., Li, X., Zhang, X.: Asynchronous filtering for delayed Markovian jump systems via homogeneous polynomial approach. IEEE Trans. Autom. Control (99), 1 (2019) 11. Eversberg, L., Ebrahimi, P., Pape, M., Lambrecht, J.: A cognitive assistance system with augmented reality for manual repair tasks with high variability based on the digital twin. Manuf. Lett. 34, 49–52 (2022) 12. Mladineo, M., Zizic, M.C., Aljinovic, A.: Towards a knowledge-based cognitive system for industrial application: case of personalized products. J. Ind. Inf. Integr. (2021) 13. Gao, Y., Matsunami, Y., Miyata, S., Akashi, Y.: Multi-agent reinforcement learning dealing with hybrid action spaces: a case study for off-grid oriented renewable building energy system. Appl. Energy 326, 120021 (2022)
A Study on Platform of Intelligent Metering Builds on Cloud Computing Xinyi Li, Hang Sun, Shijie Wen, Chen Chen, Hongzhong Zhang, and Longhai Deng
Abstract To meet the needs of future intelligent metering platforms, this paper describes the current status and problems of metering automation systems and presents a new system solution based on cloud computing. The feasibility and necessity of applying cloud computing to the intelligent metering platform are analyzed. A distributed cloud computing method is proposed for massive electric power data, and a massive electric data cloud computing platform based on Hadoop is built. In this platform, virtualization technology provides a unified organization for various types of heterogeneous hardware and software, the HDFS and HBase technologies provide efficient distributed storage and management for massive data, and MapReduce technology provides distributed parallel processing for user services and resource scheduling. The new intelligent metering platform is now in practical use; third-party test results show that most functions of the system meet the requirements of the future smart metering platform. Keywords Intelligent metering platform · Cloud computing · Hadoop · Server cluster · Resource integration
1 Introduction The development of the smart grid places new requirements on traditional metering automation systems. According to the "12th Five-Year Plan requirements of China Southern Power Grid Co., Ltd. for intelligent metering services", the intelligent metering system is not only required to provide new intelligent functions such as two-way information interaction, energy efficiency assessment, demand response, smart service and electric vehicle charging analysis, but must also analyze data and process events on top of the existing functions of the metering platform, and provide self-learning, fault self-diagnosis and automatic optimization functions. In the course of X. Li (B) · H. Sun · S. Wen · C. Chen · H. Zhang · L. Deng Guizhou Power Grid Co., Ltd., Guiyang Power Supply Bureau, Guiyang 55001, Guizhou, China e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 Y. S. Shmaliy and A. Nayyar (eds.), 7th International Conference on Computing, Control and Industrial Engineering (CCIE 2023), Lecture Notes in Electrical Engineering 1047, https://doi.org/10.1007/978-981-99-2730-2_92
achieving the above requirements, this is difficult to accomplish with the functionally single metering automation system alone. The marketing management system, the 95598 platform, the EMS system and an integration system are therefore needed. However, these systems are independent of one another, with the attendant drawbacks of mutual isolation, inconsistent standards, lack of scalability and difficulty of interconnection [1]. It is therefore necessary to upgrade the existing metering platform to an intelligent metering platform that can adapt to large-scale information sharing requirements and diversified business needs, so that integration between the various services can be achieved, information isolation between systems can be eliminated [2], and the overall operating efficiency of the system can be improved [3]. Since Google first proposed the concept of cloud computing, companies such as Amazon, IBM, Microsoft and Yahoo have put forward their own cloud computing solutions, pointing the way for wide-area distributed parallel computation. Cloud computing is an evolution of distributed processing, parallel processing and grid computing, and the commercial realization of these scientific concepts. Its core is virtualization, which allows cloud computing systems to provide computing resources dynamically to service providers and customers. The essence of cloud computing is to store data, applications and services in the cloud, and to exploit the powerful computing capacity of virtual machines, combined from parallel and distributed computing systems, to build user service systems [4]. As a new type of shared infrastructure and network-based computing mode, cloud computing integrates resource aggregation, resource openness and resource simulation, making resource usage more transparent and easier for users.
Cloud computing offers strong scalability, low cost, high efficiency, high reliability and low emissions [5]. Owing to these advantages, cloud computing technology is widely used in business platforms of the electric power system [6–8]. On the basis of a large number of references, this article proposes an intelligent metering business analysis platform based on the Hadoop framework, and uses the MapReduce programming model to strengthen data management and improve data analysis.
2 System Requirements To meet future demand, the intelligent grid metering automation system will be based on cloud computing technology and will interact with the marketing management system, the 95598 system, the production management system, the intelligent residential district system and the provincial marketing management system to achieve two-way mutual support between metering services and other business services. The architecture of the system is shown in Fig. 1. As Fig. 1 shows, the system integrates monitoring systems from different fields (such as the plant station acquisition
Fig. 1 Architecture of the intelligent metering platform
terminal system, the key customer load management system, the distribution management system, the low-voltage centralized meter reading system and the intelligent residential district system), then uses the virtualization platform to virtualize the resources of these systems, including the network devices, shielding the communication barriers caused by heterogeneous hardware resources and managing the resources uniformly in virtual machines.
3 Core Technology Hadoop is a distributed system infrastructure developed by the Apache Software Foundation. An application system based on Hadoop [9] can not only run on a cluster of inexpensive hardware, but also quickly build a distributed system with high reliability and good scalability. In this system, HDFS, MapReduce and HBase are the main components; HDFS and MapReduce are the core components of Hadoop. HDFS is the base layer of the Hadoop system, responsible for data storage management, and is suitable for processing large-scale data sets. MapReduce [10] is a parallel computing model that splits input data effectively and processes multiple data blocks at the same time, making it suitable for massive data processing. In this project, the Hadoop cluster local area network consists of a NameNode server, a SecondaryNameNode server, a JobTracker server and multiple DataNode servers. The NameNode server is responsible for managing data segmentation and data storage, and for monitoring the running status of the DataNodes. An application that needs to read a data file first accesses the NameNode server to obtain the distribution of the data blocks across the DataNodes, then communicates directly with the DataNodes. Once a DataNode failure is detected, the NameNode will notify the application not to access the
point of the downtime and add a copy of the data blocks to keep the platform running normally. The SecondaryNameNode server monitors the HDFS status and communicates with the NameNode; if the NameNode fails, the SecondaryNameNode can serve as a backup. The JobTracker server is responsible both for managing the decomposition and aggregation of computing tasks and for monitoring the operation of each TaskTracker node. If any task fails, the JobTracker automatically restarts it.
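The MapReduce pattern described above can be illustrated with a pure-Python sketch (not Hadoop itself) that aggregates meter readings per customer: map emits (customer, kWh) pairs, a shuffle step groups them by key, and reduce sums each group. Function names, the record format and the sample data are assumptions for illustration.

```python
from collections import defaultdict

# Pure-Python illustration of the MapReduce programming model used by the
# platform; the real system runs these phases distributed on Hadoop.

def map_reading(record):
    """Map: one raw reading (customer_id, timestamp, kWh) -> key/value pair."""
    customer_id, _timestamp, kwh = record
    yield customer_id, kwh

def shuffle(pairs):
    """Shuffle: group intermediate pairs by key."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_total(customer_id, kwhs):
    """Reduce: total consumption per customer."""
    return customer_id, sum(kwhs)

readings = [("c1", "00:00", 1.2), ("c2", "00:00", 0.8), ("c1", "01:00", 1.5)]
pairs = [pair for record in readings for pair in map_reading(record)]
totals = dict(reduce_total(k, v) for k, v in shuffle(pairs).items())
print(totals)  # per-customer kWh totals
```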
4 Structure of Platform Combining the characteristics of the intelligent metering platform, the smart metering cloud platform adopts a distributed, layered structure, divided into five layers: cloud device, cloud platform, basic services, advanced applications and customer layers. The cloud device layer is composed of hosts, storage, network and other hardware. In the software design, VMware virtualization platform management technology is used to virtualize the hardware resources. This not only ensures efficient resource utilization, but also lets system administrators concentrate entirely on business applications without being affected by the different types of hardware resources and operating systems. The cloud platform layer consists of data storage, computing services, load management, data isolation and backup management. Its main function is to store massive data and supply a high-performance computing environment, achieved with advanced cloud computing technologies such as the distributed file system and the distributed database management system. The service layer consists of system model management, data center management, data inquiry service, information service, report service, communication management, specification management, operation and maintenance management, and privilege service. This layer is the foundation of the business system, since every advanced application module requires these services. It adopts MapReduce as the programming model and computing framework for processing massive data; for large-scale data, tasks are decomposed and their results aggregated.
In system model management, the data structure of the whole system is based on IEC 61970/IEC 61968 to realize interoperability. The application layer consists of energy consumption management, operation and maintenance management, energy conservation management, device assessment, information release, and data interfaces. The customer layer is the main access point to the system. Power customers and power supply departments can interact in both directions through PC clients, mobile devices, large LED screens, touch screens and television. This layer mainly uses Flex
Fig. 2 Structure of the intelligent metering platform
technology to ensure the ease of use of the system, and SWIZ technology is used to realize the model-view-controller (MVC) design. The structure of the intelligent metering platform is shown in Fig. 2.
5 Characteristics of the Platform Compared with the original platform, the new intelligent metering platform has the following advantages: • Design of the master station. The cloud platform as the underlying architecture integrates hosts, storage space, networking and other functions. With this integrated ecosystem, virtualized operation handles a wide range of services, for instance data storage and isolation, analysis and computing, loading and backup. The cloud design ensures efficient utilization, timely response, balanced services and rapid processing. • Design of the core system. A unified data center mode is adopted with a block-based design featuring central modeling, linear service management, one-stop data collection, systematic analysis and calculation, and a flat network platform. Alarms and information are shared instantly to avoid repetition. A simple, flexible plug-and-play design is thereby ready for further high-level operation. • Design of the network. Centered on the main control station, a digital network connects the plant electric energy telemetry system, the key customer load management system, the distribution monitoring and metering system, the low-voltage meter reading system and the intelligent community system. Every unit in the cloud platform is coded separately for modeling and future analysis, ranging from municipal and county bureaus, power supply stations and substations, major lines and sub-lines, and transformers down to meter terminals, systems and key customers.
• Design of deployment. Centralized deployment is suggested, with only one main station for each city. All county-level power supply bureaus will be connected to the main control system and all business must be completed through the central platform, so no additional stations are needed. • Design of the interface. The central system is connected not only to all internal operational machines and terminals but also to external management systems such as the marketing system, the production system and the 95598 system, giving access to archive materials, payment status, purchase and sale data, and customer feedback.
6 Test Result To verify the functions of the platform, the project invited third-party agencies to test it. The test standards were designed under the requirements of GB/T 25000.51-2010 "Software Engineering—Software Product Quality Requirements and Evaluation (SQuaRE)—Quality Requirements and Testing for Commercial Off-The-Shelf (COTS) Software Products" and GB/T 16260.1-2006 "Software Engineering—Product Quality—Part 1: Quality Model". A total of 25 performance dimensions were measured, such as Scalable Vector Graphics (SVG) display, archive inquiry, load analysis, power analysis, and purchase and sale. SVG is a two-dimensional vector image format based on the Extensible Markup Language with support for interactivity and animation; the corresponding test chooses a random city and checks whether its map can be loaded. The archive inquiry test checks the time to load the information of the corresponding plant station, which helps system staff manage plant station and substation information. The load analysis test mainly measures the completion time of the load analysis for a specified date. Tables 1 and 2 below show the main results:
A Study on Platform of Intelligent Metering Builds on Cloud Computing

Table 1 Design of main performance dimensions

Number | Performance dimension | Number of test users | Running time (min)
1 | SVG display | 50 | 5
2 | Archive inquiry | 10 | 5
3 | Station data management | 10 | 5
4 | Electric quantity | 10 | 5
5 | Load analysis | 10 | 5
6 | Line loss analysis | 10 | 5
7 | Busbar loss analysis | 10 | 5
8 | Quantity parameters inquiry | 10 | 5
9 | City data inquiry | 10 | 5
10 | Basic archive inquiry | 10 | 5
Table 2 Result of main performance dimensions

Number | Function name | Description of performance dimension | Answering time (sec)
1 | SVG display | Load a random city on the map of the whole city | 0.313
2 | Archive inquiry | Load the information of the corresponding plant station | 0.627
3 | Station data management | Load corresponding plant station information | 0.435
4 | Electric quantity | Complete the electricity consumption query on the specified date | 5.157
5 | Load analysis | Complete the load analysis on the specified date | 0.498
6 | Line loss analysis | Complete the line loss analysis of the plant station on the specified date | 0.173
7 | Busbar loss analysis | Complete the busbar loss analysis of the plant station on the specified date | 0.784
8 | Electric quantity and quantity parameters inquiry | Inquire electric quantity and quantity parameters | 21.726
9 | City data inquiry | Show the basic data of a random city's Power Supply Bureau | 0.632
10 | Basic archive inquiry | Show the basic archive of a random city's Power Supply Bureau | 0.269
X. Li et al.
7 Conclusion

To meet the business demands of the Southern Power Grid on future measurement platforms, this paper identifies the limitations of traditional metering platforms, proposes a Hadoop-based distributed cloud computing system, and realizes most of the designed functions. Third-party test results indicate that the centered cloud platform succeeds in responding within seconds in most circumstances, while the processing of residential data needs more in-depth research on scheduling algorithms and dynamic resource allocation methods.
Intelligent Planning of Battlefield Resources Based on Rules and Capability Chuanhui Zhang, Qianyu Shen, Peiyou Zhang, and Jiping Zheng
Abstract In view of the complex battlefield situation, it is of great significance to decompose the resource capability requirements of multiple tasks such as reconnaissance tasks and strike tasks. To meet the requirements of rapid matching, accurate scheduling, and collaborative use of combat resources, it is necessary to quickly and accurately find the battlefield resources that meet the requirements. This paper studies three aspects: the construction and management of "task-resource", "target-resource", and "environment-resource" rules; dynamic capability evaluation of battlefield resources; and intelligent planning of battlefield resources based on rules and capability. It establishes a multi-dimensional comprehensive resource planning method, supports fine-grained on-demand resource scheduling, and realizes comprehensive utilization of battlefield resources.

Keywords Operational tasks · Capability evaluation · Rule matching · Resource planning
1 Introduction

Under the condition of informatization, integrated joint operations need rapid integration and intelligent planning of battlefield resources. Therefore, it is necessary to quickly and intelligently discover the operational resources available on the battlefield and carry out resource planning to support their rapid collaborative application. In this process, it is necessary to build and manage resource planning rules, dynamically evaluate resource capability, and finally study resource planning algorithms based on rules and capability. In combination with user permissions, this provides resource scheduling and control interfaces to support on-demand scheduling of resources.

C. Zhang (B) · Q. Shen · P. Zhang · J. Zheng Northern Information Control Research Institute Group Limited Corporation, Nanjing 211100, China e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 Y. S. Shmaliy and A. Nayyar (eds.), 7th International Conference on Computing, Control and Industrial Engineering (CCIE 2023), Lecture Notes in Electrical Engineering 1047, https://doi.org/10.1007/978-981-99-2730-2_93
C. Zhang et al.
In the context of integrated joint operations, this work addresses the operational requirements of cross-regional collaboration, high integration, and natural gathering and dispersion of all services and arms, and manages and deploys operational resources for operational tasks [1]. Based on the unified engagement situation, it supports commanders in jointly determining intent through the continuous control of force status, whole-process monitoring of plan implementation, dynamic combination of forces and firepower, on-the-spot adjustment of combat operations, and independent coordination of front-line forces, gradually promoting the transformation of decision-making advantages into action advantages [2]. Literature [3] designed a reactive scheduling model based on resource flow and proposed a dual-priority competitive approach to superior resources. Literature [4] described mathematically the problem of combat resource scheduling under complex task-relationship constraints and built a resource scheduling model. Literature [5] proposed a dual-priority competitive resource method for resource uncertainty in the resource scheduling process. Therefore, this paper studies the matching rules between tasks and resources, the dynamic evaluation technology of battlefield resource capability, and resource planning based on rules and capability, in order to maximize the effectiveness of battlefield resources.
2 Construction of Resource Planning Rules and Management Technology

2.1 Building Resource Matching Rules

The construction technology of resource matching rules includes environment-based resource matching rule construction, task-based resource discovery rules, goal-based resource matching rule construction, and resource impact factor calculation. The construction of resource scheduling rules mainly adopts a unified digital representation of the rules; the rules are learned and deduced through fuzzy rule inference, which maximizes the efficiency of the scheduling rules.

(1) Calculation of resource impact factors

Resource impact factor calculation is a prediction method based on deep learning. Using simulation and test results, the original sample set of task and environment impact factors on resources is obtained. A convolutional neural network is then used to convolve the input sample data; pooling is used to select typical features and reduce the number of influence factors and the computational overhead; and a fully connected layer produces the training output [6, 7].

➀ Convolution operation
The purpose of the convolution operation is to extract features from the input data:

$$Z^{l+1}(i, j) = [Z^{l} \otimes w^{l+1}](i, j) + b = \sum_{k=1}^{K_l} \sum_{x=1}^{f} \sum_{y=1}^{f} \left[ Z_k^{l}(s_0 i + x,\, s_0 j + y)\, w_k^{l+1}(x, y) \right] + b \quad (1)$$

where $(i, j) \in \{0, 1, \ldots, L_{l+1}\}$ and $L_{l+1} = \frac{L_l + 2p - f}{s_0} + 1$. In the convolution layer, in order to help express complex features, it is necessary to include an excitation function, as follows:

$$A_{i,j,k}^{l} = f\!\left( Z_{i,j,k}^{l} \right) \quad (2)$$
➁ Pooling technology

The general representation of pooling is as follows:

$$A_k^{l}(i, j) = \left[ \sum_{x=1}^{f} \sum_{y=1}^{f} A_k^{l}(s_0 i + x,\, s_0 j + y)^{p} \right]^{1/p} \quad (3)$$

where $s_0$ is the step size and $p$ is a prespecified parameter.

➂ Full connection technology

The convolutional neural network also contains a fully connected layer, whose role is similar to that of the hidden layer in a traditional feed-forward neural network.

(a) Back propagation algorithm

The back propagation algorithm consists of two alternating phases, forward propagation of activations and backward updating of weights, which iterate repeatedly. Iteration stops when the network's response to the input reaches the predetermined target range.

(b) Resource scheduling rule file model

In terms of the definition of the rule language, the document needs to meet the W3C standard, with a rule file model established for guidance, as shown in Fig. 1.

Fig. 1 Rule file model
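The convolution and pooling operations of Eqs. (1)-(3) can be sketched directly. The function names, array shapes, and stride defaults below are illustrative assumptions, not part of the original design; the L^p pooling of Eq. (3) reduces to average-like pooling for small p and approaches max pooling as p grows.

```python
import numpy as np

def conv2d_single(Z, w, b, s0=1):
    """Valid 2-D convolution per Eq. (1): one output channel over
    K_l input channels, an f x f kernel w, stride s0, and bias b."""
    K_l, H, W = Z.shape
    _, f, _ = w.shape
    H_out = (H - f) // s0 + 1
    W_out = (W - f) // s0 + 1
    out = np.full((H_out, W_out), b, dtype=float)
    for i in range(H_out):
        for j in range(W_out):
            patch = Z[:, s0 * i:s0 * i + f, s0 * j:s0 * j + f]
            out[i, j] += np.sum(patch * w)
    return out

def lp_pool(A, f=2, s0=2, p=2.0):
    """L^p pooling per Eq. (3) over f x f windows with stride s0."""
    H, W = A.shape
    H_out = (H - f) // s0 + 1
    W_out = (W - f) // s0 + 1
    out = np.empty((H_out, W_out))
    for i in range(H_out):
        for j in range(W_out):
            patch = A[s0 * i:s0 * i + f, s0 * j:s0 * j + f]
            out[i, j] = np.sum(np.abs(patch) ** p) ** (1.0 / p)
    return out
```

A fully connected layer and an excitation function (Eq. (2)) would follow these stages in a complete network.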
Fig. 2 Composition diagram of rule base
2.2 Building the Resource Scheduling Rule Base

The resource scheduling rule base is a set of resource matching rules based on task, target, and environment. It mainly adopts a digital method and a standardized structure to build databases for the different types of rule file models, and provides models and algorithms to verify the consistency of rules; rules can be added, deleted, modified, and queried visually. The composition of the rule base is shown in Fig. 2. The rule base, model base, comprehensive database, and fact base based on fuzzy reasoning technology are selected for automatic learning and reasoning, and decision results matching the current situation are output; manual intervention commands can also be accepted. This mainly involves fuzzy classification, fuzzy sample establishment, fuzzy matching, and fuzzy reasoning model construction.
3 Dynamic Capability Evaluation of Battlefield Resources

3.1 The Method of Establishing a Capability Index Based on a Decision Matrix

Based on the domain knowledge graph of battlefield resources, domain experts are supported in designing the task-capability decomposition and comparison relationships, building a task-capability decision matrix that supports hierarchical decomposition, and forming the overall framework of the capability index system, as shown in Fig. 3.

Fig. 3 Construction of capability index system

On this basis, guided by the task-capability decision matrix and combined with the analysis results of domain experts [8] and domain knowledge graph technology [9], the mapping from capability targets to capability indexes and the mapping from capability indexes to combat technology indexes are obtained through human-computer collaborative analysis, forming index associations and an index hierarchy. Furthermore, based on these two mappings, experts, supported by the domain knowledge graph, design the measurement methods of capability targets and capability indexes and generate quantitative evaluation methods for each index in the index system, so as to establish a complete capability index system and integrate it into the domain knowledge graph.
3.2 Task-Capability Decision Matrix Analysis Technology

The task-capability decision matrix is task oriented. It evaluates the comprehensive comparison and qualitative evaluation results of our military and foreign militaries against the combined space of tasks and capabilities, forming a highly comprehensive macro evaluation of the system capability situation and providing data-driven decision support for overall macro decision-making [10, 11]. In the "Task—Analysis Year—Battle Scale—Opponent" subspace of the top-level decision, the decision matrix can form slices of "Task—Analysis Year" and other multi-dimension combinations according to the decision requirements. Taking a typical decision matrix composed of "Task—Analysis Year" as an example, a row in the matrix represents the capability evaluation and comparison results of a task against a designated opponent at a specified operational scale, as shown in Fig. 4. For the analysis results of a task in a certain analysis year, one can drill down to further explore the distribution of the comparison results in the "task-capability" decision analysis subspace, as shown in Fig. 5.

Fig. 4 Example of top slice of decision matrix
Fig. 5 Decision matrix for “task capability” subspace slice
The decision analysis point determined by the combination of dimension values in the decision space represents the comparative advantage of our army over foreign forces and can be divided into different levels, taking completion of the specified task with the specified capability as the evaluation and comparison benchmark, under the analysis scenario of the specified analysis year and operational scale. Regarding the evaluation expression of capability, each capability's evaluation expression has both particularity and consistency, and different evaluation standards apply to different scenarios. In other words, for different tasks, although capability evaluation is expressed in a consistent way, the quantitative index requirements differ. Accordingly, for each decision analysis point, because of differences in analysis year, operational scale, combat opponents, task requirements, and so on, many factors, such as operational specialty, the analysis data set, data analysis granularity, data availability, data credibility, and evaluation methods, should be considered when carrying out capability comparison and evaluation for the corresponding decision analysis scenario. Therefore, for each decision analysis point included in the decision matrix, it is necessary to separately analyze the evaluation model used and its comparison indexes, as shown in Fig. 6. The task capability index is a qualitative index that evaluates the capability requirements for specified task objectives; it determines the grading of the capability evaluation results according to the capability evaluation criteria set for the task requirements. A capability index is a quantitative index proposed by domain experts according to the connotation of the capability, used to evaluate the level of a specific aspect of capability.

Fig. 6 Index system hierarchy

Task capability indexes reflecting comprehensive capability can be obtained by synthesizing multiple capability indexes that each reflect one aspect of capability; the method of synthesizing the capability indexes is the capability target measurement method for task capability indexes, which must be determined by expert design. Because the multi-dimensional decision analysis space of "mission—capability—battle scale—combat adversary—analysis year" is very large, the required solution models are various, and the expertise and experience involved are very extensive, building it is a long-term project that requires substantial investment and cannot be completed by a small number of scientific research institutions alone. Therefore, this project takes the decision matrix as the framework and combines the domain knowledge graph with social computing technology to study a task-oriented capability index system construction and management mechanism. As shown in Fig. 7, the proposed mechanism is supported by human-computer cooperation between domain experts and domain knowledge graphs, and fully excavates the internal relationships of conceptual knowledge, with knowledge exploration and recommendation technology based on domain knowledge graphs supporting the experts. According to the analysis targets defined by the decision matrix, a series of knowledge, such as the "mission-capability support relationship", "capability index support relationship", "combat technology index support relationship", "capability target measurement", and "capability index measurement", is fused with the index knowledge obtained from the domain knowledge graph to form the index system.

Fig. 7 Task oriented capability index system construction and management mechanism
Fig. 8 Task oriented resource discovery and scheduling diagram
4 Intelligent Planning of Battlefield Resources Based on Rules and Capability

4.1 Summary

Capability-based resource discovery is more flexible and intelligent [12]. When combined with rule-based resource discovery, the discovery results are more accurate and objective and better suited to real-time changes in battlefield tasks. Therefore, for the network communication, perception, fire, and computing/storage resources on the battlefield, this paper uses the K-means++ algorithm and an improved particle swarm optimization algorithm [13, 14] for resource planning, as shown in Fig. 8.
4.2 Task and Resource Capability Description

(1) Task description format

A battlefield task set can be formalized as a two-tuple $T = \langle A, R \rangle$, where $A = \{A_1, A_2, \ldots, A_n\}$ is the combat task set and $R$ is the set of relationships between combat tasks.

(2) Resource description format

The attributes of a battlefield resource can be formally described as a quadruple:

$$RS = \langle RSID,\; RSName,\; RSLocation,\; RSCap \rangle \quad (4)$$
where RSID is the resource number, RSName the resource name, RSLocation the resource location, and RSCap the resource capability attribute, expressed as a group of capability components: $RSCap = (RSCap_1, RSCap_2, \ldots, RSCap_m)$.
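The two-tuple and quadruple above can be encoded straightforwardly. The class and field names below are our own illustrative choices, and the sample values are invented for demonstration only:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

# Illustrative encoding of T = <A, R> and of the resource
# quadruple RS = <RSID, RSName, RSLocation, RSCap> of Eq. (4).
@dataclass
class Resource:
    rs_id: int                     # RSID: resource number
    name: str                      # RSName: resource name
    location: Tuple[float, float]  # RSLocation: resource location
    cap: List[float]               # RSCap = (RSCap_1, ..., RSCap_m)

@dataclass
class TaskSet:
    tasks: List[str]                                  # A = {A_1, ..., A_n}
    relations: List[Tuple[str, str]] = field(default_factory=list)  # R

radar = Resource(1, "recon-radar", (118.8, 32.0), [0.9, 0.4, 0.7])
t = TaskSet(["A1", "A2"], [("A1", "A2")])  # A1 must precede A2
```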
4.3 Rapid Resource Planning

(1) Fast clustering based on capability value

The resource data containing the capability values are clustered using the K-means algorithm, with similarity measured by Euclidean distance. The number of clusters is given manually, based on experience. The ratio of the intra-class distance to the inter-class distance after clustering is computed over at least 10 repeated runs, the run with the lowest ratio is taken as the final clustering result, and the clustering result is updated dynamically. After clustering is completed, resources within a category have similar capabilities, while resources in different categories have different capabilities. This result serves as the basis for rapid capability-based matching of resources.

(2) Matching between operational tasks and battlefield resources

For this problem, the objective function mainly includes a time objective and a capability utilization objective: under the same parameter conditions, the smaller the time the better, and the higher the capability utilization the better. For the concrete execution of tasks, the following constraints must also be met: when the current task is executed, all preceding tasks have been completed; operational resources already in use remain with their task; the number of available resources is not less than the number of operational resources required by the task; reusable resources, such as network communication links and gun barrels, can only execute one task at a time; and expendable resources (such as ammunition) are not affected by the timing of subtasks, except that they too can only handle one task. The matching process can be described in the following steps.

1) Decision variable constraints
For reusable resources $RS_m$ $(1 \leq m \leq K)$ and tasks $TA_i$ $(1 \leq i \leq N)$, if there is an allocation relationship, then $w_{im} = 1$, with two cases: either resource $RS_m$ is transferred to task $TA_i$ after processing task $TA_j$, i.e., the transfer variable $x_{jim} = 1$, or resource $RS_m$ is used for the first time, i.e., $x_{jim} = 0$. For consumable resources $RS_m$ $(1 \leq m \leq K)$ the transfer variable is always $x_{jim} = 0$, so both kinds of resources can be handled uniformly. With $x_{iim} = x_{jjm} = 0$, the resource-task allocation variables $w_{im}$ and the inter-task resource transfer variables $x_{jim}$ satisfy the following constraint:
$$\sum_{j=0}^{N} x_{jim} \leq w_{im} \quad (5)$$

where $i = 1, 2, \ldots, N$; $m = 1, 2, \ldots, K$.

2) Capacity constraints

The task $TA_i$ can be successfully executed only when the capability of all resources assigned to it is not less than the task demand $ACap_i$, that is

$$\sum_{m=1}^{K} RSCap_{im}^{l}\, w_{im} \geq ACap_{i}^{l} \quad (6)$$
where $i = 1, 2, \ldots, N$; $l = 1, 2, \ldots, L$; and $L$ is the number of resource capability types.

3) Time constraints

Different tasks have a sequential relationship during execution, so task processing times with temporal characteristics satisfy the following constraint:

$$s_j - s_i \geq AExeTime_i, \quad \text{if } a_{ij} = 1 \quad (7)$$

where $i, j = 1, 2, \ldots, N$ and $N$ is the total number of tasks. In addition, the start time $s_j$ at which resource $RS_m$ begins processing task $TA_j$ must be no earlier than the time at which resource $RS_m$ completes task $TA_i$ and transfers between them, that is

$$s_j \geq s_i + \frac{DT\!\left(TA_i, TA_j\right)}{v_m}\, x_{ijm} + AExeTime_i \quad (8)$$
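As a concreteness check, constraints (5) and (7) can be encoded directly; the capacity constraint (6) is omitted for brevity, and the function name, array layout, and sample data below are our own assumptions:

```python
import numpy as np

def feasible(w, x, start, exe_time, a):
    """Check constraints (5) and (7) for a candidate plan.
    w: (N, K) 0/1 allocation variables w_im
    x: (N, N, K) 0/1 transfer variables x_jim (resource m moves
       from task TA_j to task TA_i)
    start: (N,) start times s_i; exe_time: (N,) AExeTime_i
    a: (N, N) precedence matrix, a_ij = 1 if TA_i precedes TA_j
    """
    n = len(start)
    ok5 = np.all(x.sum(axis=0) <= w)                      # Eq. (5)
    ok7 = all(start[j] - start[i] >= exe_time[i]          # Eq. (7)
              for i in range(n) for j in range(n) if a[i, j] == 1)
    return bool(ok5 and ok7)

w = np.ones((2, 1))
x = np.zeros((2, 2, 1))
a = np.array([[0, 1], [0, 0]])   # TA_1 must precede TA_2
ok = feasible(w, x, np.array([0.0, 5.0]), np.array([3.0, 2.0]), a)
bad = feasible(w, x, np.array([0.0, 2.0]), np.array([3.0, 2.0]), a)
```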
(3) Solution goals

The task-based resource planning model has two goals:

1) Time goal

Minimize the total task execution time under all of the above constraints, that is, $\min Time$.

2) Capability utilization goal

For each task $TA_i$ $(i = 1, 2, \ldots, N)$, its capability demand vector is $ACap_i = \left(ACap_1^i, ACap_2^i, \ldots, ACap_L^i\right)$ [15]; for resource $RS_m$, the capability vector is $RSCap^m = \left(RSCap_1^m, RSCap_2^m, \ldots, RSCap_L^m\right)$, and for each capability type $j$ $(j = 1, 2, \ldots, L)$ the resources assigned to $TA_i$ must jointly cover $ACap_j^i$, cf. Eq. (6). The objective function of the capability utilization rate is

$$\max U_{RS} = \max \frac{\displaystyle\sum_{i=1}^{N} \sum_{j=1}^{L} ACap_j^i}{\displaystyle\sum_{i=1}^{N} \sum_{m \in G_r(TA_i)} \sum_{j=1}^{L} RSCap_j^m} \quad (9)$$

where $G_r(TA_i)$ represents the resource group assigned to task $TA_i$.
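The utilization objective of Eq. (9) can be sketched numerically. The function name, array shapes, and the small example values below are illustrative assumptions only:

```python
import numpy as np

def capacity_utilization(acap, rscap, groups):
    """U_RS of Eq. (9): total demanded capability over the total
    capability of the resources assigned to each task.
    acap:   (N, L) demand vectors ACap^i
    rscap:  (K, L) resource capability vectors RSCap^m
    groups: length-N list; groups[i] = resource indices G_r(TA_i)
    """
    demanded = acap.sum()
    supplied = sum(rscap[groups[i]].sum() for i in range(len(groups)))
    return demanded / supplied

acap = np.array([[2.0, 1.0], [1.0, 1.0]])    # two tasks, L = 2
rscap = np.array([[3.0, 2.0], [2.0, 2.0]])   # two resources
groups = [[0], [1]]                           # task i uses resource i
u = capacity_utilization(acap, rscap, groups)  # 5 / 9
```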
The combined objective is

$$\begin{cases} \min Time \\[4pt] \max U_{RS} = \max \dfrac{\sum_{i=1}^{N} \sum_{j=1}^{L} ACap_j^i}{\sum_{i=1}^{N} \sum_{m \in G_r(TA_i)} \sum_{j=1}^{L} RSCap_j^m} \end{cases} \quad (10)$$
where $i, j = 1, 2, \ldots, N$; $1 \leq l \leq L$; and $N$ and $L$ are the total number of tasks and the number of resource capability types, respectively.

(4) Optimal planning based on the improved particle swarm optimization algorithm and the clustering algorithm

1) First, initialize the parameters of the particle swarm optimization algorithm [16]: the initial values of $c_1$ and $c_2$ are the standard recommended value 1.494; $r_1$ and $r_2$ are random numbers in $[0, 1]$; the initial velocity is set to 0; the population size is $Q_n = 30$; the maximum number of iterations is $t_{max} = 30$; and the error threshold is $\varepsilon = 10^{-2}$.

2) At $t = 0$, the initial population of the particle swarm optimization algorithm is obtained by matching the clustering results against the capability values of the combat tasks, and the fitness is the objective function.

3) In each iteration, the local optimal position of each particle is initialized to its initial position; the global optimal position of the swarm takes the best of the local optimal positions generated in each iteration.
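The clustering stage and the stated PSO parameters can be sketched as follows. A plain K-means (rather than K-means++ seeding) is shown, a single run replaces the 10-run selection described in the text, and the synthetic capability data are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(X, k, iters=50):
    """Plain K-means on resource capability vectors, with Euclidean
    similarity as in the text. The text repeats clustering at least
    10 times and keeps the run with the lowest intra/inter distance
    ratio; one run is shown here for brevity."""
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# PSO parameters as stated: c1 = c2 = 1.494, zero initial velocity,
# population Qn = 30, t_max = 30, error threshold 1e-2.
c1 = c2 = 1.494
Qn, t_max, eps = 30, 30, 1e-2

# Two synthetic capability clusters (illustrative data only).
X = np.vstack([rng.normal(0.0, 0.05, (20, 3)),
               rng.normal(1.0, 0.05, (20, 3))])
labels, centers = kmeans(X, k=2)
```

In the full scheme, each PSO particle would encode a resource-task assignment drawn from the matched cluster, with Eq. (10) as its fitness.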
5 Conclusion

In terms of battlefield resource planning, this paper fully analyzes the relationships between battlefield resources and combat tasks, environments, and targets, together with the requirements for rapidity and accuracy of battlefield resource planning under complex battlefield situations. Drawing on the advantages of clustering algorithms from machine learning, and weighing the strengths and weaknesses of genetic algorithms and particle swarm optimization among heuristic search algorithms, a resource planning method based on a clustering algorithm and an improved particle swarm optimization algorithm is proposed. Using the basic attributes of battlefield resources, such as capability, location, and association relationships, the clustering algorithm intelligently classifies resources and updates the classification in real time to prepare for fast matching. Particle swarm optimization, simple and fast among heuristic algorithms, serves as the basic algorithm, and fast matching against the clustering results avoids the slow search that introducing a genetic algorithm would cause. This improves the accuracy of resource planning and provides support for mission planning and adjustment in the operational planning and implementation stages.
References

1. Guan, X.J., Chen, C.B., Lu, M.: Research on battlefield resource planning and scheduling system. Ind. Control Comput. 34(2), 125–126 (2021)
2. Liu, Z.P., Si, G.Y., Tang, Y.B., Liu, S.J.: Research on joint operation resource scheduling model. Sci. Technol. Guide 37(13), 23–31 (2019)
3. Yang, Y., Wang, X.Q.: Optimization of PID parameters in DC motor control system based on particle swarm optimization and bacterial foraging optimization algorithms. J. Yanshan Univ. 40(3), 270–275 (2016)
4. Sun, Y., Mao, S.J., Zhou, Y., Mao, X.B., Zhu, X.Q.: Operational resource scheduling method under complex task relationship constraints. Command Inf. Syst. Technol. 12(6), 39–44 (2021)
5. Liu, Z.P., Si, G.Y., Tang, Y.B., Liu, S.J.: Research on joint operation resource scheduling model. Sci. Technol. Herald 37(13), 23–31 (2019)
6. Huang, G., Liu, Z., van der Maaten, L., et al.: Densely connected convolutional networks. In: The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2261–2269 (2017)
7. Singh, B., Davis, L.S.: An analysis of scale invariance in object detection—SNIP. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3578–3587 (2018)
8. Gupta, S., Kenkre, S., Talukdar, P.: CaRe: open knowledge graph embeddings. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 378–388 (2019)
9. Liu, Y.C., Li, H.Y.: Overview of research on domain knowledge atlas. Comput. Syst. Appl. 29(6), 1–12 (2020)
10. Luo, X.M., He, R., Zhu, Y.L.: Basic principles of equipment operational test design and evaluation on research. J. Armored Force Eng. Coll. 28(6), 1–7 (2014)
11. Zhao, X.A., Jiang, Z.P., Huang, X.S.: Comprehensive analysis framework for combat capability of weapon system of systems. Fire Command Control 36(7), 7–10 (2011)
12. Long, H.J., Liu, J.M.: A review of research on the planning methods of naval battlefield combat resources. Ship Electron. Eng. 9(321), 16–21 (2021)
13. Wu, Z.H., Yang, R.F., Guo, C.X., Ge, S.C., Chen, X.L.: Analysis and verification of finite time servo system control with PSO identification for electric servo system. Energies 12(18), 3578–3580 (2019)
14. Karczmarek, P., Kiersztyn, A., Pedrycz, W., et al.: K-means-based isolation forest. Knowl. Based Syst. 195(11), 105659 (2020)
15. Shen, X.N.: Research on Multi-Objective Optimization Method Based on Evolutionary Algorithm. Nanjing University of Science and Technology, Nanjing (2008)
16. Chen, J., Gong, S.Y.: Optimal allocation of combat resources based on genetic algorithm. Sci. Technol. Eng. 13(29), 8647–8650, 8656 (2013)
New Features Extraction Method for Fault Diagnosis of Bearing Based on Legendre Multiwavelet Neural Network Chengyou Luo, Xiaoyang Zheng, Dongqing Jia, and Zeyu Ye
Abstract This paper presents a new feature-extraction-based bearing fault diagnosis method built on the Legendre multiwavelet neural network (LWNN). Specifically, the proposed method uses LWNN to approximate the rolling bearing signal. The fault characteristics of the signal are obtained from the approximation coefficient matrix of LWNN. Then, only the standard deviation (SD) and maximum (MAX) of the coefficient matrix are calculated to design the feature vector. The feature vector is used as the input of a BP classifier to accomplish the fault diagnosis of the bearing. The proposed method not only overcomes the shortcoming of traditional fault diagnosis methods, which largely rely on artificial feature extraction and expert experience, but also avoids the difficulties of designing and training deep network architectures. Finally, experiments are carried out on the bearing dataset of Case Western Reserve University to verify the effectiveness of the proposed method.

Keywords Classification · Rolling bearing fault diagnosis · Legendre multiwavelet neural network · BP neural network
1 Introduction

Modern industry uses a variety of complex rotating machines, which are closely linked to ensure the operation of the entire system. Rolling bearings, as extremely important supporting parts, have been widely used in much rotating machinery, and their health status is closely related to the stable operation of mechanical systems [1]. In order to extract effective features for classification, researchers have carried out in-depth studies in three directions: extraction methods for complex features, traditional machine learning methods, and deep learning techniques [1–6]. For example, Yuan et al. adopted the improved multivariate multiscale sample entropy of the original signal as the input of a BP neural network (BPNN) to realize fault diagnosis of the rolling bearing [1]. Li et al. selected BPNN to locally learn dissimilar time-domain and frequency-domain features from signals at different scales, and then obtained the fault diagnosis result with an SVM classifier [2]. Yang et al. proposed a fault diagnosis scheme combining hierarchical symbol analysis and a convolutional neural network (CNN) for centrifugal pumps and motor bearings [3]. Shao et al. proposed an intelligent fault diagnosis method based on an extreme learning machine and a deep wavelet autoencoder [4]. Although these methods have achieved good results, the selection of effective features depends on a large amount of expert experience, and the complex structures of deep neural networks make it difficult to meet the requirements of identifying various fault types in real life [2, 4–6].

To solve the above problems, this paper proposes a new bearing fault feature extraction technique based on LWNN. The proposed method uses the coefficient matrix of the LWNN approximation of the signal as the fault characteristics, then calculates the SD and MAX of the coefficient matrix to design the feature vector, which is used as the input of a BPNN classifier for classification. The main contributions and advantages of this work can be summarized as follows: (1) This paper proposes LWNN to extract the fault features of rolling bearing signals, which can effectively and efficiently extract the internal features of various bearing faults at different decomposition levels without losing any information. (2) The method uses a BPNN classifier with a simpler structure to avoid the difficulty of designing and training a deep neural network. The structure of this article is as follows: the method is described in detail in Sect. 2; Sect. 3 demonstrates the effect of the method and comparison experiments with existing methods; the conclusion is given in Sect. 4.

C. Luo · X. Zheng (B) · D. Jia · Z. Ye School of Artificial Intelligence, Chongqing University of Technology, Chongqing 400054, China e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 Y. S. Shmaliy and A. Nayyar (eds.), 7th International Conference on Computing, Control and Industrial Engineering (CCIE 2023), Lecture Notes in Electrical Engineering 1047, https://doi.org/10.1007/978-981-99-2730-2_94
2 The Proposed Method

The proposed method uses the coefficient matrix of the LWNN approximation of the signal as the fault characteristics, and then calculates the SD and MAX of the coefficient matrix as the input of the BPNN classifier. The specific process of the method is shown in Fig. 3.
2.1 Legendre Multiwavelet Neural Network

LWNN is a three-layer feedforward neural network composed of a single input layer, a hidden layer and an output layer, as illustrated in Fig. 1.
Input Layer: First, set the decomposition scale $n$ and the order $p$ of the Legendre multiwavelet basis. The input data vector is then partitioned into $2^n$ subintervals, denoted by $x = [x_{In0}, \ldots, x_{Inl}, \ldots, x_{In(2^n-1)}]^T$, which are passed into the $2^n$ neurons of the input layer.
New Features Extraction Method for Fault Diagnosis of Bearing Based …
Fig. 1 The structure of LWNN
Hidden Layer: The main neural cell $\phi_{k,nl}$ in this layer is essentially a linear combination of the dilated and translated Legendre polynomials. Let $L_k(x)$ denote the Legendre polynomial of order $k$, defined by

$L_0(x) = 1, \quad L_1(x) = x, \quad L_{k+2}(x) = \frac{2k+3}{k+2}\, x\, L_{k+1}(x) - \frac{k+1}{k+2}\, L_k(x).$   (1)

The Legendre multiwavelet basis functions on decomposition scale 0 are

$\phi_k(x) = \begin{cases} \sqrt{2k+1}\, L_k(2x-1), & x \in [0,1), \\ 0, & x \notin [0,1). \end{cases}$   (2)

The subspace $V_{p,n}$ is spanned by the $2^n p$ basis functions obtained from $\phi_0, \ldots, \phi_{p-1}$ by dilation and translation, i.e.

$V_{p,n} := \operatorname{span}\left\{ \phi_{k,nl}(x) = 2^{n/2}\, \phi_k(2^n x - l),\; 0 \le k \le p-1,\; 0 \le l \le 2^n - 1 \right\}.$   (3)

The output $y_{Inl}$ of the $l$th neural cell in the hidden layer to the output layer is

$y_{Inl} = \sum_{k=0}^{p-1} s_{k,nl}\, \phi_{k,nl}(x_{Inl}).$   (4)

Output Layer: The output of LWNN is

$y = \sum_{l=0}^{2^n-1} y_{Inl}.$   (5)
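As a concrete illustration of Eqs. (1)–(5), the sketch below builds the Legendre multiwavelet basis and approximates a signal on [0, 1). It is not the authors' code: the coefficients $s_{k,nl}$ are fitted here by per-subinterval least squares rather than by network training, and all function names are illustrative.

```python
# Sketch of Eqs. (1)-(5); least-squares fitting stands in for LWNN training.
import numpy as np

def legendre(k, x):
    """L_k(x) via the three-term recurrence of Eq. (1)."""
    if k == 0:
        return np.ones_like(x)
    lprev, lcur = np.ones_like(x), np.asarray(x, dtype=float)  # L_0, L_1
    for j in range(1, k):
        lprev, lcur = lcur, ((2 * j + 1) * x * lcur - j * lprev) / (j + 1)
    return lcur

def phi(k, x):
    """Scale-0 Legendre multiwavelet basis function of Eq. (2)."""
    inside = (x >= 0) & (x < 1)
    return np.where(inside, np.sqrt(2 * k + 1) * legendre(k, 2 * x - 1), 0.0)

def phi_knl(k, n, l, x):
    """Dilated and translated basis function of Eq. (3)."""
    return 2 ** (n / 2) * phi(k, 2 ** n * x - l)

def lwnn_fit(signal, n, p):
    """Fit s_{k,nl} on each of the 2^n subintervals (Eq. (4)) and return
    the coefficient matrix together with the reconstruction (Eq. (5))."""
    x = np.linspace(0, 1, len(signal), endpoint=False)
    coeffs = np.zeros((2 ** n, p))
    recon = np.zeros(len(signal))
    for l in range(2 ** n):
        mask = (x >= l / 2 ** n) & (x < (l + 1) / 2 ** n)
        B = np.column_stack([phi_knl(k, n, l, x[mask]) for k in range(p)])
        coeffs[l], *_ = np.linalg.lstsq(B, signal[mask], rcond=None)
        recon[mask] = B @ coeffs[l]
    return coeffs, recon
```

Even at a modest n = 4 and p = 6, this reproduces a smooth test signal with very small error, consistent with the good approximation reported for n = 6, p = 8 in Fig. 2.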
Fig. 2 The result and error of approximating bearing fault 1 by LWNN
Specifically, for the approximation of bearing fault 1, the resolution of LWNN is n = 6 and the order of the Legendre wavelet is p = 8. It can be seen from Fig. 2 that LWNN achieves a good approximation of the rolling bearing signal.
Fig. 3 Flowchart of the proposed approach for the bearing fault recognition
2.2 The Flowchart of the Proposed Method

The flowchart in Fig. 3 describes the diagnosis principle of the proposed method in detail, where the resolution of LWNN is n = 6 and the order of the Legendre wavelet is p = 8. The general steps of the proposed method are as follows:
Step 1: Vibration signals of 10 kinds of bearing health conditions under different working conditions are collected. The dataset consists of 1000 samples, 100 randomly selected for each fault type.
Step 2: To obtain the optimal resolution level n and wavelet order p of LWNN, 10 samples of each fault type under each load are selected for their determination by training.
Step 3: The feature coefficient matrix is obtained by training LWNN on the 1000 samples. The SD and MAX of the coefficient matrix are then calculated to build the feature vector, which is used as the input of the BPNN classifier.
Step 4: Finally, 800 samples are randomly selected to train the BPNN classifier, and the other 200 samples are fed to the trained classifier to verify the bearing fault diagnosis performance.
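Step 3 can be sketched as follows. How the paper groups the coefficient matrix into its 18-dimensional BPNN input is not spelled out, so the column-wise grouping and the function name below are assumptions for illustration.

```python
# Illustrative sketch of the SD/MAX feature construction (grouping assumed).
import numpy as np

def feature_vector(coeff_matrix):
    """Concatenate per-column standard deviation (SD) and maximum (MAX)
    of an LWNN coefficient matrix into a single feature vector."""
    sd = coeff_matrix.std(axis=0)   # SD of each coefficient column
    mx = coeff_matrix.max(axis=0)   # MAX of each coefficient column
    return np.concatenate([sd, mx])
```

With a hypothetical coefficient matrix of 9 columns this yields an 18-dimensional vector, matching the 18-input BPNN described in Sect. 3.2.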
3 Experimental Results Verification

3.1 Experimental Data Set

The bearing vibration signal data set provided by the Bearing Data Center of Case Western Reserve University is used to verify the proposed method. The bearing data are measured by an acceleration sensor at four different loads (0 hp, 1 hp, 2 hp and 3 hp). The dataset used in this work includes four bearing conditions: ball fault (BF), inner race fault (IR), outer race fault (OR) and healthy condition (NC). The damage sizes of the fault conditions are 0.007, 0.014 and 0.021 inches.
3.2 Analysis of Diagnosis Results

This subsection demonstrates the effectiveness of the proposed method on the data set and compares it with CNN, Teager energy operator demodulation with a deep autoencoder (TEOD-DAE), a bidirectional gated recurrent unit (BiGRU), and BPNN combined with the Daubechies wavelet (DW-BPNN). The network structure of the BPNN classifier in the experiment is 18-50-10, and the classifier is trained for 50 iterations with the Levenberg–Marquardt method. Experimental results show that the method completes the fault diagnosis task with high precision.
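The 18-50-10 BPNN can be pictured with the minimal one-hidden-layer network below. This is a stand-in, not the paper's implementation: plain gradient descent replaces the Levenberg–Marquardt training, and the data, sizes and names used in the example are illustrative.

```python
# Minimal one-hidden-layer classifier (stand-in for the 18-50-10 BPNN;
# gradient descent replaces Levenberg-Marquardt training for brevity).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_bpnn(X, y, hidden, classes, lr=0.5, epochs=800, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W1 = rng.normal(0.0, 0.1, (d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.1, (hidden, classes)); b2 = np.zeros(classes)
    Y = np.eye(classes)[y]                       # one-hot targets
    for _ in range(epochs):
        H = sigmoid(X @ W1 + b1)                 # hidden activations
        Z = H @ W2 + b2
        P = np.exp(Z - Z.max(axis=1, keepdims=True))
        P /= P.sum(axis=1, keepdims=True)        # softmax probabilities
        G = (P - Y) / n                          # output-layer error signal
        GH = (G @ W2.T) * H * (1.0 - H)          # back-propagated hidden error
        W2 -= lr * H.T @ G; b2 -= lr * G.sum(axis=0)
        W1 -= lr * X.T @ GH; b1 -= lr * GH.sum(axis=0)
    return W1, b1, W2, b2

def predict(params, X):
    W1, b1, W2, b2 = params
    return (sigmoid(X @ W1 + b1) @ W2 + b2).argmax(axis=1)
```

On a toy three-class problem with well-separated clusters this tiny network reaches high training accuracy, which is all the sketch is meant to show.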
In order to make the experimental results more objective, ten-fold cross-validation is used in each group of experiments. As can be seen from Table 1, the highest and lowest fault accuracy rates are 99.5% and 98.5%, obtained under the 0HP and 1HP loads respectively. The confusion matrix in Fig. 4 shows that only IR14 has three wrong judgments under the 1HP load, while the accuracy for the other 9 kinds of faults is 100%. Table 2 compares this method with the other methods; the accuracy of the proposed method is higher than that of the other deep learning methods and the DW-BPNN method under all loads, exceeding the best results of the other methods by 0.4%, 0.9%, 0.3% and 0.2% respectively.

Table 1 Diagnosis results of the proposed method

Loads  BF07  BF14  BF21  IR07  IR14  IR21  OR07  OR14  OR21  NC    Average
0HP    0.95  1.00  1.00  1.00  1.00  1.00  1.00  1.00  1.00  1.00  0.995
1HP    1.00  1.00  1.00  1.00  0.85  1.00  1.00  1.00  1.00  1.00  0.985
2HP    1.00  0.95  1.00  1.00  0.95  1.00  1.00  1.00  1.00  1.00  0.990
3HP    1.00  0.95  1.00  1.00  0.95  1.00  1.00  1.00  1.00  1.00  0.990
Fig. 4 Multiclass confusion matrices of the proposed approach for the 0HP and 1HP loads
Table 2 Comparison of the obtained results with other methods

Method               0HP     1HP     2HP     3HP
CNN [5]              98.60%  97.60%  98.20%  96.80%
TEOD-DAE [6]         94.90%  94.80%  95.63%  95.27%
BiGRU                82.95%  95.35%  95.55%  96.15%
DW-BPNN              99.10%  96.10%  98.70%  98.80%
The proposed method  99.50%  98.50%  99.00%  99.00%
4 Conclusions

In order to improve the reliability and safety of modern industrial machinery, this paper presents a new bearing fault diagnosis method with feature extraction based on LWNN. The method uses LWNN to approximate the signal, obtains the coefficient matrix, and designs the feature vector from the coefficient matrix; the feature vector is used as the input of the BPNN classifier to complete fault diagnosis. For comparison, popular fault recognition methods, namely the CNN, TEOD-DAE, BiGRU and DW-BPNN methods, are also evaluated on the same bearing dataset. The experimental results show that the proposed method has the highest diagnostic accuracy and extracts more accurate features than the existing fault diagnosis methods.
References
1. Yuan, R., Lv, Y., Li, H., Song, G.: Robust fault diagnosis of rolling bearings using multivariate intrinsic multiscale entropy analysis and neural network under varying operating conditions. IEEE Access 7, 130804–130819 (2019)
2. Li, J., Yao, X., Wang, X., Yu, Q., Zhang, Y.: Multiscale local features learning based on BP neural network for rolling bearing intelligent fault diagnosis. Measurement 153, 107419 (2020)
3. Yang, Y., Zheng, H., Li, Y., Xu, M., Chen, Y.: A fault diagnosis scheme for rotating machinery using hierarchical symbolic analysis and convolutional neural network. ISA Trans. 91, 235–252 (2019)
4. Shao, H., Jiang, H., Li, X., Wu, S.: Intelligent fault diagnosis of rolling bearing using deep wavelet auto-encoder with extreme learning machine. Knowl.-Based Syst. 140, 1–14 (2018)
5. Ding, X., He, Q.: Energy-fluctuated multiscale feature learning with deep convnet for intelligent spindle bearing fault diagnosis. IEEE Trans. Instrum. Meas. 66(8), 1926–1935 (2017)
6. Pei, X., Zheng, X.: Intelligent bearing fault diagnosis based on Teager energy operator demodulation and multiscale compressed sensing deep autoencoder. Measurement 179, 109452 (2021)
Research on Slide-Stainer Layout Dynamic Optimization
Debin Yang, Bingxian Liu, Kehui Wang, Kun Gui, Kewei Chen, and Fangyan Dong
Abstract Slide-Stainers are of great significance for the digitalization and intellectualization of pathological staining. In view of the problem of inaccurate matching between layout and demand, the Slide-Stainer layout optimization problem is defined based on common staining methods in pathology, with the aim of obtaining a scientific and reasonable Slide-Stainer layout that makes the whole staining process more efficient and smoother; the result can also guide future specification design of Slide-Stainers. First, this paper describes the Slide-Stainer layout optimization problem and defines the required variables and basic constants of the Slide-Stainer. Secondly, based on this definition, a mathematical model of the Slide-Stainer layout optimization problem is established, together with an optimization function comprising two weighted sub-criteria. Finally, the optimal layout design under the given conditions is obtained by simulating practical operation with the common pap staining method, and the rationality of the layout is verified with the Y algorithm. The results show that staining with this layout is highly efficient.

Keywords Slide-Stainer · Pathological slides · Mathematical model · Computational intelligence · Layout optimization
1 Introduction

With the continuous development of digital pathology, remote pathological diagnosis online has become possible, which greatly alleviates the shortage of pathologists and highlights the importance of the Slide-Stainer. The Slide-Stainer, shown in Fig. 1, is an innovative staining instrument that follows the staining steps manually performed by pathologists to complete the whole process of slide staining
D. Yang · K. Chen
Ningbo University, Ningbo 315211, China
B. Liu · K. Wang · K. Gui · F. Dong (B)
Ningbo Konfoong Bioinformation Tech Co., Ltd., Ningbo 315400, China
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
Y. S. Shmaliy and A. Nayyar (eds.), 7th International Conference on Computing, Control and Industrial Engineering (CCIE 2023), Lecture Notes in Electrical Engineering 1047, https://doi.org/10.1007/978-981-99-2730-2_95
and replaces the complicated manual steps. It has the advantages of simple operation, high efficiency and stable staining quality, meets the quality requirements of pathology departments and modern laboratories, and is increasingly welcomed by pathologists [1]. However, the Slide-Stainers currently on the market have a single specification and lack an intelligent layout optimization system, which increases vat idle and overload time, reduces the overall efficiency of pathological staining, and results in low adoption, low efficiency and waste. Therefore, in order to adapt to different market demands and help medical units cope effectively with changes in staining demand, the Slide-Stainer layout must be optimized: staining specifications should be calculated scientifically to guide the vat layout and reduce waiting and blocking time as far as possible. The Slide-Stainer layout optimization problem is a derivative of the Slide-Stainer scheduling problem. By taking the objective function of the Slide-Stainer scheduling decision as the evaluation index, the advantages and disadvantages of Slide-Stainer specifications and vat layouts can be evaluated, so as to reverse-optimize the layout design, which directly affects the utilization rate and production efficiency of the Slide-Stainer. Hu et al. [2] applied an assembly-line balancing method based on the PSO algorithm to solve the assembly-line balancing problem. Chartlton [3] utilizes a mathematical analysis method to solve assembly-line balancing. Kayar [4] uses the Hoffman method, one of the heuristic methods, to balance the assembly line; the line is then rebalanced with the Arena simulation program and the results under two different balance resolutions are given.
Most of the existing research [5–10] addresses the balance problem of industrial production assembly lines, which is similar to the Slide-Stainer layout optimization problem solved in this paper. However, because the Slide-Stainer needs to change its layout flexibly according to different staining methods and also faces scenarios in which multiple processes execute at the same time, it differs from the traditional assembly-line balance problem. In addition, the optimization procedure should not be too complicated, given the limitations and cost of the Slide-Stainer. Based on the above, a new mathematical model suitable for Slide-Stainer layout optimization is proposed in this paper. The obtained results are evaluated with the Y algorithm [11] to verify their effectiveness.
Fig. 1 3D model of Slide-Stainer
Fig. 2 Site plan of Slide-Stainer
2 Description of Slide-Stainer Layout Optimization Problem

When staining slides, the Slide-Stainer drives multiple slides through single or multiple staining methods in the optimal order. A staining method consists of J staining stages s_j; each stage specifies the reagent it requires and a staining time Pt_j, and the sequence of the staining stages is fixed and cannot be changed. A Slide-Stainer usually contains a certain number of staining vats v_m, which can be set up as corresponding functional vats (reagent vat, water vat, loading vat, unloading vat, oven, etc.) and are allocated to staining stages according to their needs. There are two decision scenarios in the Slide-Stainer layout optimization problem. One is, before production of a Slide-Stainer, to evaluate vat utilization, execution efficiency, cost, size and other parameters according to the buyer's needs, so as to customize the most appropriate specification design. The other is, given the Slide-Stainer specification (i.e., the total number of vats is known), to evaluate the number of vats used, total staining time, stage blocking time and other parameters for the staining method to be executed, providing guidance for the allocation of vats within the Slide-Stainer. The site plan of the Slide-Stainer is shown in Fig. 2. Some characteristics of the Slide-Stainer layout optimization problem are as follows:
• A vat can accommodate only one slide at a time.
• Each staining stage has at least one staining vat.
• A staining vat can only serve one staining stage.
• The total number of staining vats owned by all staining stages cannot exceed the total number of staining vats in the Slide-Stainer.
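The last three rules can be encoded as a feasibility check on the binary allocation matrix z of Sect. 3. This is an illustrative sketch, not the authors' code; the first rule (one slide per vat at a time) is a scheduling-time constraint and is not captured by the allocation matrix.

```python
# Illustrative feasibility check for a vat-to-stage allocation matrix.
import numpy as np

def layout_feasible(z, total_vats):
    """z[m, j] = 1 iff vat m is assigned to staining stage j."""
    z = np.asarray(z)
    one_stage_per_vat = (z.sum(axis=1) <= 1).all()  # a vat serves one stage
    stage_has_vat = (z.sum(axis=0) >= 1).all()      # every stage gets a vat
    within_budget = z.sum() <= total_vats           # cannot exceed total vats
    return bool(one_stage_per_vat and stage_has_vat and within_budget)
```

Note that a vat may be left entirely unassigned, which matches the idle vat that appears in the experimental results.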
3 Mathematical Model

The variables used in this paper are shown in Table 1; their specific descriptions are as follows:
Table 1 Variables used in this article

Item                Definition                              Universal set
Vat allocation      $Z_{m,j} \in \{0, 1\}$                  $Z = \{Z_{1,1}, \ldots, Z_{m,j}, \ldots, Z_{M,J}\}$
Stage takt time     $t_j = Pt_j / M_j$                      $t = \{t_1, \ldots, t_j, \ldots, t_J\}$
Global takt time    $\bar{t} = \sum_{j=1}^{J} Pt_j / M$     $\bar{t}$
Objective function  $X = \alpha X_1 + \beta X_2$            $X$

where:

$Z_{m,j} \in \{0, 1\}$   (1)
$Z_{m,j}$ is a binary variable that represents the stage assignment of a vat: $Z_{m,j} = 1$ means that vat $v_m$ is assigned to staining stage $s_j$.

$t_j = Pt_j / M_j$   (2)

$t_j$ is the takt time of each stage, equal to the ratio of the staining time $Pt_j$ required by a slide in staining stage $s_j$ to the number of vats $M_j$ assigned to that stage. The value of $M_j$ is constrained by the following formula:

$M_j = \sum_{m=1}^{M} z_{m,j}$   (3)
The stage vat counts $M_j$ also satisfy the following formulas:

$\sum_{j=1}^{J} M_j \le M$   (4)

$\bar{t} = \sum_{j=1}^{J} Pt_j / M$   (5)
$\bar{t}$ represents the global staining takt time, numerically equal to the ratio of the sum of the staining times $Pt_j$ of all staining stages $s_j$ to the total number of Slide-Stainer vats $M$.

$X_1 = \sum_{j=1}^{J} \left| t_j - \bar{t} \right|$   (6)
$X_1$ is called the takt deviation; it is the sum, over all stages, of the absolute difference between each stage's takt time and the global takt time. It indicates how busy each stage is, and thus helps evaluate whether the whole staining process is smooth. The smaller $X_1$ is, the faster a single slide is completed and the more balanced the stage takt times are.
$X_2 = \sum_{j=1}^{J} M_j = \sum_{m=1}^{M} \sum_{j=1}^{J} z_{m,j}$   (7)
$X_2$ is the total number of vats used in the Slide-Stainer, equal to the sum of the numbers of vats used in each stage. To reduce the number of vats used and thus the reagent cost, fewer vats are preferred; that is, the smaller $X_2$ is, the better.

$X = \alpha X_1 + \beta X_2$   (8)

The objective value $X$ of the mathematical model is a multi-objective function composed of the two sub-function values $X_1$ and $X_2$. Since decreasing the number of vats $X_2$ inevitably increases the takt deviation $X_1$, the sub-functions are assigned weights $\alpha$ and $\beta$ and the total objective value is obtained as their weighted sum. The specific objective function is:

$f = \min(X)$   (9)
Through the above model, stages with long staining and blocking times are allocated more vats while the total number of vats is reduced as much as possible, making the whole staining process smoother and faster. Due to machine size constraints, this article focuses more on using as few vats as possible, so $X_2$ is given the higher weight. Based on algorithmic calculation and experimental experience, the weight coefficients $\alpha$ and $\beta$ are set to 0.1 and 0.9 respectively.
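The model of Eqs. (2)–(9) can be sketched as an evaluator plus an exhaustive search. The paper solves the real instance with Gurobi, so the brute force and the toy staining times below are purely illustrative.

```python
# Sketch of the layout objective (Eqs. (2)-(9)) with a toy brute-force search.
from itertools import product

def objective(Pt, Mj, M_total, alpha=0.1, beta=0.9):
    tbar = sum(Pt) / M_total                  # global takt time, Eq. (5)
    t = [p / m for p, m in zip(Pt, Mj)]       # stage takt times, Eq. (2)
    X1 = sum(abs(tj - tbar) for tj in t)      # takt deviation, Eq. (6)
    X2 = sum(Mj)                              # vats used, Eq. (7)
    return alpha * X1 + beta * X2, X1, X2     # weighted objective, Eq. (8)

def best_layout(Pt, M_total, alpha=0.1, beta=0.9):
    """Exhaustively search vat allocations (only viable for tiny instances)."""
    best = None
    for Mj in product(range(1, M_total + 1), repeat=len(Pt)):  # >= 1 vat/stage
        if sum(Mj) <= M_total:                                 # Eq. (4)
            X, _, _ = objective(Pt, Mj, M_total, alpha, beta)
            if best is None or X < best[0]:
                best = (X, Mj)
    return best
```

On a toy method with staining times 240 s, 60 s, 60 s and six vats, the search allocates four vats to the long first stage and one to each short stage, driving the takt deviation to zero.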
4 Experimental Verification

The above model is implemented in Python and solved with the Gurobi solver. The staining method used in the experiment is the pap staining method. First, according to the shape constraints of the Slide-Stainer, the initial number of stations M was set to 40. The task of the experiment is therefore to allocate each vat reasonably through the mathematical model; the complete layout scheme, expressed through all the $M_j$, achieves the weighted optimal trade-off between staining efficiency and reagent use during the staining process. For comparison, the obtained layout is also compared with the simplest uniform-distribution layout. The procedure of the pap staining method, provided by local bioinformation companies, is shown in Table 2, and the experimental results are shown in Table 3. There are 14 stages in the pap staining method, so J = 14. In order to obtain the optimal layout for this staining method, the corresponding
Table 2 PAP staining method

s_j  Reagent           Pt_j
1    95% ethanol       4 min
2    Water             3 min
3    Hematoxylin       1.5 min
4    Water             3 min
5    95% ethanol       2 min
6    95% ethanol       2 min
7    Water             1 min
8    Papanicolaou      3 min
9    95% ethanol       1 min
10   95% ethanol       1 min
11   Absolute alcohol  1 min
12   Absolute alcohol  1 min
13   Xylene            1 min
14   Xylene            1 min
constraint conditions are:

$M_j = \sum_{m=1}^{40} z_{m,j}$   (10)

$\sum_{j=1}^{14} M_j \le 40$   (11)

$\bar{t} = \sum_{j=1}^{14} Pt_j / 40 = 38.25\,\mathrm{s}$   (12)

The corresponding objective function is:

$X_1 = \sum_{j=1}^{14} \left| t_j - \bar{t} \right|$   (13)

$X_2 = \sum_{m=1}^{40} \sum_{j=1}^{14} z_{m,j}$   (14)

$\min X = 0.1 X_1 + 0.9 X_2$   (15)

The experimental results are: $X = 44.89$, $X_1 = 97.99$, $X_2 = 39$.
Other information is listed in Table 3. It can be seen that among the 40 vats, 39 are assigned to the current work and one is idle; the maximum stage takt time is t_1 = 48 s and the minimum is 30 s. In order to verify the validity of the results, the layout is substituted into the Y algorithm for verification. Six slides are stained continuously in the experiment, with the pap staining method performed on all six. The staining of a single slide is known to require 1588.6 s, and the scheduling results are shown in Fig. 3(a).
Table 3 Experiment result table

s_j  Reagent           Pt_j     M_j  t_j
1    95% ethanol       4 min    5    48 s
2    Water             3 min    4    45 s
3    Hematoxylin       1.5 min  2    45 s
4    Water             3 min    4    45 s
5    95% ethanol       2 min    3    40 s
6    95% ethanol       2 min    3    40 s
7    Water             1 min    2    30 s
8    Papanicolaou      3 min    4    45 s
9    95% ethanol       1 min    2    30 s
10   95% ethanol       1 min    2    30 s
11   Absolute alcohol  1 min    2    30 s
12   Absolute alcohol  1 min    2    30 s
13   Xylene            1 min    2    30 s
14   Xylene            1 min    2    30 s
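The reported results can be cross-checked directly from the takt formulas of this section, with the staining times and vat counts transcribed from the tables above:

```python
# Staining times (s) and vat counts transcribed from Tables 2 and 3.
Pt = [240, 180, 90, 180, 120, 120, 60, 180, 60, 60, 60, 60, 60, 60]
Mj = [5, 4, 2, 4, 3, 3, 2, 4, 2, 2, 2, 2, 2, 2]

tbar = sum(Pt) / 40                   # global takt time: 1530 s / 40 = 38.25 s
t = [p / m for p, m in zip(Pt, Mj)]   # stage takt times t_j = Pt_j / M_j
X1 = sum(abs(tj - tbar) for tj in t)  # takt deviation
X2 = sum(Mj)                          # vats used
X = 0.1 * X1 + 0.9 * X2               # weighted objective
```

This yields X_2 = 39 with a maximum takt time of 48 s and a minimum of 30 s as stated, and X_1 = 98.0, X = 44.9, agreeing with the reported 97.99 and 44.89 up to rounding.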
Fig. 3 The scheduling scheme based on layouts
As shown, it took 1914.1 s for the Slide-Stainer to complete the staining of six pathological slides continuously. The whole staining process was smooth: the staining stages of each slide all met the staining-duration requirements, and there was no competition for the mechanical-arm drive, which shows that the layout is suitable for staining multiple slides under continuous loading. Then the uniform-distribution layout, in which three vats are allocated to each of the first twelve stages and two vats to each of the last two, was substituted into the Y algorithm, and the resulting total time is 2216.3 s. As shown in Fig. 4, the total work time was 15.8% longer than with the optimal layout, and one more vat was used. This demonstrates that the optimal layout is superior to the average layout.
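The 15.8% figure follows directly from the two Y-algorithm run times:

```python
# Arithmetic check of the comparison between the optimized and the uniform
# (average) layouts, using the two total staining times from the Y algorithm.
optimal, average = 1914.1, 2216.3
increase = (average - optimal) / optimal * 100  # extra work time in percent
```

The increase evaluates to about 15.79%, consistent with the stated 15.8%; the uniform layout also needs 40 vats against the optimized layout's 39.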
Fig. 4 Comparison diagram based on two layouts
5 Conclusion

The Slide-Stainer layout optimization problem is a derivative of the Slide-Stainer scheduling problem. When the Slide-Stainer performs a staining operation, its internal vats must be configured reasonably to achieve the best staining efficiency and vat utilization. At the same time, the evaluation index based on the Slide-Stainer layout can provide a reference for the structural design of new Slide-Stainers in the future, which is of great significance for promoting intelligent staining and the popularization of differentiated designs. Aiming at the layout optimization problem, this paper first describes the basic information of the Slide-Stainer and the layout optimization problem, and defines the relevant parameters and variables. Secondly, a mathematical model of the problem is proposed, with a multi-objective optimization function comprising two sub-function values. Finally, the pap staining method was used in a simulation experiment and the optimal layout design was obtained; to verify the effectiveness of the layout, the Y algorithm was used as the evaluation algorithm to schedule continuous slide staining, and the results show the rationality and effectiveness of the layout. In future work, more optimization objectives should be considered so that the mathematical model can coordinate the staining process more freely according to the operator's wishes, for example improving staining efficiency, saving reagent consumption, and optimizing specific steps.
In addition, the Slide-Stainer layout optimization problem and the staining scheduling problem are treated as two independent problems in this paper; in future work the two models can be combined and jointly optimized, making the linkage between them smoother.
References
1. Masmoudi, M., Houria, Z.B., Al Hanbali, A., Masmoudi, F.: Decision support procedure for medical equipment maintenance management. J. Clin. Eng. 41(1), 19–29 (2016)
2. Hu, X., Zhang, Y., Zeng, N., Wang, D.: A novel assembly line balancing method based on PSO algorithm. Math. Probl. Eng. 2014(pt.9), 1–10 (2014)
3. Guschinskaya, O., Dolgui, A., Guschinsky, N.: A combined heuristic approach for optimization of a class of machining lines. In: IEEE International Conference on Automation Science and Engineering, pp. 154–159. IEEE (2005)
4. Kayar, M., Akalin, M.: Comparing heuristic and simulation methods applied to the apparel assembly line balancing problem. Fibres Text. Eastern Europe (2016)
5. Che, A., Kats, V., Levner, E.: An efficient bicriteria algorithm for stable robotic flow shop scheduling. Eur. J. Oper. Res. 260(3), 964–971 (2017)
6. Li, W., Han, D., Gao, L., Li, X., Li, Y.: Integrated production and transportation scheduling method in hybrid flow shop. Chin. J. Mech. Eng. 35(1), 1–20 (2022)
7. Lu, C., Gao, L., Li, X., Pan, Q., Wang, Q.: Energy-efficient permutation flow shop scheduling problem using a hybrid multi-objective backtracking search algorithm. J. Clean. Prod. 144, 228–238 (2017)
8. Liu, S.Q., Kozan, E.: Scheduling a flow shop with combined buffer conditions. Int. J. Prod. Econ. 117(2), 371–380 (2009)
9. Dong, F., Chen, K., Hirota, K.: Computational intelligence approach to real-world cooperative vehicle dispatching problem. Int. J. Intell. Syst. 23(5), 619–634 (2008)
10. Chen, K., Dong, F., Wang, X., Zhu, Y., Chu, X., Hirota, K.: Research on vehicle scheduling problem of multi warehouse collaborative distribution and its application. Sci. Discov. 9(6), 401 (2021)
11. Yang, D., Liu, B., Wang, K., Gui, K., Chen, K., Dong, F.: A linear programming mathematical model for Slide-Stainer scheduling problem with transportation. In: The 10th International Conference on Information Systems and Computing Technology, Guilin, pp. 316–323. IEEE (2022)
Preparation of Nano-Diamond Thin Film by Single-Screw Extrusion Structure 3D Printer Xiuxia Zhang, Kewang Li, Jinquan Chu, and Shuyi Wei
Abstract With the deepening of research, wide-bandgap semiconductors have shown extremely attractive prospects because of their performance and superior reliability. Among them, nano-diamond has achieved notable results in various fields of industry; nano-diamond thin films are widely used in optical, thermal, medical and other fields, and nano-diamond has become a hot topic for researchers worldwide. In order to study and expand the applications of nano-diamond, nano-diamond thin films often need to be prepared. At present, the main preparation methods are chemical vapor deposition, inkjet printing and screen printing. Based on the excellent characteristics of the single-screw extruder and using nano-diamond materials, a 3D-printing process for preparing nano-diamond thin films was developed in our laboratory: nano-diamond thin film is prepared by a single-screw extrusion 3D printer. The design of the single-screw extrusion nozzle and the 3D printing of the film are the main subjects studied here.

Keywords Wide band gap semiconductors · Industry · Nano diamond thin film · Single-screw extrusion · 3D printing
1 Introduction

With the unprecedented progress of modern science and technology, the emergence of nanomaterials has driven the development of many industries. Nanomaterials have become the most promising materials of the twenty-first century, and have also been
X. Zhang
School of Information Engineering, North Minzu University, Yinchuan, China
K. Li · J. Chu
School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, China
S. Wei (B)
School of Electronics and Information Engineering, North Minzu University, Yinchuan 750021, China
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
Y. S. Shmaliy and A. Nayyar (eds.), 7th International Conference on Computing, Control and Industrial Engineering (CCIE 2023), Lecture Notes in Electrical Engineering 1047, https://doi.org/10.1007/978-981-99-2730-2_96
widely used in clean energy, superconductors, alloys, etc. [1]. Among them, nano-diamond thin films stand out because of their many excellent properties: in addition to small particle size and large specific surface area, they offer self-cleaning behavior, stable chemical properties, high-temperature resistance and corrosion resistance [2]. In order to print nano-diamond slurry, the nozzle of the 3D printer must be optimized [3] so that it meets the printing requirements of nano-diamond slurry materials. To solve these problems, scholars have proposed solutions based on screw extrusion. In terms of structural design, there are different screw structures, such as the screw-and-barrel structure with a strong shear section in shear-type single-screw extruders, and the coaxial variable-speed single-screw extruder with planetary gears. With the spread of single-screw technology, it has gradually been applied to the extrusion structures of 3D printers. Qin designed a new 3D-printing technology with symmetrical intermeshing twin-screw extrusion for mixed food materials such as cassava flour and corn flour [4]. Ren optimized the structure of a single-screw extrusion 3D-printer nozzle using the two methods of heat insulation and heat dissipation [5, 6]. Li developed a screw-extrusion device with process analysis for 3D printing precursor ceramic materials [7]. Based on the excellent characteristics of the single-screw extruder, nano-diamond films are printed here using the nano-diamond film preparation materials and the optimized 3D printer.
2 Structure Design and Simulation Analysis of Single-Screw Extrusion

2.1 Structural Design of the Single-Screw Nozzle

Optimizing the nozzle structure is one of the best ways to expand the application range of desktop 3D printers. The optimized nozzle should not only perform the basic work of the printer but also handle the more complex functions of slurry transport, heating, heat dissipation, extrusion and mixing [8]. Our laboratory designed a new single-screw grooved-film 3D-printer nozzle for fluid materials and used SolidWorks software for modeling and simulation; the overall structure is shown in Fig. 1. The nozzle is composed of four parts: the feeding structure, mixing structure, conveying structure and extrusion structure. Compared with a traditional nozzle, the main advantage of the new nozzle is that it overcomes the traditional nozzle's inability to print slurry. The working principle of the screw nozzle is as follows: a motor-controlled screw rod conveys the material to the printer port, fully mixing the slurry during transport so that the material is evenly heated, and avoiding any pretreatment of the slurry; the screw-rod transport not only overcomes slurry blocking but also realizes the controllable flow rate
Fig. 1 Overall structure of 3D printer nozzle
of the slurry material. The nozzle feeding structure is composed of two inlet slurry tanks, which can simultaneously transfer the same or different materials into the slurry tank; the slurry then enters the conveying section through the guide rod. During extrusion, once the material enters the barrel, the rotating screw subjects it to the joint action of the screw flight, the screw groove and the inner wall of the barrel, so that the material is conveyed, compacted, melted, sheared, mixed and pressurized, ensuring that the slurry is uniformly heated when it reaches the extrusion head. In order to realize large-area regular manufacturing of film-type paste, a slotted film extrusion head is adopted.
2.2 Simulation Analysis

The screw was modeled in SolidWorks. Using Boolean operations, the flow field was divided into four regions: the fluid region, the spiral region, the inlet and the outlet; the mesh was then generated and joined with interface surfaces, as shown in Fig. 2 [7]. The solid model was imported into ANSYS through its SolidWorks interface. The fluid in the channel is nano-diamond paste, prepared in the following ratio (weighed on a microbalance): 0.20 g of nano-diamond powder and 0.1 g of ethyl cellulose in 12.5 mL of terpineol in a cup, fully stirring the nano-diamond
Fig. 2 The nozzle extrusion flow channel
X. Zhang et al.
powder and ethyl cellulose together. The slurry parameters were set as follows: terpineol viscosity 6.4 mPa·s, density 0.964 g/cm³; ethyl cellulose viscosity 9.8 mPa·s, density 1.07 g/cm³; nano-diamond density 3.1 g/cm³, viscosity 0. The inlet flow rate and outlet pressure were set to 0.01 m/s and 0 Pa, respectively.
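As a quick consistency check on the paste recipe and densities above, a short script (an illustrative sketch, not part of the original work) can convert the stated amounts into mass fractions:

```python
# Consistency check for the nano-diamond paste recipe in Sect. 2.2.
# Input values are taken directly from the text; the script itself is illustrative.

ND_MASS_G = 0.20           # nano-diamond powder, g
EC_MASS_G = 0.10           # ethyl cellulose (binder), g
TERPINEOL_VOL_ML = 12.5    # terpineol (solvent), mL
TERPINEOL_DENSITY = 0.964  # g/cm^3, as set in the simulation

terpineol_mass = TERPINEOL_VOL_ML * TERPINEOL_DENSITY  # mL * g/cm^3 = g
total_mass = ND_MASS_G + EC_MASS_G + terpineol_mass

print(f"EC : ND mass ratio = {EC_MASS_G / ND_MASS_G:.2f}")  # 0.50, i.e. 1:2
print(f"solids mass fraction = {(ND_MASS_G + EC_MASS_G) / total_mass:.2%}")
```

The 1:2 binder-to-diamond ratio matches the preparation procedure described in Sect. 3, and the low solids loading is consistent with a pumpable, screw-extrudable paste.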
2.3 Simulation Results

The simulated device is a single-screw grooved-film 3D printer nozzle designed specifically for slurry materials. Numerical simulation and optimization of the nozzle led to the following conclusions. The screw pitch most suitable for slurry extrusion is 6 mm. Replacing the traditional circular nozzle with a rectangular one yields higher printing accuracy and more regular film formation; data analysis shows that an extrusion opening 1 mm long and 0.5 mm wide improves film quality while maintaining extrusion efficiency. Finally, fluid–structure coupling and modal analyses show that the single screw has stable structural characteristics and good dynamic performance, meeting the nozzle design requirements.
3 Preparation of the Nano-Diamond Thin Film

Our laboratory prepares nano-diamond thin films by 3D printing; the first step is preparing the nano-diamond slurry. The main experimental materials are nano-diamond powder, terpineol, ethyl cellulose, anhydrous ethanol (concentration greater than 99%), deionized water and a microbalance.

Substrate pretreatment. To ensure the adhesion of the nano-diamond film to the substrate, the substrate is first polished with coarse carborundum or abrasive paper; the glass is then cleaned thoroughly with anhydrous ethanol and deionized water to remove impurities and contaminants from its surface, and finally placed in a clean area until use.

Slurry preparation. To ensure the uniformity and continuity of the prepared film, the nano-diamond (particle size 50 nm) is pre-ground to disperse its aggregates and then weighed on a microbalance. Ethyl cellulose and nano-diamond are mixed in a 1:2 mass ratio as the solute, with terpineol as the solvent; in this experiment, 0.1 g of ethyl cellulose, 0.2 g of nano-diamond and 12 mL of terpineol were used. The mixture is heated and stirred in a magnetic stirrer for 30 min, and the resulting nano-diamond slurry is then cooled naturally to room temperature.

Film preparation. The nano-diamond thin film can be produced conveniently with a 3D printer, which combines single-screw extrusion with layer-by-layer fabrication. The 3D printer is shown in Fig. 3. The printer is mainly composed
Fig. 3 Desktop FDM 3D printer
Fig. 4 Single-screw extruder
of printer base 1, hot bed 2, Z axis 3, X axis 4, nozzle 5, and nozzle 6. The physical single-screw extruder is shown in Fig. 4 [9]. The printer prints the model according to the G-code; the slurry in the slurry tank is rotated and extruded by the single screw, and during extrusion it is fully mixed and evenly heated until printing is complete. After the film is formed, it is thermally sintered, which has two main functions: it makes the film on the substrate surface more uniform and flat, and it fully decomposes and evaporates the ethyl cellulose on the film surface, improving the transmittance and self-cleaning ability of the nano-diamond film. The sintering process is divided into three stages: multi-stage heating, multi-stage constant temperature, and cooling. The sample is first kept at room temperature for 25 min, then heated to 323 K for 50 min, then to 373 K for 100 min, then to 473 K for 100 min, and finally cooled naturally to room temperature. The sintering curve is shown in Fig. 5. Multi-stage sintering yielded a nano-diamond thin film with a uniform, flat surface. Figure 6 shows an image of the film at the 20 nm scale; it can be seen that a continuous film formed at the above sintering temperatures.
Fig. 5 Sintering curve
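The staged sintering schedule can be written out as a simple time–temperature table. This is an illustrative sketch: the segment durations and hold temperatures follow the text, the room-temperature value of 298 K is our assumption, and the final natural-cooling stage is omitted because its duration is not specified in the paper.

```python
# Sintering schedule from the text: 25 min at room temperature (~298 K, assumed),
# then 50 min at 323 K, 100 min at 373 K, and 100 min at 473 K,
# followed by natural cooling (duration unspecified, so omitted here).

ROOM_TEMP_K = 298  # assumed value for "room temperature"

segments = [  # (duration in minutes, hold temperature in K)
    (25, ROOM_TEMP_K),
    (50, 323),
    (100, 373),
    (100, 473),
]

elapsed = 0
for minutes, temp_k in segments:
    print(f"t = {elapsed:3d}-{elapsed + minutes:3d} min: hold at {temp_k} K")
    elapsed += minutes

print(f"total programmed time before cooling: {elapsed} min")  # 275 min
```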
Fig. 6 Nanodiamond thin film at 20 nm
4 Conclusion

This work analyzed the single-screw grooved-film 3D printer in detail and obtained optimal design values through simulation. The nozzle structure was optimized by simulation: with a screw pitch of 6 mm, the slurry resides longer in the single screw and is stirred more evenly; the traditional circular extrusion port was replaced with a rectangular grooved-film outlet, chosen to be 1 mm long and 0.5 mm wide; finally, fluid–structure coupling and modal analyses of the single screw were carried out. For the preparation of the nano-diamond thin film, the slurry preparation and film sintering processes were analyzed; the film model was then drawn, the printer parameters were set, a printing test of the nano-diamond film was carried out, and the printed film was examined by scanning electron microscopy.
References

1. Hu, X.J., Zheng, Y.H., Chen, C.K., et al.: Study on doping, surface/interface regulation and properties of nano-diamond films. J. Artif. Cryst. 51(05) (2022)
2. Yang, X.C., Zhang, X.X., Yang, Y.: Research of nano-diamond film prepared on solar cell window. In: IEEE International Conference on Nanotechnology, Beijing, China, pp. 5–8 (2013)
3. Liu, Y.H., Xiao, J.J., Fan, R.R., Cheng, G.Y., Qi, Y.S., Lin, D.L.: Status quo and analysis of 3D printing of creative products based on "internet plus". Mech. Eng. 14(06), 68–70 (2021)
4. Qin, J.J.: Research on symmetrical meshing twin-screw 3D food printer. Guangxi University (2016)
5. Ren, L., et al.: Optimization design and temperature field analysis of single-screw extrusion 3D printer. Plastic Indust. 48(12), 90–95 (2020)
6. Ren, L., Bai, H.Q., Bao, J., Jia, S.K., Qin, W., An, Y.W.: Structural design and simulation analysis of screw extrusion 3D printer. China Plast. 35(04), 98–105 (2021)
7. Li, Y.G., Ye, X.M., Ji, H.C., Zhang, X.J., Zheng, L.: Design and optimization of screw extruder for 3D printer of precursor ceramic materials. J. Beijing Univ. Technol. 45(12), 1173–1180 (2019)
8. Dai, Z.Y., Shao, H., Wang, Z.L., et al.: Study on the characteristics of film formation in the pressurized channel of single-screw compressor. J. Xi'an Jiaotong Univ. 1–8
9. Su, Q.R., Li, S.W., Zhang, Q.Y.: Design and manufacture of a desktop 3D printer. Electron. World 629(23), 174–175 (2021)
Design of Automobile Lubricating Oil Storage Tank with Pretreatment Chamber

Haiyu Li
Abstract Based on practical application, this paper presents a design for an automobile lubricating-oil storage tank with a pretreatment chamber, in the field of lubricating-oil storage devices. The storage devices in wide use today lack a pretreatment chamber, so impurities that may be present in the lubricating-oil raw material cannot be blocked; the impurities enter and clog the fine lubrication pipelines, and normal oil supply from the storage device cannot be guaranteed. Addressing these shortcomings of the existing technology, this design provides an automotive oil storage tank with a pretreatment chamber, consisting of a containment box and an anti-collision, shock-absorbing sleeve fitted around it, which blocks impurities in the lubricating-oil raw material and ensures normal oil supply from the storage box. A liquid-level indicator board is mounted on the side of the containment box, with a window in the anti-collision damping sleeve through which the board can be observed; the oil volume can thus be monitored, effectively preventing overflow caused by an unobservable liquid level when filling the tank with lubricating oil, which is of practical significance.

Keywords Pretreatment chamber · Automobile lubricating oil · Storage box · Impurities · Anti-collision shock-absorbing sleeve
1 Introduction

A lubricant is a liquid or semi-solid substance used between two relatively moving objects to reduce the friction and wear caused by contact between them. Lubricating oil is used mainly for lubrication, cleaning, rust prevention, auxiliary cooling, sealing and buffering, and is chiefly applied in various types of automobiles

H. Li (B) School of Mechanical and Electrical Engineering, Weifang Vocational College, Weifang, Shandong, China e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 Y. S. Shmaliy and A. Nayyar (eds.), 7th International Conference on Computing, Control and Industrial Engineering (CCIE 2023), Lecture Notes in Electrical Engineering 1047, https://doi.org/10.1007/978-981-99-2730-2_97
and mechanical equipment to reduce friction. The lubricating-oil storage tank is the container that holds the finished lubricating oil, facilitating the storage of large quantities of oil [1]. Although many kinds of lubricating-oil storage tanks are on the market, problems remain in their use: they have no internal pretreatment chamber, so impurities that may be present in the raw oil cannot be blocked; the impurities enter and clog the fine lubrication pipelines, normal oil supply from the storage device cannot be guaranteed, and there are also certain safety risks during transportation. In view of these deficiencies of the existing technology, the technical problem addressed in this paper is as follows: a lubricating-oil storage tank with a pretreatment chamber is provided to block the impurities that may be present in the raw material, so that they cannot enter and clog the fine lubrication pipelines and normal oil supply is assured. The product is of double-layer design, with an anti-collision, shock-absorbing sleeve installed externally, which effectively protects the tank from impact and vibration while the pretreatment chamber blocks impurities and ensures normal oil supply from the storage tank.
2 Design Scheme

To solve the above technical problems, the scheme for the automobile oil storage tank with a pretreatment chamber is as follows. The design comprises a containment box and an anti-collision, shock-absorbing sleeve fitted on its outside. The bottom of the containment box carries a suction pipe that passes through the anti-collision damping sleeve. The central top plate of the box is formed with a downward-recessed first cylindrical cylinder, whose upper end is connected with a sealing cover [2]; the first cylinder and the sealing cover together enclose the pretreatment chamber. The side of the sealing cover carries a connecting plate with an external thread, and the upper inner wall of the first cylinder has a matching internal-thread groove. The center of the sealing cover is formed with a downward-recessed second cylindrical cylinder, and a refueling pipe is fixed to the cover body outside the second cylinder. A pull rod is screwed into the bottom plate of the second cylinder; its upper end carries a head, and its lower end is rotatably retained in the upper part of a connecting block. The lower end of the connecting block connects, through the bottom plate of the first cylinder, to a movable column secured by a fixing screw. The lower part of the movable column contains a three-way hole, composed of two transverse holes and one vertical hole connected to each other. The bottom plates of the first and second cylindrical cylinders are symmetrically provided with
annular shallow grooves, and the two grooves jointly clamp a cylindrical filter cylinder for intercepting impurities.
3 Specific Implementation

The automobile oil storage tank with pretreatment chamber is shown in Figs. 1, 2 and 3: Fig. 1 is the first use-state diagram, Fig. 2 the second use-state diagram, and Fig. 3 the structure diagram. The parts denoted by the labels in Figs. 1, 2 and 3 are: 1 containment box, 2 anti-collision and shock-absorbing sleeve, 3 suction pipe, 4 first cylindrical cylinder, 5 sealing cover, 6 second cylindrical cylinder, 7 refueling pipe, 8 pull rod, 9 connecting block, 10 movable column, 11 three-way hole, 12 cylindrical filter cylinder, 13 liquid-level indicator board, 14 window.

The storage tank includes a containment box (1) and a refueling pipe (7). Its main feature is that the containment box (1) carries an external anti-collision, shock-absorbing sleeve (2); the right outer wall of the box (1) carries a liquid-level indicator board (13), over which a window (14) is provided. The top of the containment box (1) is inlaid with a first cylindrical cylinder (4), and the top
Fig. 1 Schematic diagram of the first use state of the automobile lubricating oil storage tank with pretreatment chamber
Fig. 2 The second use state diagram of automobile lubricating oil storage tank with pretreatment chamber
Fig. 3 Structure diagram of automotive oil storage tank with pretreatment chamber
of the inner wall of the first cylindrical cylinder (4) is connected with the sealing cover (5); the refueling pipe (7) passes through the top right of the sealing cover (5). A second cylindrical cylinder (6) is inlaid in the middle of the sealing cover (5), and a pull rod (8) runs through the middle of the second cylinder (6). The bottom of the pull rod (8) passes through the second cylinder (6) and is clamped inside the connecting block (9) beneath the sealing cover (5); the top of the movable column (10) is screwed to the underside of the connecting block (9), and the bottom of the movable column (10) contains a three-way hole (11). A cylindrical filter cylinder (12) is provided beneath the sealing cover (5), arranged inside the first cylinder (4), and a suction pipe (3) is provided on the right surface of the containment box (1). The external anti-collision damping sleeve (2) is made of rubber, and its length is greater than two-thirds of the height of the containment box (1). A liquid-level indicator board (13) is arranged on the side of the containment box (1), and a window (14) is provided in the anti-collision, shock-absorbing sleeve (2) for convenient observation of the board (13).
The central top plate of the containment box (1) is formed with the downward-recessed first cylindrical cylinder (4), whose upper end is connected with the sealing cover (5); the first cylinder (4) and the sealing cover (5) enclose the pretreatment chamber [3]. The outer wall of the sealing cover (5) is threaded to both the inside of the first cylinder (4) and the outside of the pull rod (8), and the highest point of the sealing cover (5) is level with the highest point of the containment box (1). The side of the sealing cover (5) carries a connecting plate with an external thread, and the upper inner wall of the first cylinder (4) carries a matching internal-thread groove. The center of the sealing cover (5) is formed with the second cylindrical cylinder (6), and the refueling pipe (7) is fixed to the cover body outside the second cylinder (6). The bottom plate of the second cylinder (6) is screwed to the pull rod (8); the upper end of the pull rod (8) carries a head, and its rotating lower end is retained in the upper part of the connecting block (9). The lower end of the
connecting block (9) is connected, through the bottom plate of the first cylindrical cylinder (4), to the movable column (10) by a fixing screw. The head at the top of the pull rod (8) is inlaid with a hexagonal socket, and the diameter of the pull rod (8) is greater than the diameter of the head. The lower end of the pull rod (8) carries a T-shaped rotary head, and the upper part of the connecting block (9) has a matching T-shaped rotary slot. Balls (15) are evenly distributed at the bottom of the pull rod (8), and a first clamping block (16) is integrated on the outside of the pull rod (8). The first clamping block (16) sits below the second clamping block (17), which is integral with the inner wall of a reserved slot (18) embedded in the top of the connecting block (9). The lower part of the movable column (10) contains the three-way hole (11), composed of two transverse holes and one vertical hole connected to each other; the center line of the vertical hole coincides with the axis of the movable column (10). When the pull rod (8) is rotated upward, the connecting block (9) drives the movable column (10) to lift, and the inner cavity of the first cylinder (4) communicates with the inner cavity of the containment box (1) through the three-way hole (11); the refueling pipe (7) can then be used for normal refueling. The bottom plates of the first cylinder (4) and the second cylinder (6) carry symmetric annular shallow grooves, which jointly clamp the cylindrical filter cylinder (12) for intercepting impurities. A shaft seal is fitted where the movable column passes through the bottom plate of the first cylinder (4).
The cylindrical filter cylinder (12) is made of oil-resistant stainless steel and is perforated with filter holes [4]. In this structure, the first cylinder (4) and the sealing cover (5) enclose the pretreatment chamber; the second cylinder (6) and the first cylinder (4), located in the middle of the sealing cover (5), jointly hold the cylindrical filter cylinder (12); and the lifting mechanism formed by the pull rod (8) and connecting block (9) drives the displacement of the movable column (10), forming the refueling control switch. In this way, impurities that may be present in the lubricating-oil raw material are blocked, ensuring normal oil supply from the storage tank.
4 Design Advantages

This paper adopts the technical scheme of an automobile oil storage tank with a pretreatment chamber, whose advantages are as follows. In the designed tank, when the pull rod is rotated upward, the connecting block drives the movable column to lift, and the inner cavity of the first cylindrical cylinder communicates with the inner cavity of the
containment box through the three-way hole; the refueling pipe can then be used for normal refueling. The head at the upper end of the pull rod carries an internal hexagonal socket; a shaft seal is fitted where the movable column passes through the bottom plate of the first cylinder; the cylindrical filter cylinder is made of oil-resistant stainless steel and is perforated with filter holes; and the side of the containment box carries a liquid-level indicator board, with a window in the anti-collision damping sleeve through which the board can be observed. The oil volume can thus be monitored, preventing the overflow of lubricating oil caused by an unobservable liquid level when filling the tank. The structural design is rational: the first cylindrical cylinder and the sealing cover enclose the pretreatment chamber; the second cylinder and the first cylinder, located in the middle of the sealing cover, grip the cylindrical filter cylinder; and the lifting mechanism formed by the pull rod and connecting block drives the movable column to form the refueling control switch, blocking impurities in the raw material and ensuring normal oil supply from the tank. The outer wall of the sealing cover is threaded to the pull rod and to the first cylinder; this threaded connection allows the sealing cover to be removed easily, which in turn makes replacing the cylindrical filter cylinder convenient, so the pretreatment function of the whole storage box is well maintained.
The comparative advantages of the automobile lubricating-oil storage tank with pretreatment chamber over an ordinary lubricating-oil storage tank are shown in Table 1.

Table 1 Advantages of the automobile lubricating-oil storage tank with pretreatment chamber compared with an ordinary lubricating-oil storage drum

| Advantage | Automotive oil storage tank with pretreatment chamber | General lubricating-oil storage drum |
|---|---|---|
| Collision protection and shock absorption | Anti-collision damping sleeve protects against impact and vibration | Prone to deformation |
| Filtering of impurities | Pretreatment chamber filters impurities through a cylindrical filter cylinder | Pipes easily clog when pumping oil |
| Level indication | Liquid-level indicator board allows the oil quantity to be observed visually | Oil volume cannot be measured |
5 Conclusion

This paper presents the design of an automotive oil storage tank with a pretreatment chamber. The structure mainly comprises the storage tank and the anti-collision, shock-absorbing sleeve fitted on its outside. It meets the relevant national standards and blocks impurities that may be present in the lubricating-oil raw material, ensuring normal oil supply from the storage tank. Testing confirmed that the product blocks such impurities and guarantees normal oil supply, giving it practical significance and market prospects.
References

1. Xiao, S.C.: Application of vacuum filters for lubricating oil. Architect. Eng. Technol. Des. 32, 946 (2014)
2. Hu, J.P., Wang, N.X.: Research and development of a silting soil extractor. Prospect. Eng. (Geotech. Drill. Excavat. Eng.) 1, 55–59+84 (2015)
3. Li, H.Y.: An automobile lubricating oil storage tank with a pretreatment chamber: China, ZL 202010772700.8 (2022)
4. Zhang, J.X.: Air Pollution Monitoring Environmental Monitoring. China Light Industry Press, Beijing (2006)
Research on Multi UAV Monitoring Interface Based on "Task-Situation-Operator"

Hao Zhang, Fengyi Jiang, Zefan Li, Qingbiao Kuang, and Yuan Zhong
Abstract This paper analyzes the three major problems affecting multi-UAV monitoring. First, from the perspective of human–machine authority, the degree of automation and the complexity of each operation item under typical combat missions are analyzed for UAV systems. Then, taking four-UAV monitoring as an example, a cognitive information architecture for multi-UAV monitoring is designed; combined with monitoring and evaluation of operators' physiological data, an operator intention recognition model is established, and the human–computer interface is designed with spatial page-layout switching and intelligent time-domain page display. Finally, a multi-UAV monitoring demonstration and verification environment is built for simulation testing. The results show that the human–machine interface designed in this paper is reasonable and feasible: it meets the monitoring and control requirements of four-UAV monitoring, reduces the operator's decision-making time, and improves human–machine interaction efficiency.

Keywords Multi-UAV monitoring · Human–machine authority · Space-domain interface · Time-domain interface
1 Introduction

Over the years, the US military has been developing the concepts of Multiple UAV Control (MAC) and Multiple UAV Management (MAM). Using the US military's MQ-1 "Predator" UAV, General Atomics designed the MAC ground station in 2005, which enabled two flight operators and four payload operators to monitor four Predator UAVs simultaneously, as shown in Table 1, laying the foundation for multi-aircraft monitoring from a single ground station.

H. Zhang · F. Jiang · Z. Li · Q. Kuang (B) · Y. Zhong China National Aviation Radio Electron Institute, Shanghai, China e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 Y. S. Shmaliy and A. Nayyar (eds.), 7th International Conference on Computing, Control and Industrial Engineering (CCIE 2023), Lecture Notes in Electrical Engineering 1047, https://doi.org/10.1007/978-981-99-2730-2_98
Table 1 Development of the MQ-1 Predator

| Type | Duty | Personnel quantity | UAV quantity |
|---|---|---|---|
| MQ-1 "Predator" (standard mode) | Flight operator | 1 | 1 |
| | Mission operator | 2 | |
| MQ-1 "Predator" (MAC ground control station, one-station four-UAV verification) | Main flight operator | 1 | 4 |
| | Co-flight operator | 1 | |
| | Mission operator | 4 | |
From 2009 to 2012, the US Air Force laboratory also carried out the multi-UAV monitoring interface research project MUSCIT (Multi-UAV Supervisory Control Interface Technology). From previous test projects, researchers gradually recognized that three major problems strongly affect multi-UAV monitoring. First, the intelligence level of UAVs is relatively low, and ground-station control relies mainly on traditional touch interaction, depending on static and dynamic clicks to issue command and control. Second, the interface design is primarily spatial [1], which occupies a great deal of display space. Third, time-domain interface design is underdeveloped: the operator manually switches pages and searches for information, which increases the operator's workload [2]. These three problems restrict the realization of one-station multi-UAV monitoring.
2 Man-Machine Authority Allocation

Following China's GJB/Z 134A, "Ergonomics implementation procedures and methods", the typical operating processes and operation items for an operator monitoring UAVs are sorted out, and the operations under typical UAV missions are scored for degree of automation, complexity, intensity, accuracy and monotonicity according to the residual allocation principle. Automation, complexity and intensity are scored 1–5 from low to high; accuracy and monotonicity are scored 1–5 from high to low. Operation items whose five scores total 12 points or less are suitable for allocation to the human; those totaling more than 12 points are suitable for allocation to the machine. At the same time, for mission safety, operations involving software login, instruction sending, weapon delivery, emergency handling, etc. must be allocated to the human. Table 2 shows the man-machine operation load analysis for typical scenarios.
Table 2 Analysis of man-machine operation load in typical scenes

| Operation item | Automation | Complexity | Load | Accuracy | Repeatability | Importance | Allocation |
|---|---|---|---|---|---|---|---|
| Power on | 5 | 1 | 1 | 1 | 5 | – | Machine |
| Self-inspection | 5 | 1 | 1 | 1 | 5 | – | Machine |
| Login | – | – | – | – | – | Yes | Man |
| Mission planning | – | – | – | – | – | Yes | Man |
| Mission assignment | 5 | 1 | 1 | 1 | 5 | – | Machine |
| Simulation training | 3 | 3 | 4 | 5 | 2 | – | Man & machine |
| Preparation | 5 | 3 | 3 | 2 | 5 | – | Machine |
| Static inspection | 5 | 2 | 2 | 5 | 5 | – | Machine |
| Dynamic inspection | – | – | – | – | – | Yes | Man |
| Taxi command | – | – | – | – | – | Yes | Man |
| Taxi monitoring | 4 | 3 | 2 | 3 | 3 | – | Machine |
| Link monitoring | 4 | 3 | 2 | 3 | 3 | – | Machine |
| Flight surveillance | 4 | 2 | 2 | 2 | 3 | – | Machine |
| Flight control | 4 | 5 | 5 | 5 | 2 | – | Machine |
| Handover | 2 | 3 | 3 | 4 | 2 | Yes | Man |
| Sub-link handover | 3 | 3 | 3 | 3 | 4 | – | Man & machine |
| Main link handover | 3 | 3 | 3 | 3 | 4 | – | Man & machine |
| Receive mission instructions | 5 | 1 | 1 | 2 | 4 | – | Machine |
| Mission load self-inspection | 5 | 2 | 3 | 5 | 2 | – | Machine |
| Mission load control | – | – | – | – | – | Yes | Man |
| Image interpretation | – | – | – | – | – | Yes | Man |
| Target identification | 4 | 3 | 3 | 4 | 3 | – | Machine |
| Aim | 4 | 3 | 2 | 4 | 4 | – | Machine |
| Judge attack conditions | 4 | 3 | 2 | 4 | 4 | – | Machine |
| Weapon delivery | – | – | – | – | – | Yes | Man |
| Display data | 5 | 2 | 2 | 4 | 4 | – | Machine |
| Damage assessment | 3 | 3 | 3 | 3 | 3 | – | Man & machine |
| Strike again or return | – | – | – | – | – | Yes | Man |
| Send return order | – | – | – | – | – | Yes | Man |
| Return flight monitoring | 4 | 1 | 2 | 2 | 4 | – | Machine |
| Apply for landing | 2 | 3 | 2 | 4 | 3 | – | Man |
| Perform landing | 4 | 3 | 3 | 4 | 4 | – | Machine |
| Power off | 2 | 1 | 1 | 4 | 5 | – | Man |
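The allocation rule described in Sect. 2 (safety-critical items always go to the human; otherwise a five-score total over 12 goes to the machine) can be sketched as follows. This is an illustrative reimplementation, not code from the paper; function and parameter names are our own, and the binary rule deliberately omits the mixed "Man & machine" allocations that appear in the paper's table.

```python
# Sketch of the residual-allocation rule from Sect. 2 (illustrative, not from the paper).
# Each operation item has five 1-5 scores; safety-critical items always go to the human.

def allocate(scores, safety_critical=False):
    """Return 'Man' or 'Machine' for one operation item.

    scores: 1-5 ratings for automation, complexity, load, accuracy, repeatability.
    safety_critical: True for login, instruction sending, weapon delivery, etc.
    """
    if safety_critical:
        return "Man"  # mission-safety override
    return "Man" if sum(scores) <= 12 else "Machine"

# Examples drawn from Table 2:
print(allocate((5, 1, 1, 1, 5)))           # "Power on": total 13 -> Machine
print(allocate((), safety_critical=True))  # "Weapon delivery" -> Man
```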
3 Design of the Multi-UAV Monitoring Interface

Taking four-UAV monitoring as an example, this paper designs a cognitive architecture for multi-UAV monitoring, establishes an intention recognition model based on that architecture, and optimizes the spatial and time-domain interfaces, thereby realizing one-station control of four UAVs.
3.1 Multi-UAV Monitoring Cognitive Architecture Model

Based on Wickens' information-processing model, Klein's recognition-primed decision (RPD) model and the perceptual cycle model (PCM), this paper proposes a cognition and decision-making model for operators of a multi-UAV monitoring system, as shown in Fig. 1. The operator's monitoring behavior is determined by the operator's intention, which arises from both bottom-up and top-down processes. The bottom-up process is driven by the task and situational states in the system: according to the needs of the current task and situation, the operator forms possible intentions while monitoring the current display-and-control system (cognitive processing such as perceiving, understanding and predicting information), and these intentions are translated into actual monitoring and operation behaviors with the support of the operator interface. The top-down process is decided by the operator: in a given task and situation, the operator forms a specific intention from memory, experience and habit, actively seeks the relevant system information for perception, understanding and prediction, and operates when necessary.
Fig. 1 The operator cognitive framework model we proposed (top-down and bottom-up information analysis linking memory/experience, task status, situation status, operation intention, the interface, and the environment/system)
3.2 "Task-Situation-Operator" Intention Recognition Model

Intention Prediction Model Based on Task and Situation. The intention behind the operator's behavior is determined by the task, the situation, and the operator himself. The task, situation, and operator influence intention through two processes, top-down and bottom-up, as shown in Fig. 2. The intention prediction model based on task and situation is derived from the relationship between the task-situation matrix and the set of possible intentions. For example, the task conditions contained in a typical UAV scenario can be sorted into a set T = {T1, T2, T3, ..., Tm}. Similarly, the situational conditions can be arranged into a set S = {S1, S2, S3, ..., Sn}. A matrix M can be formed from T and S. Each element of the matrix can be associated with one or more operational actions; therefore, each element can contain one or more decision ladders and operational strategies. Further, according to the decision ladder and operation strategy in each task-situation matrix element, the intention set I = {I1, I2, I3, ..., Ik} that may arise under different operation strategies is determined through an expert-experience analysis, and one or more intention elements can correspond to a human-computer interface.

Fig. 2 "Task-situation-operator" intention recognition (inputs: operator state such as load, emotion, stress, habits, and proficiency; situation factors such as enemy movements, climate, and equipment status; and the task phase and monitoring requirements)

Prediction Model Based on Operator-Specific Intention. Operator-specific intention prediction is a multivariate machine learning model comprising three blocks: an input layer, a model layer, and an output layer. The model is a multiple-input model. Because the model layer considers the dynamic change process of scenario tasks, it
adopts a procedural hybrid model algorithm. Its input can be divided into three parts: (1) the operator login data set: the proficiency, operation preferences, and previous operation-behavior indicators entered by the operator at login; (2) implicit feedback: including eye-movement patterns, physiological parameters, and previous operation records [3]; (3) the intention prediction model based on task and situation: this input is a dynamically switched page. Implicit feedback is calculated on the same page, so the model is both an operator-specific intention prediction model and a process model. The model layer uses a fuse-then-self-learn algorithm: it fuses the scenario, task, display feedback, and implicit feedback into one model and extracts features from the data through cluster analysis during self-learning training to form a unified push result. The output layer is an operator-specific intention recognition model that combines the operator state representation (display feedback, implicit feedback) with the situational task representation.
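As a concrete illustration of the task-situation matrix M and intention set I described above, the lookup can be sketched as follows; every task, situation, and intention name below is a hypothetical example, not taken from the paper.

```python
# Sketch of the task-situation matrix M and intention lookup described above.
# All task, situation, and intention names are hypothetical examples.

tasks = ["takeoff", "cruise", "strike"]        # T = {T1, ..., Tm}
situations = ["normal", "threat"]              # S = {S1, ..., Sn}

# M maps a (task, situation) element to its possible intention subset of I.
M = {
    ("takeoff", "normal"): {"monitor_flight"},
    ("takeoff", "threat"): {"monitor_flight", "abort"},
    ("cruise", "normal"):  {"monitor_flight", "monitor_mission"},
    ("cruise", "threat"):  {"monitor_flight", "evade"},
    ("strike", "normal"):  {"monitor_mission", "damage_assess"},
    ("strike", "threat"):  {"damage_assess", "strike_again_or_return"},
}

def candidate_intentions(task: str, situation: str) -> set:
    """Intention subset for the current task-situation matrix element."""
    return M.get((task, situation), set())

print(candidate_intentions("strike", "threat"))
```

Each returned subset would then select the display elements (and decision ladders) pushed to the human-computer interface.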
3.3 Space/Time Domain Interface Push Optimization

Building on the "task-situation-operator" intention recognition model, the interface is intelligently pushed and optimized in the space and time domains. This addresses two problems: current spatial designs use a static display and control interface that occupies a great deal of display space, while time-domain designs generally require the operator to manually switch pages and search for information, which increases the operator's workload [4].

Task Driven - Spatial Page Layout Switching. Determine the intention subset according to the task scenario model, display the cognitive information supporting that subset, and realize dynamic page switching through information representation design and information layout design. The specific implementation is as follows:

(1) According to the characteristics of UAV monitoring, the display elements are divided into four: flight monitoring, mission monitoring, situation display, and mission planning. According to the size of the actual display resources, the UAV monitoring interface is divided into P × Q preset minimum units, as shown in Fig. 3. The specified interface is adjusted in steps of the preset minimum unit.

(2) Obtain the working stage of the UAV through telemetry information, and define it as the UAV's working mode (navigation mode or mission mode). According to the working mode, dynamically adjust the sizes of the display areas for flight monitoring, mission monitoring, situation display, and mission planning
Research on Multi UAV …
1057
Fig. 3 Minimum unit division of UAV monitoring interface (a P × Q grid of units 1#1, 1#2, …, 1#Q down to P#1, P#2, …, P#Q)
displayed on the UAV monitoring interface. In the pre-flight preparation, take-off, departure, and return-landing stages, the operator's monitoring focuses on the flight status of the UAV, without much attention to mission-load information; the working mode is therefore the navigation mode. In navigation mode, the flight monitoring display area on the UAV monitoring interface is enlarged and the mission monitoring display area is reduced, as shown in Fig. 4. In the mission execution stage, the operator's monitoring focuses on the mission load of the UAV, and the flight-monitoring workload is reduced; the working mode is therefore the mission mode. In mission mode, the flight monitoring display area is reduced and the mission monitoring display area is enlarged, as shown in Fig. 5.

Intention Recognition - Intelligent Push Display of Time-Domain Pages. The specific intention is determined according to the task scenario model, and the cognitive-enhancement display technology supporting that intention is activated to realize the dynamic presentation and push of cognitive information. The specific implementation is as follows:

Fig. 4 Schematic diagram of navigation mode of UAV monitoring interface (enlarged flight monitoring area, alongside situation display, mission planning, and a reduced task monitoring area)

Fig. 5 Schematic diagram of task mode of UAV monitoring interface (enlarged task monitoring area, alongside flight monitoring, situation display, and mission planning)
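The space-domain layout switch described above can be sketched as a simple mode-to-grid mapping; the grid size and the unit counts assigned to each element below are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of the space-domain layout switch: the interface is divided
# into P x Q minimum units and the four display elements are resized by UAV
# working mode. The concrete unit counts are illustrative assumptions only.
P, Q = 6, 4  # preset minimum-unit grid

LAYOUTS = {
    # working mode -> grid rows assigned to each display element
    "navigation": {"flight_monitoring": 3, "mission_monitoring": 1,
                   "situation_display": 1, "mission_planning": 1},
    "mission":    {"flight_monitoring": 1, "mission_monitoring": 3,
                   "situation_display": 1, "mission_planning": 1},
}

def layout_for(mode: str) -> dict:
    """Return the number of minimum units given to each element in this mode."""
    rows = LAYOUTS[mode]
    assert sum(rows.values()) == P  # the assigned rows must exactly fill the grid
    return {name: r * Q for name, r in rows.items()}

print(layout_for("navigation"))
```

Switching the telemetry-derived working mode then amounts to swapping which entry of `LAYOUTS` drives the rendering.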
Obtain the operator's real-time physiological data through the physiological instrument and eye tracker, and, combined with the operation data, define the operator's intention mode (conventional mode or stress mode). According to the intention mode, the eye-movement data are then used to realize the intelligent push display.
4 Verification

To verify the rationality and feasibility of the designed man-machine interaction interface, this paper built a multi-UAV monitoring demonstration and verification system and carried out multi-UAV monitoring simulation tests. The intelligent display and control system with adaptive sensing for autonomous UAV clusters is composed of the adaptive-sensing intelligent display and control software and its host computer, a control device, a physiological parameter acquisition device, and a display device, as shown in Fig. 6. The software consists of a one-station four-aircraft mission drive, an intelligent multi-UAV monitoring control module, and an adaptive intelligent push display module. The physiological parameter acquisition equipment and control devices are connected to the system as inputs. The final mission planning, flight monitoring, and mission monitoring interfaces are shown on three displays. The verification environment is shown in Fig. 7.

Fig. 6 Composition of intelligent multi UAV monitoring system with adaptive sensing (mouse and keyboard connected over USB to the control device; eye tracker and physiological instrument connected over the network to the host computer; the software modules drive three displays over DVI for mission planning, flight monitoring, and mission monitoring)

Fig. 7 Demonstration and verification of intelligent display based on adaptive perception

Table 3 shows the experimental comparison between the designed adaptive prototype interface system and a conventional ground station. By comparing the experimental data of the conventional ground station design and the adaptive prototype
interface, it can be seen that the adaptive-sensing intelligent interface prototype optimizes the operation logic and reduces the number of operations in the conventional mode compared with the traditional ground station design. Under stress, the number of operations and the operation time are improved even more markedly. When dynamic time-domain cognitive-enhancement interaction is used, the operator's operation accuracy in typical missions exceeds 90%, and the average completion time of typical missions is shortened by more than 30% compared with the traditional static interface. The experimental results show that the multi-UAV monitoring cognitive architecture, the operator intention recognition model, and the multi-UAV man-machine monitoring interface designed in this paper, with spatial page-layout switching and time-domain intelligent push display, optimize the operation logic of multi-UAV monitoring and determine the specific intention according to the mission scene model. This reduces the operator's workload, lets the operator pay more attention to mission information, and promotes the operator's transformation from a UAV operator to a UAV system commander.

Table 3 Verification evaluation form

| Operation | Conventional: keys, normal (times) | Conventional: keys, stress (times) | Conventional: time, normal/stress (s) | Adaptive: keys, normal (times) | Adaptive: keys, stress (times) | Adaptive: time, normal/stress (s) |
|---|---|---|---|---|---|---|
| Offline planning | 30+ | 50+ | 300/300 | 30+ | 40+ | 100/200 |
| Preparation before takeoff | 40+ | 70+ | 360/500+ | 30+ | 40+ | 240/260+ |
| Four planes take off | 20 | 40+ | 40/100+ | 8 | 30+ | 20/40+ |
| Assemble and sail | 20 | 40+ | 40/100+ | 8 | 30+ | 20/40+ |
| Target search | 60+ | 80+ | 360/500+ | 30+ | 60+ | 200/260+ |
| Cooperative reconnaissance | 60+ | 80+ | 360/500+ | 30+ | 60+ | 200/260+ |
| Landing | 20 | 40+ | 40/100+ | 8 | 20+ | 20/40+ |
| Total | 250+ | 400+ | 1500/2100+ | 144+ | 280+ | 800/1100+ |
5 Future Prospects

In the future, with improvements in the autonomy of UAV swarms and in the intelligence of human-machine interaction, multi-UAV monitoring will show the following development trends.
5.1 The Operator Shifts from Focusing on Flight Information to Focusing on Mission Information

Monitoring multiple UAVs from one seat is not simply the superimposition of the information of several UAVs. In the one-seat multi-UAV mode, the operator focuses on the work results of the UAV swarm rather than on the work process. The operator is only responsible for issuing orders, reviewing plans, and monitoring missions: no longer the pilot of a UAV and monitor of a single aircraft, the operator becomes the monitor of the multi-UAV group mission.
5.2 Reduced Redundancy of Seat Display Resources

The autonomous control and decision-making of the UAV reduce the information the operator needs to attend to and enable efficient use of display resources. Integrated information display, dynamic display-resource allocation, and similar technologies can meet the operator's information needs even as display-resource redundancy shrinks.
5.3 More Intelligent Auxiliary Operation in the Monitoring System

The monitoring system combines the current state information of the UAV system, extracts the operator's current operation, predicts the operator's operation intention, and provides operation suggestions by searching an interactive action-sequence library. This will reduce the operator's cognitive and memory burden, reduce untimely operations and operational errors caused by fatigue, and help operators handle abnormal situations quickly and accurately.
References

1. Zhang, G.H., Lao, S.Y., Ling, Y.X.X., Ye, T.: Research on multiple and multimodal interaction in C2. J. Natl. Univ. Defense Technol. 32, 153–159 (2010)
2. Chen, Y.G., Yuan, H., Huang, X.T., Liu, G.Y.: The course and the neural sources of time perception. J. Southwest Univ. (Soc. Sci. Ed.) 2, 1–10 (2011)
3. Shi, J.F., Cao, X.H., Wang, G., Qu, B.: An eye movement study on visual search of web page layout 04, 1–3 (2008)
4. Xu, W., Chen, Y.: Application of human factors in developing civil aircraft and recommendation. Aeronaut. Sci. Technol. 6, 18–21 (2012)
Resource Preallocation Based on Long and Short Periods Combined Task Volume Prediction

Senhao Zhu
Abstract With the rapid development of cloud computing, systems face challenges such as task volumes that change constantly and resource allocation that must be optimized continuously. This paper comprehensively considers the long-term regularity and the short-term volatility of the system task volume and proposes a prediction method combining long and short periods. The method combines GRU and ARIMA models to effectively model the regularity of traffic or task-volume change. Users can adjust the model weights according to task characteristics and types to obtain the predicted task volume, so as to allocate system computing resources reasonably. Experiments show that the prediction algorithm based on the GRU and ARIMA models has high prediction accuracy and, when applied to a distributed resource preallocation management system, can effectively improve resource utilization efficiency, relieve the pressure on computing nodes, and improve system throughput.

Keywords Task volume prediction · Resource management · ARIMA · GRU
S. Zhu (B)
Beijing University of Posts and Telecommunications, Beijing 100876, China
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
Y. S. Shmaliy and A. Nayyar (eds.), 7th International Conference on Computing, Control and Industrial Engineering (CCIE 2023), Lecture Notes in Electrical Engineering 1047, https://doi.org/10.1007/978-981-99-2730-2_99

1 Introduction

In the early days of the Internet, user access was low, and only a small number of servers were needed to meet user needs. With the development of the Internet and cloud computing, a single server can no longer meet user needs; to serve users more quickly, a distributed system with multiple servers must be adopted. For cloud computing, the number of working nodes on the cloud affects the throughput of the distributed system, and different throughput gives users different service experiences. When the distributed cluster has only a few working nodes, the computing power of the distributed system is low, and the task queue of the
system accumulates a large number of tasks, which seriously affects the user experience. If a large number of nodes are working in the distributed cluster, the computing power of the distributed system improves; however, when the total number of tasks decreases, a large number of working nodes may waste resources. Excess computing resources lead to wasted hardware, extra energy and equipment consumption, and higher operating costs for service providers. Lee et al. [1] proposed a dynamic resource management scheme to address the dynamic demand for resources during the execution of distributed tasks, adjusting resources according to computing time and communication time. Gao et al. [2] proposed a resource scheduling method for CPUs and GPUs in 2018. In 2019, Gu et al. [3] proposed GPU resource scheduling algorithms based on different task priorities. Peng et al. [4] proposed the Optimus algorithm, which dynamically adjusts the number of working nodes in distributed machine learning at runtime. Zhang et al. [5] proposed the SLAQ algorithm, which models the execution quality of distributed tasks and allocates resources according to the variation of the loss values of different models, significantly improving task execution quality and reducing average delay. This study mainly targets the management of the number of working nodes in the distributed system. Under different levels of user access, the distributed cluster starts different numbers of working nodes to meet the dynamically changing resource demand. By means of traffic prediction, this paper predicts the future task volume and thereby derives the appropriate number of working nodes.
2 The Proposed Method

Resource preallocation is based on task volume prediction combining long and short periods.
2.1 Task Volume Prediction Based on Long and Short Periods The daily task volume of the system is affected by the task volume in the recent period, and it also exhibits long-term periodicity, i.e., people do similar work every week. Therefore, the amount of data processed in the previous weeks can be used to predict the amount of data in the future. For example, to predict the task volume of next Monday, the task volume of each Monday in the past few weeks can be used to make predictions. This study proposes the combination of the GRU and ARIMA models for task volume prediction. The GRU model is used to make predictions based on the task volume of the last week, and the ARIMA model is used to predict the task volume
Fig. 1 The workflow of task volume prediction (from the task volume data, the task volume of the past 7 days is extracted for the GRU model and the task volume of each Monday in the previous weeks is extracted for the ARIMA model; the two outputs are fused into the task volume forecast)
of the next week based on the data volume of the previous weeks. The workflow is shown in Fig. 1.

1. GRU Model

The GRU is a type of recurrent neural network. Like long short-term memory (LSTM), it was proposed to solve problems such as long-term memory and gradients in backpropagation. The GRU unit structure is shown in Fig. 2 and Eq. (1), where z_t represents the proportion of the candidate state h̃_t that needs to be retained at time t, r_t represents the proportion of h_{t−1} received at time t that needs to be retained, their combination produces the final output h_t, and W is the corresponding weight matrix:

z_t = σ(W_z · [h_{t−1}, x_t]),
r_t = σ(W_r · [h_{t−1}, x_t]),
h̃_t = tanh(W_h̃ · [r_t ⊙ h_{t−1}, x_t]),
h_t = (1 − z_t) ⊙ h_{t−1} + z_t ⊙ h̃_t.  (1)
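Eq. (1) can be checked with a minimal NumPy sketch of a single GRU step; the weight shapes and the random smoke-test inputs below are illustrative assumptions, and bias terms are omitted as in the equation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(h_prev, x_t, Wz, Wr, Wh):
    """One GRU step per Eq. (1); each weight acts on the concatenation [h_{t-1}, x_t]."""
    hx = np.concatenate([h_prev, x_t])
    z = sigmoid(Wz @ hx)                                       # update gate z_t
    r = sigmoid(Wr @ hx)                                       # reset gate r_t
    h_tilde = np.tanh(Wh @ np.concatenate([r * h_prev, x_t]))  # candidate state
    return (1.0 - z) * h_prev + z * h_tilde                    # new hidden state h_t

# Smoke test with fixed random weights (bias terms omitted, as in Eq. (1)).
rng = np.random.default_rng(0)
H, X = 3, 2
Wz, Wr, Wh = (rng.standard_normal((H, H + X)) for _ in range(3))
h = gru_step(np.zeros(H), np.ones(X), Wz, Wr, Wh)
print(h.shape)
```

Because h_t is a convex combination of h_{t−1} and tanh output, each component stays inside (−1, 1) when starting from a zero state.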
2. ARIMA Model

The ARIMA model is composed of an autoregressive (AR) model and a moving average (MA) model. AR models mainly describe the relationship between the current moment and historical moments: the AR model fits the historical data and predicts the current value from the fitting result. The AR model is represented as follows:

y_t = c + Σ_{i=1}^{p} γ_i y_{t−i} + ε_t,  (2)
where c is a constant, taken as 0 in this study, and p is the lag of the time-series data, indicating how far back the data are used to predict the future data; in this study, p = 5. Moreover, γ_i is the autocorrelation coefficient, indicating the degree of influence of the historical data on the current value, and ε_t is a white-noise sequence assumed to have zero mean.

Fig. 2 The GRU unit structure

The MA model pays more attention to the influence of the error term. Its role in the ARIMA model is primarily to adjust the error term in the model, and its formula is as follows:

y_t = ε_t + Σ_{i=1}^{q} δ_i ε_{t−i},  (3)
where q is the lag of the prediction error and δ_i is the MA coefficient. According to Eqs. (2) and (3), the formula of the ARIMA model is as follows:

y_t = c + Σ_{i=1}^{p} γ_i y_{t−i} + ε_t + Σ_{i=1}^{q} δ_i ε_{t−i}.  (4)
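As an illustration of the AR part in Eq. (2), the coefficients γ_i can be estimated by ordinary least squares; this is a simplified stand-in for a full ARIMA fit, with c = 0 as in the paper, and the AR(1) test series is an assumption for the sanity check.

```python
import numpy as np

def fit_ar(y, p=5):
    """Least-squares estimate of the AR(p) coefficients gamma_i of Eq. (2), with c = 0."""
    # Row t of X holds [y_{t-1}, ..., y_{t-p}]; the target is y_t.
    X = np.column_stack([y[p - i:len(y) - i] for i in range(1, p + 1)])
    gamma, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
    return gamma

def ar_predict(y, gamma):
    """One-step-ahead AR forecast from the most recent p observations."""
    p = len(gamma)
    return float(gamma @ y[-1:-p - 1:-1])  # ordered y_{t-1}, y_{t-2}, ...

# Sanity check on a series that is exactly AR(1): y_t = 0.8 * y_{t-1}.
y = 0.8 ** np.arange(30)
gamma = fit_ar(y, p=1)
print(round(float(gamma[0]), 3))  # -> 0.8
```

On the exactly autoregressive series the recovered coefficient matches 0.8 to machine precision, and the one-step forecast equals 0.8^30.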
3. Long- and Short-Term Task Volume Prediction

This work combines the GRU and ARIMA models and proposes a long- and short-term combined prediction algorithm called LSP. The prediction formula is

y_pred = α · GRU(x_before_seven) + (1 − α) · ARIMA(x_before),  (5)
where α is the scale factor, the value range of which is [0, 1], representing the influence of different models on the final predicted value. When the value of α is close to 1, the output results are more inclined to the GRU model; when the value of α is close to 0, the output results are more inclined to the ARIMA model. In practice, there is a strong temporal correlation between tasks. Compared with the long-term task volume, the short-term task volume plays a greater role in predicting the future task volume. In this study, the scale factor α is taken as 0.6.
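Eq. (5) itself is a one-line weighted fusion; a minimal sketch with the paper's α = 0.6, where the two forecast inputs stand for the GRU and ARIMA model outputs:

```python
def lsp_predict(gru_forecast: float, arima_forecast: float, alpha: float = 0.6) -> float:
    """Eq. (5): weighted fusion of the short-period (GRU) and long-period (ARIMA) forecasts."""
    assert 0.0 <= alpha <= 1.0
    return alpha * gru_forecast + (1.0 - alpha) * arima_forecast

# alpha = 0.6 leans toward the short-term GRU branch, as chosen in the paper.
print(lsp_predict(120.0, 100.0))  # -> 112.0
```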
2.2 Resource Preallocation Based on Task Volume Prediction

There are many computing resources in a distributed learning cluster, but they are not always running. To reduce energy consumption and maintenance costs, usually only some computing resources are powered on; however, it is difficult to determine how many. In this study, a heuristic algorithm manages the number of running nodes in the cluster: if the predicted number of running nodes is greater than the current number, the resource manager adds running nodes; if it is less, running nodes are removed, thereby reducing operation and maintenance costs.

The time taken to complete different types of tasks varies greatly: some tasks require little execution time, while others, such as machine learning tasks, may take a long time. Evaluating the number of computing nodes required for a task is therefore a difficult problem. There are several ways to determine the number of computing nodes: (1) use a fixed number of nodes; (2) use an averaging method. This study uses the exponentially weighted moving average (EWMA) to calculate the average number of tasks completed by a node. With this method, the weight of each previous value decreases exponentially with time; compared with the traditional averaging method, it does not require all past values to be saved, and the computational workload is significantly reduced. The formula for the average daily number of tasks completed by a computing node is

task_new = ρ · task_old + (1 − ρ) · task_past,  (6)
where ρ is a scale factor representing the impact of past data on the average value, task_past is the task volume of the past day, task_old is the EWMA value from the previous update, and task_new is the value calculated at the current moment. Using Eqs. (5) and (6), the number of running computing nodes required by the distributed computing cluster on the current day can be calculated as follows:

w_pred = y_pred / task_new.  (7)
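Eqs. (6) and (7) translate directly into code. The paper does not give a value for ρ, so ρ = 0.9 below is an assumption, and rounding w_pred up so the cluster is never under-provisioned is our illustrative choice, not a rule stated in the paper.

```python
import math

def ewma_update(task_old: float, task_past: float, rho: float = 0.9) -> float:
    """Eq. (6): EWMA of the daily tasks completed per node (rho = 0.9 is assumed)."""
    return rho * task_old + (1.0 - rho) * task_past

def nodes_needed(y_pred: float, task_new: float) -> int:
    """Eq. (7), rounded up so capacity is never under-provisioned (our choice)."""
    return math.ceil(y_pred / task_new)

task_new = ewma_update(task_old=50.0, task_past=70.0)   # about 52 tasks/node/day
print(task_new, nodes_needed(y_pred=1000.0, task_new=task_new))
```

The resource manager would then compare `nodes_needed(...)` against the number of currently running nodes and power nodes on or off accordingly.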
Fig. 3 The prediction results of the proposed LSP method
3 Experiment and Results

3.1 Dataset

The task volume of distributed learning can be represented by the size of the dataset used by the distributed learning model. In this research, the traffic data of the core network of a city in Europe were used as the training data to test the prediction algorithm. The dataset was collected from the Internet traffic of a European city from 06:57 on June 7, 2005, to 11:17 on July 31, 2005.
3.2 Task Volume Prediction and Resource Allocation

The prediction results are shown in Fig. 3: the orange line is the actual task volume, and the blue line is the prediction of the proposed LSP method, which achieved an accuracy rate of 97.1%. The experimental results show that the method can be used for task volume prediction while maintaining high model accuracy. For the same number of tasks, different numbers of running computing nodes impose different computing pressures on each node. Suppose the number of computing nodes is 60, and the static resource allocation method uses 30 of them. Figure 4 shows the average number of tasks each computing node must complete under the proposed resource allocation method and under static allocation. As the figure shows, the proposed scheme significantly reduced the number of tasks completed by each computing node in all cycles, avoiding the processing-performance degradation caused by running too many tasks on a single node.
Fig. 4 The average number of tasks completed by each computing node
4 Conclusion

In this paper, the ARIMA and GRU models are used to fit different types of historical task volume, and a weighted average is then used to fuse the two predicted values into a new task-volume prediction. Experiments show that the task volume prediction scheme based on ARIMA and GRU can accurately predict the future task volume.

Acknowledgements This research work was supported by the National Key R&D Program of China (Grant No. 2020YFB2104700).
References

1. Lee, Y.S.L., Weimer, M., Yang, Y., et al.: Dolphin: runtime optimization for distributed machine learning. In: International Conference on Machine Learning, ML Systems Workshop, New York City, pp. 1–14 (2016)
2. Gao, C., Ren, R., Cai, H.: GAI: a centralized tree-based scheduler for machine learning workload in large shared clusters. In: Vaidya, J., Li, J. (eds.) Algorithms and Architectures for Parallel Processing, ICA3PP 2018, vol. 11335, pp. 611–629. Springer, Cham (2018). https://doi.org/10.1007/978-3-030-05054-2_46
3. Gu, J., Chowdhury, M., Shin, K.G., et al.: Tiresias: a GPU cluster manager for distributed deep learning. In: 16th USENIX Symposium on Networked Systems Design and Implementation (NSDI 19), pp. 485–500 (2019)
4. Peng, Y., Bao, Y., Chen, Y., et al.: Optimus: an efficient dynamic resource scheduler for deep learning clusters. In: Proceedings of the Thirteenth EuroSys Conference, pp. 1–14 (2018)
5. Zhang, H., Stafman, L., Or, A., et al.: SLAQ: quality-driven scheduling for distributed machine learning. In: Proceedings of the 2017 Symposium on Cloud Computing, pp. 390–404 (2017)
A Loss Management Method for the Spaceborne Multi-control Processing Ying Zhang, Feng Tian, Cunsheng Jiang, Shihui Wang, and Yizhu Tao
Abstract The multi-control system for spaceborne task processing uses software to complete the power management and error correction of the configuration program, giving the spaceborne task highly reliable operation capability. The system in this paper uses software to correct the configuration program and wakes up the multi-redundant fault-tolerant programming control strategy only before writing to the FPGA, reducing power consumption while ensuring high program reliability. Specific addresses that exceed the error-correction capability are written to telemetry, and the ground updates the erroneous blocks by uplink. During testing, by observing the loading and power-consumption control of the multiprocessor for the spaceborne task, it was verified that the system achieves the intended power-consumption control. The operation of the control processing is represented by a typical controller. The multi-control power-mode management for spaceborne task processing realizes highly reliable on-orbit multi-control with low-cost circuits.

Keywords Spaceborne · Management · Attrition · Control
Y. Zhang (B) · F. Tian · C. Jiang · S. Wang
Beijing Aerospace Automatic Control Institute, Beijing 100854, China
e-mail: [email protected]
National Key Laboratory of Science and Technology on Aerospace Intelligent Control, Beijing 100854, China

Y. Tao
Yuying School, Beijing 100018, China

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
Y. S. Shmaliy and A. Nayyar (eds.), 7th International Conference on Computing, Control and Industrial Engineering (CCIE 2023), Lecture Notes in Electrical Engineering 1047, https://doi.org/10.1007/978-981-99-2730-2_100

1 Introduction

Ma N in 2018 and Gu X in 2020 proposed that multiprocessing with mode-level power management is limited by the configuration storage, since PROM and MRAM are comparatively expensive [1–4], as is Nor flash. Wang P in 2018 typically
figured out that Nor flash has the advantages of large capacity, moderate price, and repeatable programming [5, 6], but Nor flash still has a very small probability of single-event upset [7–9]. The configuration program of a processor such as an FPGA cannot tolerate bit flips, and this problem would also cause a large loss of power consumption. To solve this problem, the system in this paper uses software to correct the configuration program and wakes up the multi-redundant fault-tolerant programming control strategy only before writing to the FPGA, reducing power consumption while ensuring high program reliability. If an error beyond the correction capability occurs on a channel (error correction uses an extended Hamming code, with 8 bits able to correct a 1-bit flip), the system wakes up only the relevant processing circuit and writes the specific address that exceeded the correction capability to telemetry; the ground then updates the erroneous block by uplink, solving the single-event upset problem fundamentally. During testing, error information can be injected artificially to observe whether the processor loads normally, and it has been verified that the system implements a multi-controller management method for spaceborne task processing.
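The "8 bits correcting a 1-bit flip" scheme reads as an extended Hamming (SECDED) code; the sketch below shows the principle with a Hamming(8,4) code that corrects single flips and detects double flips. The flight software's actual bit layout is not specified in the paper, so this layout is only an illustration.

```python
# Sketch of the SECDED principle behind "8 bits correcting a 1-bit flip":
# an extended Hamming(8,4) code corrects any single flip, detects doubles.

def hamming84_encode(d):
    """Encode 4 data bits as the 8-bit codeword [p0, p1, p2, d1, p4, d2, d3, d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # covers Hamming positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # covers Hamming positions 2, 3, 6, 7
    p4 = d2 ^ d3 ^ d4          # covers Hamming positions 4, 5, 6, 7
    body = [p1, p2, d1, p4, d2, d3, d4]
    p0 = sum(body) % 2         # overall parity bit over the whole word
    return [p0] + body

def hamming84_decode(c):
    """Return (data_bits, status); status is 'ok', 'corrected' or 'double'."""
    body = list(c[1:])         # Hamming positions 1..7
    syndrome = 0
    for pos in range(1, 8):
        if body[pos - 1]:
            syndrome ^= pos    # syndrome = XOR of the set bit positions
    overall_ok = (c[0] + sum(body)) % 2 == 0
    status = "ok"
    if syndrome and not overall_ok:
        body[syndrome - 1] ^= 1        # single flip: correct it in place
        status = "corrected"
    elif syndrome and overall_ok:
        status = "double"              # two flips: beyond correction, report
    elif not overall_ok:
        status = "corrected"           # only the overall parity bit flipped
    return [body[2], body[4], body[5], body[6]], status

cw = hamming84_encode([1, 0, 1, 1])
cw[5] ^= 1                             # inject a single-event upset
print(hamming84_decode(cw))            # -> ([1, 0, 1, 1], 'corrected')
```

A flip beyond this capability (two bits in one codeword) is only detected, which mirrors the paper's fallback of reporting the address via telemetry for a ground-uplinked fix.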
2 Spaceborne Multi-control Processing

This paper sets up a test environment to simulate satellite power-consumption control. The computer sends configuration programs or remote-control instructions to the CPU through the bus. After the CPU receives the corresponding data, it forwards them to the corresponding board in the RS485 bus format, and the ACTEL device completes different functions according to the data content. The specific test results are shown in Fig. 1.
Fig. 1 Block diagram of the system deep sleep module
1. When the system is powered on, the Actel FPGA reads the configuration program from the NOR flash and writes it to the Xilinx FPGA, completing the load of the Xilinx FPGA; the running state is normal.
2. When the system receives a reload instruction from the ground, the Actel FPGA re-reads the configuration from the NOR flash and drives the loading pins of the Xilinx FPGA to complete the reload function; the running state is normal.
3. The Actel FPGA receives the working-mode control word from the satellite and forwards it to the Xilinx FPGA, which switches between recording, playback, and the other working modes according to the control word. After testing, the system runs normally.
4. The lower computer polls each board with telemetry instructions within a fixed period; each board returns its telemetry data in a fixed order, and the lower computer forwards it to the satellite, completing the downlink of the telemetry data. During the test, the telemetry data of each board was sent back to the satellite computer and found to be normal by analysis and comparison.
5. To extend the functions of the satellite, the same FPGA loads different configuration programs to perform different on-orbit processing algorithms, realizing function expansion on a single satellite; this trend toward in-orbit program reconstruction is the direction of future development. A new configuration program uploaded from the ground is sent to the corresponding board through the lower computer; if it is written to the default area, the system loads the new program at the next power-on. By uploading new configuration programs through the host computer, the system can load programs for different functions.
6. The NOR flash stores multiple versions of the loader. By default, the system starts reading the configuration program from address 0 to complete the load; other versions are read from different addresses selected by remote-control instructions, realizing the on-orbit program-switching function.

The above tests show that the circuit provides in-orbit reconfiguration management. In the extreme case, when the correction error exceeds the error-correction capacity, power-consumption control must still be achieved: the ground judges from the downlinked data which program needs to be replaced and uploads a new one, and the system only has to replace the program at the location of the bit flips, covering the faulty program and ensuring the high reliability of the system.
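The multi-version loading logic described above can be sketched as a small slot-selection routine. This is a hypothetical illustration, not the flight software: the slot size, the validity flags, and the function names are all assumptions made for the example.

```python
REGION_SIZE = 0x40_0000  # assumed size of one configuration slot in NOR flash

def load_address(slot):
    """Start address of a configuration slot; slot 0 is the power-on default."""
    return slot * REGION_SIZE

def choose_config_slot(requested, valid):
    """Pick the slot to load: the remotely commanded slot if it is marked
    valid, otherwise fall back to the default slot 0 (flash address 0)."""
    if requested is not None and valid.get(requested, False):
        return requested
    return 0

# Power-on with no remote command: load from address 0 by default.
slot = choose_config_slot(None, {0: True, 1: True})
addr = load_address(slot)
```

The fallback branch mirrors the recovery path in the text: if an uploaded slot is marked bad, loading reverts to the default region until the ground replaces the faulty program.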
3 Loss Management Method Simulation

In order to comprehensively analyze the applicability of the various chips in the multi-mode autonomous embedded system, the multi-mode control method provided by the FPGA processor is analyzed with emphasis. In the next part, the different modes of
Y. Zhang et al.
the above processors are introduced one by one in combination with the requirements of multi-mode control. The FPGA is one of the main power consumers of the system; its operating voltage, operating frequency, process technology, selected I/O standards, floating-pin states, clock circuits, configuration circuits, and memory are the main sources of its power consumption. The static power of an FPGA is the sum of all bias currents and the transistor leakage power. As integrated circuits have advanced, FPGA leakage has become serious at the 90 nm node and still more severe at 65 nm. Lowering the threshold voltage gives better transistor performance, but the leakage current also increases. Static power consumption varies greatly with process conditions: between the worst-case and typical process corners it changes by 2:1. The core voltage (VCCINT) is also closely related to the leakage current, which is proportional to the cube of the core voltage: if VCCINT increases by 5%, the leakage power increases by about 15%. Leakage current is also closely related to chip temperature. I/O power consumption is likewise substantial and differs greatly between I/O standards; choosing a low-power I/O standard can reduce power consumption at the expense of speed or logic utilization. For example, the LVDS high-speed interface has relatively high power consumption: each input pair draws 3 mA and each output pair 9 mA. LVDS is used when the system requires high performance. The HSTL and SSTL standards have relatively low power consumption, with an input current of 3 mA. When constraining FPGA pins, unused pins that are left unconstrained and floating may enter a metastable state because their driving state is unknown, which causes large power loss, possibly even greater than the input power of a driven pin.
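The quoted cube law can be checked numerically: a 5% rise in VCCINT gives 1.05³ ≈ 1.158, i.e. roughly the 15% leakage increase stated above. A minimal sketch (the function name and the reference voltage of 1 V are illustrative):

```python
def leakage_scale(vccint, vccint_ref=1.0):
    """Relative leakage power under the cube law quoted in the text:
    leakage power is proportional to VCCINT**3."""
    return (vccint / vccint_ref) ** 3

# A 5% core-voltage increase yields roughly a 15% leakage increase.
increase = leakage_scale(1.05) - 1.0  # 1.05**3 - 1 ≈ 0.158
```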
Therefore, a known state must be defined for every unused input pin to avoid the effects caused by floating pins. The power consumption of RAM accounts for a large proportion, coming mainly from the RAM clock signal and the read and write operations. Of the two, a read consumes more power than a write, and the larger the Hamming distance between consecutive operations, the greater the power consumption. Meanwhile, the number of RAM blocks is positively correlated with power consumption. Therefore, during reads and writes, the Hamming distance between consecutive operation addresses should be reduced as far as possible, the number of read and write operations reduced, and the number of switching events reduced, so that more data is read at a time. At the same time, alternating reads and writes should be avoided in favor of continuous read or write bursts. RAM can also be read and written on opposite clock edges, reducing its peak power consumption. Standby power is the designed idle power consumption, which consists of the static power consumption at rated temperature. The CLBs, configuration circuits, and clock circuits account for a large share of standby power, but I/O and BRAM power
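One common way to keep the Hamming distance between consecutive addresses small, in line with the advice above, is Gray-code address ordering, in which adjacent addresses differ in exactly one bit. The sketch below is an illustration of the principle, not part of the design in the paper; it counts bit toggles for binary versus Gray ordering of eight addresses:

```python
def gray(n):
    """Convert a binary index to its Gray-code equivalent."""
    return n ^ (n >> 1)

def hamming(a, b):
    """Number of differing bits between two addresses."""
    return bin(a ^ b).count("1")

def total_toggles(seq):
    """Sum of Hamming distances between consecutive addresses."""
    return sum(hamming(x, y) for x, y in zip(seq, seq[1:]))

binary_order = list(range(8))             # 0, 1, 2, ..., 7
gray_order = [gray(i) for i in range(8)]  # 0, 1, 3, 2, 6, 7, 5, 4

# Gray ordering toggles exactly one address bit per step,
# while plain binary counting toggles several bits on carries.
```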
Fig. 2 Schematic of the mode switching pins
Fig. 3 FPGA core voltage switching process
consumption cannot be ignored. The dynamic power consumption of the FPGA is mainly embodied in the power consumption of the memory, internal logic, clock, and I/O. The mode switch for power-consumption control is integrated on the board: when pin 1 is switched to ON, the boot mode is QSPI flash mode; when pin 1 is switched to OFF, the boot mode is SD-card mode. Both modes support the JTAG debug mode. This is shown in Fig. 2. The low-power mode of the FPGA is achieved by reducing the core voltage VCCINT. The rise and fall of the core voltage are handled by controlling the power module LTM 4644, and the output voltage of the LTM4677 is controlled by switching the set resistor on the output-voltage selection pin. The output-voltage selection pins are V OUT0 CFG and V OUT1 CFG, which control the output voltages of the two channels respectively. When the output-voltage selection pin is connected through a 2 kΩ resistor, the output voltage is 1 V. The switching of the control resistance is performed by the low on-resistance analog switch ADG802, which has an on-resistance of at most 0.4 Ω and a leakage current within ±30 nA. Since the FPGA requires a certain power-up sequence, the other voltages of the FPGA need to be turned off when switching the core voltage, so the mode switching must be completed with the help of the DSP. The switching process is shown in Fig. 3. The FPGA's core voltage is reduced from 1 V to 0.9 V and raised from 0.9 V back to 1 V. Since the LTM4677's soft-start time is 3 ms, plus the time of each intermediate step, the switch takes 50 ms, which meets the switching time in the design specification. The FPGA can also reduce dynamic power consumption by gating the clock, stopping the clock from driving the internal logic. It is also possible to turn parts of the clock off and on by controlling the enable terminal of the clock chip. For devices with a multi-voltage power supply, it is often necessary to power the core voltage first, and then apply the external voltage after the core voltage is stable.
However, many power-supply chips do not support sequenced power-on by themselves, so a reset-delay scheme is adopted; it provides the reset-delay function, as shown in Fig. 4.
Fig. 4 Power-on sequence
4 Conclusion

In this paper, a multi-control method for spaceborne task processing is proposed. The control-system software completes the power-consumption management and the error correction of the configuration program, giving the spaceborne task highly reliable operation. Each level of control works with its own memory configuration. The specific addresses whose errors exceed the error-correction capability are written into the telemetry, and the ground updates the faulty blocks by uploading replacements, maximizing the efficiency of the overall power-control conversion. During testing, observation of the power consumption of the multi-processor verified that the energy-control design, represented by the adjustment of the processor core voltage, is realized in operation and that highly reliable multi-control in orbit can be achieved.
References

1. Ma, N., Zhang, X., Zheng, H.-T., Sun, J.: ShuffleNet V2: practical guidelines for efficient CNN architecture design. In: Ferrari, V., Hebert, M., Sminchisescu, C., Weiss, Y. (eds.) Computer Vision – ECCV 2018. Lecture Notes in Computer Science, vol. 11218, pp. 122–138. Springer, Cham (2018). https://doi.org/10.1007/978-3-030-01264-9_8
2. Mehta, S., Rastegari, M., Shapiro, L., et al.: ESPNetv2: a light-weight, power efficient, and general purpose convolutional neural network, pp. 1–9 (2018). arXiv preprint arXiv:1811.11431
3. Zhang, Y., Ma, Z., Niu, Z., Feng, L.: High level integrated system of space detector network integration. J. Phys. Conf. Ser. 2242(1), 012040 (2022). https://doi.org/10.1088/1742-6596/2242/1/012040
4. Guo, X., Tang, Y., Wu, M.: FPGA-based hardware-in-the-loop real-time simulation implementation for high-speed train electrical traction system. IET Electr. Power Appl. 14(5), 1–6 (2020)
5. Wang, P., Hu, Q., Zhang, Y., et al.: Two-step quantization for low-bit neural networks. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, vol. 6, pp. 4376–4384 (2018)
6. Hu, Q., Li, G., Wang, P., Zhang, Y., Cheng, J.: Training binary weight networks via semi-binary decomposition. In: Ferrari, V., Hebert, M., Sminchisescu, C., Weiss, Y. (eds.) Computer Vision – ECCV 2018. Lecture Notes in Computer Science, vol. 11217, pp. 657–673. Springer, Cham (2018). https://doi.org/10.1007/978-3-030-01261-8_39
7. Ying, Z., Xing, Z., Jian, C., Hui, S.: Processor free time forecasting based on convolutional neural network. In: Proceedings of the 37th Chinese Control Conference, CCC 2018, vol. 7, pp. 9331–9336. IEEE, Wuhan (2018)
8. Wu, B.C., Wan, A., Yue, X.Y., et al.: Shift: a zero FLOP, zero parameter alternative to spatial convolutions. arXiv preprint arXiv:1711.08141 (2017)
9. Zhang, Y., Zhao, Q., Tao, L., Cao, J., Wei, M., Zhang, X.: A real-time online aircraft neural network system. In: IWOFC (International Workshop on Future Computing), ICCASIT 2019, vol. 12, pp. 1–6. IEEE, Hangzhou (2019)
Reliability Simulation of Phased-Mission System with Multimode Failures Zhicheng Chen and Jun Yang
Abstract This paper puts forward a simulation approach based on ExtendSim for the reliability analysis of phased-mission systems (PMS) with multimode failures. Firstly, the paper points out that the BDD method suffers from complex application and a lack of generality when solving the reliability-assessment problem of PMS with multimode failures. By analyzing PMS, we show that a PMS is a kind of discrete-event simulation system whose critical events are component failure and phase transition. The simulation process is established by applying Monte Carlo theory, and the two critical events are modeled using the Shutdown module and the Create module in ExtendSim. For a case of equipment reliability evaluation, a reliability simulation model of a PMS with multimode failures is established. The simulation result shows that the model built is credible and, in contrast with the BDD approach, has strong suitability and is easy to build.

Keywords Phased-mission systems · Multimode failures · Reliability simulation model · ExtendSim
1 Introduction

Many systems perform a series of tasks which must be carried out in consecutive, non-overlapping time periods (phases). These systems are usually called phased-mission systems (PMS). Compared with single-phased systems, reliability analysis of PMS is much more difficult because of the dependence across the phases [1]. Recently, an emerging concern is to analyze PMS with multimode failures. Markov models can be used in multimode failure modeling and analysis [2]. However, these models suffer from the state-explosion problem when the number of components becomes large.

Z. Chen (B) Department of Economics and Management, Wenhua College, Wuhan, China e-mail: [email protected] J. Yang Department of Art Design, Wuhan Polytechnic, Wuhan, China © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 Y. S. Shmaliy and A. Nayyar (eds.), 7th International Conference on Computing, Control and Industrial Engineering (CCIE 2023), Lecture Notes in Electrical Engineering 1047, https://doi.org/10.1007/978-981-99-2730-2_101
Researchers then proposed an efficient, powerful technique called the Binary Decision Diagram (BDD), which offers low computational complexity and low storage-space consumption. Zang et al. [3] were the first to use the BDD method to analyze the reliability of phased-mission systems. Tang et al. [4] then extended the BDD algorithms to the analysis of systems containing components with multiple competing failure modes. Reed et al. [5] introduced improved algorithms for the analysis of phased missions with multiple-failure-mode components, resulting in smaller BDDs, faster repeat analysis, and improved accuracy. Although some progress has been made in this area, the disadvantages of using BDD to analyze the reliability of PMS with multimode failures are the following: (1) several preprocessing steps are required before the BDD can be programmed, which makes using BDD complicated; (2) the BDD form depends heavily on the variable ordering, yet no generally optimal BDD variable-ordering approach has been found. Therefore, we propose a multimode PMS simulation model based on ExtendSim. It makes the modeling process simpler and does not need to consider the dependence across the phases. The structure of the paper is as follows: Sect. 2 introduces PMS with multimode failures and the system assumptions, Sect. 3 describes the simulation flow and presents the PMS simulation modeling method based on ExtendSim, and Sect. 4 shows the effectiveness of the proposed modeling approach with a PMS example.
2 System Description

PMS with multimode failures are very common in the real world. A typical example of such systems is the ion propulsion system, which consists of a PPU, two ion engines, and two propellant valves. The configurations of the system in the two phases are identical, but with different component failure rates. The PPU has three failure modes: failure to start, failure to operate, and failure to shut down; each failure mode can cause system failure. Each engine has two failure modes: failure to start and failure to operate, each with the effect of ion-engine loss. Each propellant valve has three failure modes: failure to open, failure to close, and external leakage. The first failure mode of a valve induces the loss of the corresponding engine, while the other two failure modes cause the failure of the whole system. The methods and models discussed in this paper make the following assumptions for a phased mission:

• The mission consists of a set of consecutive phases.
• The time duration for each phase is known and fixed.
• For mission success, all phases must be completed.
• Each component in the PMS has multiple competing failure modes, i.e., it can fail in distinct and mutually exclusive ways.
Fig. 1 An example of PMS (two phases; components X1 and X2)
3 PMS Simulation Model

3.1 Simulation Flow

From the simulation standpoint, we can conclude that a PMS is a kind of discrete-event simulation system, and only two events may cause the failure of a PMS: (1) component failure: the failure of a critical component, such as a component of a serial subsystem, can lead to system failure; (2) phase transition: sometimes the transition between phases can be the cause of system failure. Consider the non-repairable PMS illustrated in Fig. 1: when component X1 fails in phase 1, the system can continue working until the PMS transitions into phase 2. Consequently, if we can handle the two events above, the reliability of the PMS can be obtained by simulation. Figure 2 sketches the simulation flow of a PMS. Suppose the simulation count is N and the mission state of one simulation run is the random variable x (system success: x = 1, system failure: x = 0); according to Monte Carlo simulation theory, the reliability of the PMS is the sample mean of x:

R = \frac{1}{N}\sum_{i=1}^{N} x_i \qquad (1)
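Equation (1) is the standard Monte Carlo estimator. As a minimal illustration (not the ExtendSim model itself), the sketch below estimates the reliability of a single exponentially failing component over a 100 h mission and compares it with the closed form e^(−λT); the failure rate, mission time, and sample size are chosen arbitrarily for the example:

```python
import math
import random

def estimate_reliability(lam, mission_time, n, seed=1):
    """Monte Carlo estimate R = (1/N) * sum(x_i), where x_i = 1 iff the
    sampled time to failure exceeds the mission time."""
    rng = random.Random(seed)
    successes = sum(rng.expovariate(lam) > mission_time for _ in range(n))
    return successes / n

lam, t = 1.0e-3, 100.0
r_hat = estimate_reliability(lam, t, 200_000)
r_true = math.exp(-lam * t)  # analytic reliability, ≈ 0.9048
```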
3.2 Simulation Model Based on ExtendSim

ExtendSim is simulation software developed by the Imagine That company. It can build models for discrete-event systems and continuous systems, and its main features are intuitive modeling and easy debugging. Accordingly, we adopt ExtendSim to build the simulation model.

Failure Modeling. To model failures of the PMS, we use the Shutdown module in ExtendSim, which generates the appropriate "failure" information (component fails: output = 1, component works: output = 0) once the distribution function for the time between failures (TBF) is designated. In fact, it can also model a repairable component
Fig. 2 PMS Simulation Flowchart
by designating the distribution function for the time to repair (TTR). If the component is non-repairable, making its TTR a constant greater than the mission time is appropriate. For multimode failures, it should be noted that the failure modes are mutually exclusive; we employ the Equation module to model this restriction. Figure 3 shows a component with three failure modes. The Equation code is listed below:

if (failmode1==1 or failmode2==1) // another failure mode occurs
    fail=0;
else
    fail=InputValue;
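The same mutual-exclusion rule enforced by the Equation module can be expressed as a small function. This is a sketch with hypothetical names mirroring the block above: a mode's failure output is suppressed whenever any competing mode has already failed.

```python
def gated_fail(input_value, other_modes_failed):
    """Mirror of the ExtendSim Equation block: pass through the Shutdown
    module's failure signal only if no competing failure mode has occurred."""
    if any(other_modes_failed):
        return 0  # another failure mode occurred first, so this one cannot
    return input_value

# Mode 3 fires only while modes 1 and 2 are still healthy.
```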
Fig. 3 Multimode Failures Modeling
Table 1 Create schedule

Create Time | Value
0           | 1
5           | 2
15          | 3
Phase Modeling. As indicated in Sect. 3.1, phase transitions are critical simulation events. The Create module in ExtendSim is capable of creating values on a schedule. We can let it create the next phase number when the transition time arrives, which helps us identify which phase the PMS is in and which system structure we should use to check whether the PMS has failed. For example, if a PMS has two phases, the length of phase 1 is 5 h, and that of phase 2 is 10 h, then the schedule would be as shown in Table 1. Through failure modeling and phase modeling, the two simulation events that can make the system fail are created. When these events occur, we can determine whether a system failure happens and whether the mission succeeds at the simulation end time. For more information about the simulation model based on ExtendSim, see the next section.
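The Create schedule in Table 1 amounts to a step function from simulation time to phase number. A minimal lookup sketch (illustrative only; the value 3 at t = 15 is taken here as the marker ending the two-phase example):

```python
# (create time, value) pairs taken from Table 1
SCHEDULE = [(0.0, 1), (5.0, 2), (15.0, 3)]

def phase_at(t, schedule=SCHEDULE):
    """Return the last scheduled value whose create time is <= t."""
    value = schedule[0][1]
    for start, v in schedule:
        if t >= start:
            value = v
    return value
```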
4 Case Analysis

Let us illustrate the application of our method with an example from ref. [5]. This PMS consists of three phases with three components: A, B, and C. Component B has three failure modes. The system fault-tree configuration is shown in Fig. 4. The phase lengths of all three phases are 10 h each. The failure rates of components A and C are 2.0 × 10⁻⁴/h. The failure rates of all failure modes of component B are 1.0 × 10⁻⁴/h. Based on Sect. 3, we can build the simulation model shown in Fig. 5. The source code in the Judgement Equation module is listed below:

Fig. 4 PMS configuration in three phases
Fig. 5 Simulation Model
if (lastfail==1) // last phase failed, so this phase fails
    phasefail=1;
else {
    if (phase==1) phasefail=A||B1||B2||B3||C;
    if (phase==2) phasefail=A||(B1&&C)||B2||(B3&&C);
    if (phase==3) phasefail=(A&&B1&&C)||(A&&B2&&C)||B3;
}
It should be recognized that the Judgement Equation module outputs 1 when the system fails. As indicated in Sect. 3.2, what we need is a simulation that outputs 1 when the system succeeds. Therefore, we add a 'not' Math module after the Equation module to obtain the correct value. After running the simulation 10,000 times, the reliability of this PMS is 0.9886. Compared to the result 0.988 in ref. [4], the relative error is not more than 0.1%, which shows that the simulation model is credible.
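The case above can be reproduced outside ExtendSim with a plain Monte Carlo sketch, offered here as an independent check rather than the paper's model: times to failure are drawn per failure mode, component B's three modes are treated as competing (only the earliest can occur), and the phase structure functions are taken from the Judgement Equation code. With 50,000 runs the estimate lands near the reported 0.988.

```python
import random

RATES = {"A": 2.0e-4, "C": 2.0e-4, "B1": 1.0e-4, "B2": 1.0e-4, "B3": 1.0e-4}
PHASE_END = [10.0, 20.0, 30.0]  # three phases of 10 h each

def phase_fails(phase, f):
    """Phase structure functions from the Judgement Equation code."""
    if phase == 1:
        return f["A"] or f["B1"] or f["B2"] or f["B3"] or f["C"]
    if phase == 2:
        return f["A"] or (f["B1"] and f["C"]) or f["B2"] or (f["B3"] and f["C"])
    return (f["A"] and f["B1"] and f["C"]) or (f["A"] and f["B2"] and f["C"]) or f["B3"]

def one_mission(rng):
    ttf = {c: rng.expovariate(lam) for c, lam in RATES.items()}
    # B's modes compete: only the earliest of B1/B2/B3 can actually occur.
    first = min(("B1", "B2", "B3"), key=lambda m: ttf[m])
    for m in ("B1", "B2", "B3"):
        if m != first:
            ttf[m] = float("inf")
    for phase, end in enumerate(PHASE_END, start=1):
        failed = {c: ttf[c] <= end for c in ttf}
        if phase_fails(phase, failed):
            return 0
    return 1

rng = random.Random(7)
N = 50_000
R = sum(one_mission(rng) for _ in range(N)) / N  # expected near 0.988
```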
5 Conclusion

From the case above, the conclusion can be reached that the reliability simulation model built here can obtain the reliability of a PMS with multimode failures. Compared with the BDD approach, the simulation model is more intuitive and the modeling process is simpler. As indicated in Sect. 3.2, the model can accommodate repairable components and arbitrary distributions, which gives it strong suitability. Therefore, it is of academic value and worth promoting. In the
future, we will study and introduce elements such as multiple states, maintainability, and supportability to bring the simulation model closer to the actual situation, and finally complete the reliability, maintainability, and supportability simulation modeling of complex systems.
References

1. Xing, L., Amari, S.V.: Reliability of phased-mission systems. In: Misra, K.B. (ed.) Handbook of Performability Engineering, pp. 349–368. Springer London, London (2008). https://doi.org/10.1007/978-1-84800-131-2_23
2. Wu, X.Y., Hua, Y., Li, L.R.: Numerical method for reliability analysis of phased-mission system using Markov chains. Commun. Stat.-Theor. Methods 41(21), 3960–3973 (2012)
3. Zang, X.Y., Sun, N.R., Trivedi, K.S.: A BDD-based algorithm for reliability analysis of phased-mission systems. IEEE Trans. Reliab. 48(1), 50–60 (1999)
4. Tang, Z., Dugan, J.B.: BDD-based reliability analysis of phased mission systems with multimode failures. IEEE Trans. Reliab. 55(2), 350–360 (2006)
5. Reed, S., Andrews, J.D., Dunnett, S.J.: Improved efficiency in the analysis of phased mission systems with multiple failure mode components. IEEE Trans. Reliab. 60(1), 70–79 (2011)
Algorithm for the Operation of the Data-Measuring System for Evaluating the Inertial-Mass Characteristics of Space Debris A. V. Sedelnikov, M. E. Bratkova, and E. S. Khnyryova
Abstract The paper presents a method for evaluating the inertial-mass characteristics of a space debris object using measurement data. This evaluation method assumes the presence of a tug spacecraft and the subsequent transportation of the space debris to disposal orbits using a tether system. The concept of the data-measuring system that should be developed to evaluate the inertial-mass characteristics is presented. The requirements that the designed data-measuring system must meet for an effective evaluation are given. The structure of the data-measuring system, corresponding to the given requirements, has been developed. This data-measuring system consists of two parts: the first part is based on the tug spacecraft, and the second part is attached to the space debris. To estimate the angular velocity of rotation, it is proposed to use angular velocity sensors and magnetometers. It is assumed that the tug spacecraft can exert a controlled mechanical effect on the space debris, and measurement equipment is also used to control this effect. The estimation error is determined by comparing the values of the angular velocity of rotation of the tug spacecraft obtained using the angular velocity sensors and the magnetometers. To do this, the measurements are reduced to a coordinate system associated with the induction vector of the Earth's magnetic field. Examples of measurements carried out on the AIST-2D small spacecraft for remote sensing of the Earth are shown. An algorithm for evaluating the inertial-mass characteristics of space debris is presented. The results of the work can be used in the design of a tug spacecraft for transporting space debris, especially when developing the data-measuring system that makes it possible to evaluate the inertial-mass characteristics of space debris in order to solve the problem of its safe transportation effectively.

Keywords Data-measuring system · Space debris · Magnetometer · Evaluating the inertial-mass characteristics
A. V. Sedelnikov (B) · M. E. Bratkova · E. S. Khnyryova Samara National Research University, Samara, Russia e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 Y. S. Shmaliy and A. Nayyar (eds.), 7th International Conference on Computing, Control and Industrial Engineering (CCIE 2023), Lecture Notes in Electrical Engineering 1047, https://doi.org/10.1007/978-981-99-2730-2_102
1 Introduction

Nowadays, the problem of space debris is becoming more urgent. The growth in the number of launches of space objects [1–3] and the end of the active lifetime of already launched objects [4–6] determine the rapid increase in space debris in near-Earth orbit. According to a large number of experts, this problem can no longer be ignored; otherwise, the safety of near-Earth space flights will be significantly reduced [7–9]. Currently, many methods have been developed for cleaning up space debris from near-Earth orbits [10–13]. There is also a strong opinion about the need to design a spacecraft taking into account the requirements for its removal from orbit at the end of its active lifetime [14–16]. However, the problem of already launched space objects that have turned into space debris will remain relevant. In order to clean near-Earth space effectively using a tug spacecraft, in many cases it is necessary to know the inertial-mass characteristics of the captured space debris [17, 18]. This makes it possible to develop effective algorithms for controlling the movement of the tug spacecraft to perform the actions necessary for the capture and transportation of the space debris. Therefore, the identification of methods for estimating the inertial-mass characteristics of space debris objects is an important and urgent problem. Its solution will improve the efficiency of cleaning near-Earth space from space debris using tug spacecraft.
2 Materials and Methods

Transportation of space debris using a tug spacecraft is considered as the cleaning method (Fig. 1). It is assumed that the space debris is a spacecraft with unknown inertial-mass characteristics whose active lifetime has ended. These characteristics are to be evaluated using the designed data-measuring system. There is also an active, regularly functioning tug spacecraft.

Fig. 1 Space debris transportation using a tug spacecraft and tether system
The method for evaluating the inertial-mass characteristics of a space debris object assumes that the tug spacecraft has actuators for a controlled mechanical impact on the space debris, as well as for attaching a magnetometer to the space debris with a signal-transmission channel to the tug spacecraft. The method is based on measurements of the components of the Earth's magnetic field induction vector using magnetometers installed on board the tug spacecraft and fixed on the space debris object. From the measurement data, the parameters of the rotational motion of the space debris are estimated, and then the components of its inertia tensor are determined.
3 Structure of the Data-Measuring System

The designed data-measuring system must meet the following requirements:

• Providing direct measurements of the components of the angular velocity vector of the tug spacecraft;
• Providing direct measurements of the components of the Earth's magnetic field induction vector on the tug spacecraft;
• Evaluation of the components of the angular velocity vector of the tug spacecraft using the dynamics of the induction vector;
• Providing measurements of the parameters of the controlled mechanical impact on the space debris;
• Evaluation of the perturbations of the rotational motion of the space debris using the parameters of the controlled mechanical impact;
• Providing direct measurements of the components of the Earth's magnetic field induction vector on the space debris;
• Transmission of measurement data from the space debris to the tug spacecraft;
• Evaluation of the components of the angular velocity vector of the space debris using the dynamics of the induction vector;
• Evaluation of the components of the angular acceleration vector of the space debris using the dynamics of the angular velocity vector;
• Evaluation of the components of the space debris inertia tensor.

The structure of the data-measuring system that meets the listed requirements is shown in Fig. 2.
Fig. 2 Structure of the information and measurement system
According to Fig. 2, the designed data-measuring system will include:

• Angular velocity sensors;
• Three-component magnetometers;
• Thrust sensors;
• A receiving device;
• A transmitting device.
4 The Operation Algorithm for the Data-Measuring System

At the first stage of operation, the magnetometer with the transmitter must be rigidly fixed on the surface of the space debris. During any movement of the space debris, the magnetometer must not change the orientation of its instrument-frame axes relative to the body axis system of the space debris. Next, using the measurements of the angular velocity sensors, the angular velocity vector of rotation of the tug spacecraft is constructed. Measurements of the magnetometers make it possible to construct the induction vector of the Earth's magnetic field. In this case, it is necessary that the instrument-frame axes of the angular velocity sensors and magnetometers be equally oriented, for example, parallel to the main body axis system of the tug spacecraft. The vectors of the angular velocity and magnetic field induction will then be constructed in the main body axis system. An example of such measurements for the small Aist-2D spacecraft is shown in Fig. 3. The estimation of the angular velocity of rotation of the spacecraft from the measurements of the Earth's magnetic field induction vector should be carried out according to the formula [19]:
\vec{\omega} = \frac{\vec{B} \times \left(\dot{\vec{B}} - \dfrac{\tilde{d}\vec{B}}{dt}\right)}{B^2}, \qquad (1)
Fig. 3 The components of the measured vectors in the main body coordinate system: a) angular velocity (1 - ωx ; 2 - ωy ; 3 - ωz ); b) Earth magnetic field induction (1 - Bx ; 2 - By ; 3 - Bz )
where \vec{\omega} is the angular velocity vector of rotation of the tug spacecraft; \vec{B} is the Earth's magnetic field induction vector; \dot{\vec{B}} is the total time derivative of the Earth's magnetic field induction vector; \tilde{d}\vec{B}/dt is the local time derivative of the Earth's magnetic field induction vector. For \dot{\vec{B}} \approx 0 (measurements are made sufficiently often), we obtain the Boer formula [20]:

\omega_x = \frac{B_{y1}\left(B_{z1} - B_{z0}\right) - B_{z1}\left(B_{y1} - B_{y0}\right)}{\left(B_{x1}^2 + B_{y1}^2 + B_{z1}^2\right)\left(t_1 - t_0\right)};
\omega_y = \frac{B_{z1}\left(B_{x1} - B_{x0}\right) - B_{x1}\left(B_{z1} - B_{z0}\right)}{\left(B_{x1}^2 + B_{y1}^2 + B_{z1}^2\right)\left(t_1 - t_0\right)}; \qquad (2)
\omega_z = \frac{B_{x1}\left(B_{y1} - B_{y0}\right) - B_{y1}\left(B_{x1} - B_{x0}\right)}{\left(B_{x1}^2 + B_{y1}^2 + B_{z1}^2\right)\left(t_1 - t_0\right)}.
As is known [21], formula (2) directly estimates only the two components of the angular velocity vector that are perpendicular to the Earth's magnetic field induction vector. However, there are methods that allow all three components of the angular velocity vector to be obtained, for example [22]. To reveal the estimation error of (2), one should calculate the components of the angular velocity vector in a coordinate system one of whose axes coincides with the direction of the Earth's magnetic induction vector. Comparing the angular-velocity projections measured by the sensors with those estimated by formula (2) makes it possible to estimate the error. Simultaneously with the magnetometer measurements on the tug spacecraft, similar measurements should be carried out on the space debris. In this case, it should be understood that the instrument-frame axes of a magnetometer fixed on the space debris will be randomly oriented relative to its main body axis system. The dynamics of the relative orientation of the instrument-frame axes of the magnetometers located on the space debris and on the tug spacecraft can be estimated using the formula:

\vec{B}_{b1} = \hat{A}_{12} \cdot \vec{B}_{b2}, \qquad (3)
where b1 is the instrument frame coordinate system for magnetometers located on a tug spacecraft; b2 is the instrument frame coordinate system for magnetometers located on space debris; Aˆ 12 is the direction cosine tensor between instrument frame coordinate systems. After measurements sufficient for a reliable estimate of the angular velocity, it is necessary to exert a known perturbing effect on the space debris object. In this case, the magnitude of the impact should exceed other perturbing factors acting on space debris, so much so that they could be neglected in the analysis of the rotational motion of space debris. Further, using the Euler dynamic equations, we can estimate the components of the space debris inertia tensor:
$$
\begin{cases}
I_{xx}\dot\omega_x - I_{xy}\dot\omega_y - I_{xz}\dot\omega_z + \omega_y\!\left(I_{zz}\omega_z - I_{xz}\omega_x - I_{yz}\omega_y\right) - \omega_z\!\left(I_{yy}\omega_y - I_{xy}\omega_x - I_{yz}\omega_z\right) = M_x\\[4pt]
I_{yy}\dot\omega_y - I_{xy}\dot\omega_x - I_{yz}\dot\omega_z + \omega_z\!\left(I_{xx}\omega_x - I_{xy}\omega_y - I_{xz}\omega_z\right) - \omega_x\!\left(I_{zz}\omega_z - I_{xz}\omega_x - I_{yz}\omega_y\right) = M_y\\[4pt]
I_{zz}\dot\omega_z - I_{xz}\dot\omega_x - I_{yz}\dot\omega_y + \omega_x\!\left(I_{yy}\omega_y - I_{xy}\omega_x - I_{yz}\omega_z\right) - \omega_y\!\left(I_{xx}\omega_x - I_{xy}\omega_y - I_{xz}\omega_z\right) = M_z
\end{cases}
\tag{4}
$$
where $\mathbf{M} = (M_x, M_y, M_z)$ is the moment vector of the perturbing action, expressed in the axes of the instrument frame of the magnetometer located on the space debris. The errors in estimating the parameters of the controlled action and the angular velocity of the space debris determine the error in calculating the components of the space debris inertia tensor. The data obtained in this way can be useful for the efficient solution of the problem of transporting space debris using tether systems. If high accuracy is required when evaluating the inertial-mass characteristics, the designed data-measuring system should use several magnetometers.
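Equations (4) are linear in the six unknown inertia components, so with the applied moment known and several samples of the angular rate and acceleration, the inertia tensor can be fitted by least squares. A sketch under those assumptions (function names and the parameter ordering are illustrative; the regressor columns are built numerically by evaluating the model at unit parameter vectors, which avoids hand-deriving the linear form):

```python
import numpy as np

def inertia_matrix(theta):
    """Assemble the inertia tensor from theta = [Ixx, Iyy, Izz, Ixy, Ixz, Iyz],
    using the sign convention of equations (4)."""
    Ixx, Iyy, Izz, Ixy, Ixz, Iyz = theta
    return np.array([[ Ixx, -Ixy, -Ixz],
                     [-Ixy,  Iyy, -Iyz],
                     [-Ixz, -Iyz,  Izz]])

def euler_torque(theta, w, wdot):
    """Left-hand side of equations (4): I*wdot + w x (I*w)."""
    I = inertia_matrix(theta)
    return I @ wdot + np.cross(w, I @ w)

def estimate_inertia(ws, wdots, Ms):
    """Least-squares estimate of the six inertia parameters from n samples of
    angular rate w, angular acceleration wdot, and applied moment M.
    Since the model is linear in theta, each sample's 3x6 regressor is built
    column-by-column from the unit parameter vectors."""
    rows, rhs = [], []
    for w, wdot, M in zip(ws, wdots, Ms):
        Phi = np.column_stack([euler_torque(e, w, wdot) for e in np.eye(6)])
        rows.append(Phi)
        rhs.append(M)
    theta, *_ = np.linalg.lstsq(np.vstack(rows), np.concatenate(rhs), rcond=None)
    return theta
```

Because the perturbing moment is known in magnitude and direction, all six components are identifiable from sufficiently rich motion; in practice the fit quality is limited by the angular-rate estimation error of (2) and by the numerical differentiation needed to obtain the angular accelerations.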
5 Conclusion

As a result of the research carried out, the work presents an algorithm for the operation of a data-measuring system for evaluating the inertial-mass characteristics of space debris. The concept of the data-measuring system to be developed for this evaluation is presented, along with the requirements the system must meet for an effective evaluation, and a structure for the system that satisfies these requirements has been developed. Such an evaluation will increase the efficiency of solving the problem of transporting space debris using tether systems.

Acknowledgements This study was supported by the Russian Science Foundation (Project No. 22-19-00160).
References

1. Somma, G.L., Lewis, H.G., Colombo, C.: Sensitivity analysis of launch activities in low earth orbit. Acta Astronaut. 158, 129–139 (2019). https://doi.org/10.1016/j.actaastro.2018.05.043
2. Schmitz, J., Komorowski, M., Russomano, T., Ullrich, O., Hinkelbein, J.: Sixty years of manned spaceflight—Incidents and accidents involving astronauts between launch and landing. Aerospace 9(11), 675 (2022)
3. Smith, M.G., Kelley, M., Basner, M.: A brief history of spaceflight from 1961 to 2020: an analysis of missions and astronaut demographics. Acta Astronaut. 175(11), 290–299 (2020)
4. Janovsky, R., Kassebom, M., Lübberstedt, H., et al.: End-of-life de-orbiting Strategies for Satellites, pp. 1–10. Deutscher Luft- und Raumfahrtkongress, Stuttgart (2002)
5. Liou, J.C., Johnson, N.L.: A sensitivity study of the effectiveness of active debris removal in LEO. Acta Astronaut. 64(2–3), 236–243 (2008)
6. Ledkov, A.S., Aslanov, V.S.: Active space debris removal by ion multi-beam shepherd spacecraft. Acta Astronaut. 205, 247–257 (2023)
7. Smirnov, N.N.: Space flight safety – discussing perspectives. Acta Astronaut. 126, 497–499 (2016)
8. Smirnov, N.N., Kiselev, A.B., Smirnova, M.N., Nikitin, V.F.: Space traffic hazards from orbital debris mitigation strategies. Acta Astronaut. 109, 144–152 (2015)
9. Ledkov, A., Aslanov, V.: Review of contact and contactless active space debris removal approaches. Prog. Aerosp. Sci. 134, 100858 (2022)
10. Priyant, C.M., Surekha, K.: Review of active space debris removal methods. Space Policy 47, 194–206 (2019)
11. Wang, Q., Jin, D., Rui, X.: Dynamic simulation of space debris cloud capture using the tethered net. Space Sci. Technol. (2021). https://doi.org/10.34133/2021/9810375
12. Trushlyakov, V.I., Yudintsev, V.V.: Rotary space tether system for active debris removal. J. Guid. Control. Dyn. 43(2), 354–364 (2020)
13. Aslanov, V.S., Ledkov, A.S.: Fuel costs estimation for ion beam assisted space debris removal mission with and without attitude control. Acta Astronaut. 187, 123–132 (2021)
14. Sanchez-Arriaga, G., Sanmartin, J.R., Lorenzini, E.C.: Comparison of technologies for deorbiting spacecraft from low-earth-orbit at end of mission. Acta Astronaut. 138(A7), 536–542 (2017)
15. Nock, K., Aaron, K., McKnight, D.: Removing orbital debris with less risk. J. Spacecr. Rocket. 50(2), 365–379 (2013)
16. Krestina, A.V., Tkachenko, I.S.: Efficiency assessment of the deorbiting systems for small satellite. J. Aeronaut. Astronaut. Aviat. 54(2), 227–240 (2022)
17. Bourabah, D., Field, L., Botta, E.M.: Estimation of uncooperative space debris inertial parameters after tether capture. Acta Astronaut. 202, 909–926 (2023). https://doi.org/10.1016/j.actaastro.2022.07.041
18. Chu, Z., Ma, Y., Hou, Y., Wang, F.: Inertial parameter identification using contact force information for an unknown object captured by a space manipulator. Acta Astronaut. 131 (2016)
19. Sedelnikov, A.V., Salmin, V.V.: Modeling the disturbing effect on the Aist small spacecraft based on the measurements data. Sci. Rep. 12, 1300 (2022)
20. Lapshin, V.V.: The equations of a solid body motion. IOP Conf. Ser. Mater. Sci. Eng. 1191, 012011 (2021). https://doi.org/10.1088/1757-899X/1191/1/012011
21. Carletta, S., Teofilatto, P., Farissi, M.S.: A magnetometer-only attitude determination strategy for small satellites: design of the algorithm and hardware-in-the-loop testing. Aerospace 7(1), 3 (2020)
22. Carletta, S., Teofilatto, P.: Design and numerical validation of an algorithm for the detumbling and angular rate determination of a CubeSat using only three-axis magnetometer data. Int. J. Aerosp. Eng. 2018, 1–12 (2018). https://doi.org/10.1155/2018/9768475