Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering

Editorial Board Members
Ozgur Akan, Middle East Technical University, Ankara, Türkiye
Paolo Bellavista, University of Bologna, Bologna, Italy
Jiannong Cao, Hong Kong Polytechnic University, Hong Kong, China
Geoffrey Coulson, Lancaster University, Lancaster, UK
Falko Dressler, University of Erlangen, Erlangen, Germany
Domenico Ferrari, Università Cattolica Piacenza, Piacenza, Italy
Mario Gerla, UCLA, Los Angeles, USA
Hisashi Kobayashi, Princeton University, Princeton, USA
Sergio Palazzo, University of Catania, Catania, Italy
Sartaj Sahni, University of Florida, Gainesville, USA
Xuemin Shen, University of Waterloo, Waterloo, Canada
Mircea Stan, University of Virginia, Charlottesville, USA
Xiaohua Jia, City University of Hong Kong, Kowloon, Hong Kong
Albert Y. Zomaya, University of Sydney, Sydney, Australia
532
The LNICST series publishes ICST's conferences, symposia and workshops. LNICST reports state-of-the-art results in areas related to the scope of the Institute. The types of material published include:

• Proceedings (published in time for the respective event)
• Other edited monographs (such as project reports or invited volumes)

LNICST topics span the following areas:

• General Computer Science
• E-Economy
• E-Medicine
• Knowledge Management
• Multimedia
• Operations, Management and Policy
• Social Informatics
• Systems
Bing Wang · Zuojin Hu · Xianwei Jiang · Yu-Dong Zhang Editors
Multimedia Technology and Enhanced Learning 5th EAI International Conference, ICMTEL 2023 Leicester, UK, April 28–29, 2023 Proceedings, Part I
Editors

Bing Wang, Nanjing Normal University of Special Education, Nanjing, China
Zuojin Hu, Nanjing Normal University of Special Education, Nanjing, China
Xianwei Jiang, Nanjing Normal University of Special Education, Nanjing, China
Yu-Dong Zhang, University of Leicester, Leicester, UK
ISSN 1867-8211 ISSN 1867-822X (electronic)
Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering
ISBN 978-3-031-50570-6 ISBN 978-3-031-50571-3 (eBook)
https://doi.org/10.1007/978-3-031-50571-3

© ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2024

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.

Paper in this product is recyclable.
Preface
We are delighted to introduce the proceedings of the fifth edition of the 2023 European Alliance for Innovation (EAI) International Conference on Multimedia Technology and Enhanced Learning (ICMTEL). This conference brought together researchers, developers and practitioners from around the world who are leveraging and developing multimedia technologies and enhanced learning. The theme of ICMTEL 2023 was "Human Education-Related Learning and Machine Learning-Related Technologies".

The technical program of ICMTEL 2023 consisted of 119 full papers, including 2 invited papers, in oral presentation sessions at the main conference tracks. The conference tracks were: Track 1, AI-Based Education and Learning Systems; Track 2, Medical and Healthcare; Track 3, Computer Vision and Image Processing; and Track 4, Data Mining and Machine Learning. Aside from the high-quality technical paper presentations, the technical program also featured three keynote speeches and three technical workshops. The three keynote speeches were given by Steven Li from Swansea University, UK ("Using Artificial Intelligence as a Tool to Empower Mechatronic Systems"); Suresh Chandra Satapathy from Kalinga Institute of Industrial Technology, India ("Social Group Optimization: Analysis, Modifications and Applications"); and Shuihua Wang from the University of Leicester, UK ("Multimodal Medical Data Analysis"). The workshops were organized by Xiaoyan Jiang and Xue Han from Nanjing Normal University of Special Education, China, and Yuan Xu and Bin Sun from the University of Jinan, China.

Coordination with the Steering Committee Chairs, Imrich Chlamtac, De-Shuang Huang, and Chunming Li, was essential for the success of the conference, and we sincerely appreciate their constant support and guidance. It was also a great pleasure to work with such an excellent organizing committee, and we appreciate their hard work in organizing and supporting the conference. In particular, we thank the Technical Program Committee, led by our TPC Co-chairs, Shuihua Wang and Xin Qi, which completed the peer-review process of the technical papers and put together a high-quality technical program. We are also grateful to the Conference Manager, Ivana Bujdakova, for her support, and to all the authors who submitted their papers to the ICMTEL 2023 conference and workshops.

We strongly believe that the ICMTEL conference provides a good forum for researchers, developers and practitioners to discuss all the science and technology aspects that are relevant to multimedia technology and enhanced learning. We also expect future events to be as successful and stimulating as indicated by the contributions presented in this volume.

October 2023
Bing Wang
Zuojin Hu
Xianwei Jiang
Yu-Dong Zhang
Organization
Steering Committee

Imrich Chlamtac, Bruno Kessler Professor, University of Trento, Italy
De-Shuang Huang, Tongji University, China
Chunming Li, University of Electronic Science and Technology of China (UESTC), China
Lu Liu, University of Leicester, UK
M. Tanveer, Indian Institute of Technology, Indore, India
Huansheng Ning, University of Science and Technology Beijing, China
Wenbin Dai, Shanghai Jiaotong University, China
Wei Liu, University of Sheffield, UK
Zhibo Pang, ABB Corporate Research, Sweden
Suresh Chandra Satapathy, KIIT, India
Yu-Dong Zhang, University of Leicester, UK
General Chair

Yudong Zhang, University of Leicester, UK
General Co-chairs

Ruidan Su, Shanghai Advanced Research Institute, China
Zuojin Hu, Nanjing Normal University of Special Education, China
Technical Program Committee Chairs

Shuihua Wang, University of Leicester, UK
Xin Qi, Hunan Normal University, China
Technical Program Committee Co-chairs

Bing Wang, Nanjing Normal University of Special Education, China
Yuan Xu, University of Jinan, China
Juan Manuel Górriz, Universidad de Granada, Spain
M. Tanveer, Indian Institute of Technology, Indore, India
Xianwei Jiang, Nanjing Normal University of Special Education, China
Local Chairs

Ziquan Zhu, University of Leicester, UK
Shiting Sun, University of Leicester, UK
Workshops Chair

Yuan Xu, University of Jinan, China
Publicity and Social Media Chair

Wei Wang, University of Leicester, UK
Publications Chairs

Xianwei Jiang, Nanjing Normal University of Special Education, China
Dimas Lima, Federal University of Santa Catarina, Brazil
Web Chair

Lijia Deng, University of Leicester, UK
Technical Program Committee

Abdon Atangana, University of the Free State, South Africa
Amin Taheri-Garavand, Lorestan University, Iran
Arifur Nayeem, Saidpur Government Technical School and College, Bangladesh
Arun Kumar Sangaiah, Vellore Institute of Technology, India
Carlo Cattani, University of Tuscia, Italy
Dang Thanh, Hue College of Industry, Vietnam
David Guttery, University of Leicester, UK
Debesh Jha, Chosun University, Korea
Dimas Lima, Federal University of Santa Catarina, Brazil
Frank Vanhoenshoven, University of Hasselt, Belgium
Gautam Srivastava, Brandon University, Canada
Gonzalo Napoles Ruiz, University of Hasselt, Belgium
Hari Mohan Pandey, Edge Hill University, UK
Hong Cheng, First Affiliated Hospital of Nanjing Medical University, China
Jerry Chun-Wei Lin, Western Norway University of Applied Sciences, Bergen, Norway
Juan Manuel Górriz, University of Granada, Spain
Liangxiu Han, Manchester Metropolitan University, UK
Mackenzie Brown, Perdana University, Malaysia
Mingwei Shen, Hohai University, China
Nianyin Zeng, Xiamen University, China
Contents – Part I
AI-Based Education and Learning Systems

A Mobile Application for Taking Notes Based on Cornell Technique .... 3
Hasan Demirelli, Yalçın İşler, and Yılmaz Kemal Yüce

Extraction of the 1961–2020 Long Time Scale Climate Memory Signal in Qingdao .... 20
Siyu Liu, Feng Yong, Yingjie Wu, and Haoran An

Application of Superpixel Clustering Algorithm to Hip Joint Image Segmentation Registration .... 31
Jinshun Ding, Xiaoyu Lian, Taowen Lu, Yi Gu, Dandan Guo, and Zhiying Cao

Design and Implementation of Interactive Platform for Mental Health Promotion Based on Mobile Internet .... 41
Jiufeng Ye, Gang Ye, Dongming Zhao, Wei Zhong, and Xinlei Chen

Defect Detection Method of Overhead Line Pins Based on Multi-Sensor Data Acquisition of UAV .... 50
Xiaokaiti Maihebubai

Online Monitoring Method of Municipal Water Supply and Drainage Pipeline Based on Machine Vision .... 65
Yanmei Sun and Ya Xu

Design of Intelligent Security Inspection System for Airport Passengers' Carry on Luggage Based on Machine Learning .... 82
Chun Zheng and Xiafu Pan

Robust Design of Machine Translation System Based on Convolutional Neural Network .... 100
Pei Pei and Jun Ren

Design of English and American Literature Online Learning System Based on Android .... 114
Yaping Liang and Di Qi

Design of College English Reading Inculcate Feedback Channel Under Cloud Terrace .... 128
Di Qi and Yaping Liang

Smooth Switching Control Method for Parallel and Off Grid of Distributed Photovoltaic Power Grid Based on Deep Reinforcement Learning .... 144
Xinran Liu, Wenyu Liu, Lu Liu, Haishan Zhou, Yudan Liu, and Yanfa Xu

Classifying Evaluation Method of Innovative Teachers' Teaching Ability Based on Multi Source Data Fusion .... 158
Fanghui Zhu and Shu Fang

Web Based Adaptive Integration Method of College Students' Comprehensive Quality Evaluation Data .... 174
Wenjing Liu and Haidi Yuan

A Method of Mining Abnormal Data of College Students' Physical Fitness Test Based on Deep Learning .... 190
Liyi Xie and Hui Liu

Low Resolution 3D Image Enhancement Based on Artificial Neural Network .... 206
Yingjian Kang, Lei Ma, Jianxing Yang, and Shufeng Zhuo

Adaptive Slot Allocation Method for Data Link of UAV Transmission Network .... 220
Zhijun Liu, Xin Zhang, and Mingfei Qu

Cross Layer Method of Reliable Transmission in UAV Ad Hoc Network Based on Improved Ant Colony Algorithm .... 235
Xin Zhang, Zhijun Liu, and Mingfei Qu

Leakage and Discharge Fault Detection Technology of Subway Electromechanical Equipment Based on Big Data Analysis .... 248
Wuguang Wang and Xingfei Ma

Research on Random Intrusion Depth Detection of Internet of Things Based on 3D Convolutional Neural Network .... 262
Xingfei Ma and Wuguang Wang

Intelligent Measurement of Power Frequency Induced Electric Field Strength Based on Convolutional Neural Network Feature Recognition .... 277
Ying Li, Zheng Peng, Mancheng Yi, Jianxin Liu, Sifan Yu, and Jing Liu

Research on Motion Stability Control Algorithm of Multi-axis Industrial Robot Based on Deep Reinforcement Learning .... 294
Rong Zhang and Xiaogang Wei

An Intelligent Mining Method of Distributed Data Based on Multi-agent Technology .... 308
Zhongwei Chen and Xiaofeng Li

Medical and Healthcare

Local Binary Pattern and RVFL for Covid-19 Diagnosis .... 325
Mengke Wang

Feature Selection Using Data Mining Techniques for Prognostication of Cardiovascular Diseases .... 344
Naga Venkata Jashwanth Vanami, Lohitha Rani Chintalapati, Yagnesh Challagundla, and Sachi Nandan Mohanty

Research on the Construction and Application of Smart Hospital Based on Mobile Terminal Security Aggregation Business Management Platform .... 354
Yixin Wang, Weiqing Fang, Liang Chen, and Wei Zhu

Research on Information Security Management in Hospital Informatization Construction .... 362
Zhiying Cao and Chujun Wu

Research and Thinking on the Construction of Internet Hospitals in Psychiatric Hospitals .... 370
Xinlei Chen, Dongming Zhao, Wei Zhong, and Jiufeng Ye

A Method of Indoor Space Layout for Home Stay Based on Binocular Vision SLAM .... 377
Liushi Qin and Zhengfeng Huang

Research on Modeling and Evaluation of Topology Reliability of Smart Campus Network Based on Cloud Computing .... 394
Qiangjun Liu and Ningning Wang

Intelligent Retrieval Method of Massive Music Information Resources Based on Deep Learning .... 408
Yi Liao and Lin Han

Reliability Evaluation Method of Online Japanese Teaching Software Based on Bayesian Network .... 424
Jiayi Sun, Yue Wang, and Chao Song

Intelligent Analysis Method of Multidimensional Time Series Data Based on Deep Learning .... 438
Zhongwei Chen and Xiaofeng Li

An Intelligent Teaching System Based on Mobile Terminal for the Simulation of Legal Education Scenarios .... 453
Xiaomei Yang

Security Analysis of Car Driving Identification System Based on Deep Learning .... 468
Xiaogang Wei and Rong Zhang

Construction of Intelligent Evaluation Model for Electric Power Marketing Inspection Status Based on Cloud Measurement .... 485
Yanli Zhang, Xinlei Zhang, Lisai Wang, Renkai Niu, Wei Guo, and Chuangji Zhang

Author Index .... 501
AI-Based Education and Learning Systems
A Mobile Application for Taking Notes Based on Cornell Technique

Hasan Demirelli1, Yalçın İşler2, and Yılmaz Kemal Yüce1(B)

1 Alanya Alaaddin Keykubat University, Alanya 07450, Antalya, Turkey
[email protected]
2 İzmir Katip Çelebi University, İzmir, Turkey
Abstract. Notetaking is considered by many educators to be one of the critical actions of learning. There are several note-taking methods and approaches, and various applications, whether mobile, desktop or Web-based, have been developed based on them. In this paper, a novel note-taking application based on a technique known as the Cornell Technique is presented. For the software development process, the Incremental Model was adopted. Requirement analysis included, aside from examining the principles and note-taking structure of the Cornell Technique, investigating (i) how notetaking is performed as a learning activity, (ii) its product, and (iii) the relationships between notes for the purpose of storage. Models containing sub-activities, such as reviewing notes, were identified; some were selectively adopted, and related functions such as a review alert (tickler) and collaboration on notetaking were implemented. For the purpose of storage, a tree-based scheme called a collection was modelled. User interfaces were first designed as mockups and a click-through prototype using Adobe XD. The mobile application was implemented in the Dart programming language; Google's Flutter Framework was adopted for flexibility in UI development. The application has been published in Google Play Store for users to install for free.

Keywords: Notetaking · Cornell Technique · mobile application
1 Introduction

Note taking is an efficacious information-processing approach commonly used both in daily life and in many professions [1]. In this regard, it is a routine action within productive thought processes such as learning, decision making and problem solving, as well as practicing. Due to its relationship with learning, note taking can also be regarded as an academic skill. During any period of their education, whether primary school or university, students are given little or no instruction regarding approaches or techniques of note taking. Nevertheless, notetaking is one of the most common activities performed by students. Research has shown that taking notes is a type of writing task that undergraduates perform during lectures; Brobst reported that 98% of college students take lecture notes [2].

© ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2024
Published by Springer Nature Switzerland AG 2024. All Rights Reserved
B. Wang et al. (Eds.): ICMTEL 2023, LNICST 532, pp. 3–19, 2024. https://doi.org/10.1007/978-3-031-50571-3_1
1.1 Impact of Note-Taking on Learning

From a cognitive point of view, notetaking does not simply mean writing down, as shortened text, what one listens to or reads. Learning comprises a few integral cognitive processes, such as attention, encoding, storage and retrieval. According to Di Vesta and Gray [3], notetaking serves primarily two cognitive functions, encoding and storage, since during notetaking students encode information by transcribing, selecting, and summarizing relevant information and organize it for later retrieval. In their study, Craik and Tulving [4] showed that notetaking has a significant impact on retention and recall by activating certain cognitive processes. Kiewra's research shows that notetaking significantly raises attention during a lesson compared to simply listening without taking notes [5, 6]. In another study, students reported the same result, stating that taking notes helps them remain attentive [7]. A study by Carrier et al. found that one's perceived confidence in notetaking skill is a predictor of course achievement [8]. Affirmatively, one study reported that students who take more notes during lectures are high achievers in their courses [9]. In accordance with this finding, two further studies concluded that notetaking is positively correlated with test and course performance [10, 11]. The product of notetaking, i.e., the set of notes taken, is essential and critical for review, since evidence shows that students who review notes outperform students who do not [12, 13]. Kılıçkaya and Çokal-Karadaş studied the effect of notetaking on the listening comprehension performance of students from Foreign Language Education at Middle East Technical University. They concluded that the experimental group, which was allowed to take notes, performed statistically better than the control group, which was not allowed to take notes [14].

1.2 Problems Related to Notetaking

Many studies have focused on problems related to notetaking and whether these problems cause negative consequences in learning and education []. Some of these confirmed that university students apply weak strategies and techniques during lectures and while studying, such as organizing ideas linearly and poor or incomplete notetaking [15, 16]. Another study similarly concluded that students have poor notetaking skills (e.g., organizing ideas linearly) during lectures or reading. Consequently, lecture note takers omit around 70% of critical lecture points [17, 18]. Incomplete notetaking is considered a major issue, since studies show that the number of points recorded in notes is positively correlated with academic success [11, 20–23]. In relation to students' evidenced poor notetaking skills, a set of studies reveals that students' own study notes are less complete and less effective than ones provided by the instructor [17–19, 23, 24].

1.3 Notetaking Techniques and Methods

In the literature, there exists a body of research regarding notetaking techniques and strategies. Basically, they are classified as (i) Linear Notetaking Techniques and (ii) Non-linear
Notetaking Techniques. In the literature, linear notetaking is defined as the organization of information as lists or outlines [13]. Research shows that linear organization of notes restricts learning, in particular relational learning [13]. Any notetaking approach should have validated strategies to activate the cognitive processes mentioned. The main objective of these techniques is to guide students through a standardized and effective method, or a step-by-step procedure, for processing lecture or similar material. They walk students through certain instructions and employ certain principles; some even propose their own structures and formats. These methods include, but are not limited to, the Buzan Method, Verbatim Split Method, SOAR Method, Bartush Active Method, and Cornell Method [25–30]. Among them, the Cornell Method has a long history [29].

1.4 Mobile Applications for Notetaking

Digital notetaking has recently become an alternative format of notetaking. There are many mobile applications for notetaking available in application marketplaces, and a search reveals many options offering various features and functionalities. Using the latest versions of some applications, users can insert multimedia into their notes (pictures, voice recordings, video recordings), take voice-recorded speech-to-text notes or take handwritten notes using a stylus pen. Two popular ones are Samsung Notes [31] and Google Keep [32], both of which have been downloaded more than one billion times in Google Play Store. There are others such as Microsoft OneNote [33] (more than 500 million downloads), ColorNote NotePad Notes [34] (more than 100 million downloads) and Evernote [35] (more than 100 million downloads).

In the literature, research evidence concerning the use of mobile applications for educational purposes is scarce [36–38]. Pyörälä et al. [36] investigated students' perceptions of notetaking with iPads in 2019. The study was conducted at the University of Helsinki, which has given iPads to its freshman students in the medical and dental schools since 2013. The study concluded that both medical and dental school students considered digital notetaking the most important use of mobile devices during their first two years. In addition, the authors discovered that students had developed refined digital notetaking strategies and always had their notes ready for retrieval [36]. Shen and Reilly developed a mobile application called GroupNotes that allows students to form groups and take digital notes during a lecture collaboratively [37]. In a similar effort, a mobile application called EduNotes for collaborative notetaking during lectures was introduced by Popescu et al. [38]. The authors surveyed 25 students about their experience of using the application in a lecture session. Students were positive about and welcomed the idea of taking notes and sharing them with peers. They also reported notetaking with EduNotes as relatively quick.

1.5 Aim of the Study
implemented. The next section introduces an overview of Cornell Technique, the electronic notetaking model designed to implement, the system architecture and tools that we utilized for the development. In Sect. 3, basic features through user interface of the application are presented. Finally, in the last section, possible increments (i.e., possible functionality features that are planned to be added in next versions) and possible design of a study on measuring the impact of use of our application on academic performance will be discussed.
2 Materials and Methods 2.1 The Cornell Technique The Cornell Notetaking Technique had its name after the Cornell University. It was developed by Walter Pauk in 1949. Pauk states that his sole purpose was to present his students a simple and effective notetaking technique to reach high comprehension and retention in relation to what they listen to or read regarding their studies in university. In his book titled “How to Study in College”, Walter Pauk later has extended his effort’s scope and presented Cornell Technique as part of a methodological study approach for college education [29]. In his book [29], Pauk proposes “retaining information” as the third stage of his four-staged studying and learning approach. He divides retaining information into three sub-stages and offers Cornell Notetaking System as a technique consisting of a collection of algorithms, principles, and a documentation template to use in two sub-stages, i.e., “Taking effective notes” and “Turning notes into knowledge”. In detail, the Cornell Notetaking System is established on. • Classification of notetaking depending on context of learning activity performed, i.e., event (e.g., lecture, discussion, meeting, or reading session) – How to take notes during lecture, discussion or meeting, – How to take notes while reading books (e.g., textbook), • A notetaking sheet with a specific structural format • Algorithms having steps with five relevant routines to be performed using the notetaking sheet(-s), and • Timing of these algorithms. – During event (e.g., during lecture) – After or before event (e.g., before next class begins or quiz) The Five routines/actions of CT are defined as follows; Record, Reduce, Recite, Reflect, Review [29]. The notetaking documentation format is presented in Fig. 1. Basically, it is divided into 3 partitions. In his methodology, Pauk relates each partition in format with a different combination of routines, learning context and timing.
Fig. 1. Structure of a Cornell Notetaking sheet [29].
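To make the three-partition structure concrete for readers who think in code, the following minimal Dart sketch models one Cornell sheet; the class and field names are our own illustration, not taken from the application's source.

```dart
// A minimal model of one Cornell notetaking sheet (Fig. 1).
// All names here are illustrative only.
class CornellSheet {
  String notes;   // 'R'ecord: facts and ideas captured during the event
  String cue;     // 'R'educe/'R'ecite: questions and pivotal phrases
  String summary; // 'R'eview/'R'eflect: a sentence or two distilling the page

  CornellSheet({this.notes = '', this.cue = '', this.summary = ''});

  // The Cue and Summary partitions are filled only after the event,
  // so a sheet is ready for a review session once Notes has content.
  bool get readyForReview => notes.isNotEmpty;
}
```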
The first partition, called Notes, is shown in white in Fig. 1. For using the Notes partition, considering the categorization of notetaking mentioned above (relative to data source and learning context, i.e., listening to a lecture or reading a textbook), Pauk offers two different algorithms [29], both to be carried out during the related learning activity. One of these, the algorithm for note taking during a lecture, discussion, or meeting, is shown in Fig. 2. Pauk relates the Notes partition to the 'R'ecording action. Regardless of the type of source or context of the learning activity (lecture, book reading, discussion), the Notes partition is for recording as many facts and ideas from the source activity as possible. Pauk also proposes some methods for taking meaningful notes in this partition quickly during an event, such as taking notes telegraphically [29].

Fig. 2. Pauk's algorithm for taking notes in the Notes partition during a lecture, discussion or meeting [29].

The second partition is called the Cue Column. This column is related to the 'R'eviewing, 'R'educing and 'R'eciting routines. While taking notes in the Notes partition, the Cue Column should remain empty; hence, the timing for use of this column is, in general, after the initial learning activity or event (e.g., lecture, class, discussion). When it is time for the user to review, recite and/or reduce what s/he has jotted down in the Notes column, the user writes down questions and pivotal phrases that help clarify meanings and reveal relationships by filtering out the essential points in the text. The Cue partition is thus designed for connecting key concepts and finding relationships between them by investigating the text in the Notes partition. Thereby, through a learning session consisting of three 'R's, the user fills this partition:

1. First, the user makes a 'r'eview through the Notes partition,
2. Then, the user 'r'educes the reviewed notes to essential facts and concepts and the relationships established between them,
3. Finally, the user 'r'ecites.

The last partition, called Summary, is reserved for the 'R'eview and 'R'eflect activities. Pauk refers to this partition as "the area that will be used to distill a page's worth of notes down to a sentence or two" [29]. By exploiting the Summary partition for the first time after reviewing the Notes and Cue partitions, the user synthesizes, reasons, draws conclusions and writes them in her/his own words. This partition is useful for quick reference, especially before exams such as quizzes.

The impact of the Cornell Technique has been investigated by many studies. For instance, Şahin et al. examined whether applying the Cornell Technique has an impact on understanding and retention of a dictated text [39]. They concluded that there was a statistically significant difference between the experimental group, which applied the Cornell Technique, and the control group, which applied a traditional linear notetaking technique.
2.2 Proposed Model for Electronic Notetaking

The proposed model extends the original Cornell Technique with several functionalities grounded in scientific findings.

The first functionality is collaborative notetaking. In their study of a Web-based collaborative notetaking application, Valtonen et al. reported that students accepted the idea of sharing their lecture notes and were interested in reading other students' notes [40]. Considering these findings, the proposed model enables users to form teams and take notes collaboratively. This functionality offers users much more than sharing notes and/or completing missing parts of shared/team notes: it makes it possible for users with different learning styles to come together and combine and reflect their learning strengths in notes. In their book dated 1971, Kolb, Rubin and McIntyre proposed a categorization of learning styles [41–43]. They conceptualized and described four different learning styles, i.e., Converger, Diverger, Assimilator and Accommodator [41], and also designed and validated a self-examining inventory, called the Learning Style Inventory, for individuals to discover their learning mode. Later, in 2013, Kolb and Kolb refined and extended the categorization and introduced the Learning Style Topology with nine learning styles [42]. Users with different learning styles will thus have the opportunity to collaborate on notes.

For the purpose of storing and accessing notes, the "Folder-File" model, a typical abstraction currently used in modern operating systems, is adopted. In our abstraction, notes are represented and stored as files with a specific format (technically speaking, as entries in the database). For storing logically related notes, a logical storage structure called a "Collection", equivalent to a "Folder", is designed. For each user's storage space, a root collection is created by default. Users can create and store notes as separate files under their root collection. Similarly, they can create a hierarchy of collections under the root collection (i.e., a new collection under an existing collection) to store interrelated notes.

A reminder module was integrated to support the application of the Cornell Technique's learning routines (i.e., Recording, Reciting, Reducing, Reviewing and Reflecting). The module simply allows the user to set a notification-based tickler for any note; for each tickler, the user sets a date and time. A reminder is thought to be useful since studies show that notes are valuable when they are reviewed; it has been experimentally discovered that reviewing notes is more important for learning than recording them [44]. Hence, ticklers will help users act proactively in reviewing, reducing, and reciting.

Another function in our electronic model is called Note History. This function allows users to version their notes. In other words, whenever a user changes (i.e., manipulates) and saves an existing note, that note's previous version is also saved and stored as an old version. The expected benefit of note history is that users can rephrase and observe their progress on the subject of the respective note. However, there is a limit to versioning a note: each version is stored for 30 days, and any version older than 30 days is deleted implicitly by the application.
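As an illustration of the Note History rule, the sketch below shows one way the 30-day versioning could be realized. It is a minimal in-memory Dart model under our own naming (Note, NoteVersion, save, purgeOldVersions); the actual application persists versions in Firestore rather than in memory.

```dart
// Illustrative sketch of note versioning with 30-day retention.
class NoteVersion {
  final String content;
  final DateTime archivedAt;
  NoteVersion(this.content, this.archivedAt);
}

class Note {
  String content;
  final List<NoteVersion> history = [];
  Note(this.content);

  // Saving a change archives the current content as an old version first.
  void save(String newContent) {
    if (newContent == content) return; // nothing changed
    history.add(NoteVersion(content, DateTime.now()));
    content = newContent;
  }

  // Versions older than 30 days are deleted implicitly.
  void purgeOldVersions({Duration retention = const Duration(days: 30)}) {
    final cutoff = DateTime.now().subtract(retention);
    history.removeWhere((v) => v.archivedAt.isBefore(cutoff));
  }
}
```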
2.3 Development Approach and Tools

As the software development process model, the Incremental Model [45, 46] was adopted. The major reasons behind choosing the Incremental Model are twofold: to develop a version with core functionality and have a quick release, and because we knew the requirements up-front and already had the note template defined as an interface. We analyzed and prioritized the requirements and decided on a core/essential functionality set. Next, we used Adobe XD v42 (released July 2021) to design the user interface based on the core functionality set [47]. To this purpose, we modeled the functions and prepared a medium-fidelity click-through prototype to test the product flows of the application [48, 49]. For implementation, two integrated development environments (IDEs), Android Studio v2020.3.1 and IntelliJ IDEA v2021.2.3, were used [50, 51]. In these IDEs, the system was implemented in the Dart programming language with Google's Flutter Framework, using the respective Dart plug-ins. As the database management system infrastructure, one of Google's cloud services, called Firestore, is employed; Firestore is a NoSQL type of database management system [52].

2.4 System Architecture

The system architecture is presented in Fig. 3. It is a two-tier application mimicking a three-tier architecture [53]. Basic system operation at the data tier and parts of the business/application tier are realized using dedicated Google services. For operations at the data tier, such as database operations and file hosting, Google's Firebase service is adopted, which is an instance of Google Cloud Services with advanced capabilities [54, 55].
Fig. 3. System architecture with both Web and mobile applications of Cornell Note.
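As a rough sketch of how the mobile client could bind to the Google services in Fig. 3 at startup, the snippet below uses the standard firebase_core and cloud_firestore Flutter packages; it is a generic initialization example under stated assumptions, not code extracted from Cornell Note.

```dart
import 'package:flutter/material.dart';
import 'package:firebase_core/firebase_core.dart';
import 'package:cloud_firestore/cloud_firestore.dart';

Future<void> main() async {
  WidgetsFlutterBinding.ensureInitialized();
  // Bind the client to the Firebase project (the data tier in Fig. 3).
  await Firebase.initializeApp();
  // Firestore is then reachable through a singleton instance.
  final FirebaseFirestore db = FirebaseFirestore.instance;
  debugPrint('Firestore ready for app: ${db.app.name}');
  runApp(const MaterialApp(home: Scaffold()));
}
```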
2.5 Data Model

Google's Firebase service relies on Google's Firestore cloud database service. Firestore is a NoSQL type of database management system and requires JSON notation-based data modeling. Therefore, for our system's data tier operations, the data model is designed using JSON [56]. The data model is shown in Fig. 4.

In our data model, at the root are two types of arrays (collections) of objects, User and Group (Team), defining a user or a team. A user is defined by several fields, including but not limited to userId, displayName (a field cloned from the Google OAuth API) and email address, followed by an array of objects of type Document representing the user's notes and note collections. Each Document has a type field to indicate whether the object is a note or a note collection. In addition, a parentId field keeps track of the note collection in which the object is stored. Finally, within each note of type Document, another array (collection) of objects called History is defined, in which old versions of the note are stored with fields homogeneous to those in objects of type Document. In case notes and collections of notes belong not to a single user but to a team, an object of type Group with fields such as id, user (representing the number of users in the team) and created_at is defined.
Fig. 4. Data model implemented in Firestore Cloud Database Service with 7 collections.
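Read alongside Fig. 4, a single User document in this model could look roughly like the Dart map below (Firestore stores it as a JSON-like document). The field names follow the description above, while the concrete values, and the enum summarizing the group roles, are invented for illustration.

```dart
// Illustrative shape of one User document with a nested note collection,
// a note, and its History sub-collection; all values are made up.
final Map<String, dynamic> user = {
  'userId': 'u123',
  'displayName': 'Jane Doe', // cloned from the Google OAuth API
  'email': 'jane@example.com',
  'documents': [
    {'id': 'd1', 'type': 'collection', 'parentId': 'root', 'name': 'Lectures'},
    {
      'id': 'd2',
      'type': 'note',
      'parentId': 'd1',
      'name': 'Lecture 1',
      'history': [
        {
          'archivedAt': '2023-03-01T10:00:00Z',
          'notes': '...', 'cue': '...', 'summary': '...',
        },
      ],
    },
  ],
};

// Roles carried in the userType field of a Group member (see below).
enum UserType { moderator, editor, viewer }
```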
In the definition of a group are two arrays (collections) of objects, of types User and Document. The array called User is designated to store the members of a team (Group). An object of type Group can be created by any user, who is called the Creator. The creator of a team can add members by their email addresses and, while adding a member, must also assign a role to that team member. There are three roles, represented by the field userType: Moderator, Editor and Viewer. Each role represents an authorization schema (a set of permissions) over operations within the group. Each object of type Document in the first array is either a note or a note collection. Within each group note, an array of old versions of the note is defined as History, with fields homogeneous to those in objects of type Document.

2.6 Basic Operation and User Interface

To manifest the basic operation and functionality of our application, we designed a use case diagram and used it throughout the rest of the development. The use case diagram is presented in Fig. 5. It also allows users to get an idea of what they can do with the application. As seen in Fig. 5, there are:

• 32 distinct use cases,
• 17 different relationships of types <<include>> and <<extend>> between them,
• 17 distinct associations from the user to use cases,
• 4 distinct <<invariants>> (column-wise).

The user can start using Cornell Note after logging on. The log-on operation requires the user to have an active Google account; hence, the first screen any user would face is shown in Fig. 6 (a). Logging on for the first time is considered a registration, and as part of the registration process a root note collection is created for each user. After logging on, the user views her/his root collection screen showing all notes, note collections and teams (groups) the user is a member of. By tapping on any note collection in her/his root collection, the user can view the notes and sub-collections of notes within. Included in this screen, in the bottom right corner, is a floating action button with three sub-actions to choose from: create new note, create new note collection, and create new group. In addition, a search button with a magnifier icon is available in the top right corner of this screen. By tapping this button, the user can search for notes and note collections whose names match the text entered into the search box that appears. A sample of this user interface is presented in Fig. 6 (b). From this UI, the user can view the contents of a note in the Cornell Note template simply by tapping that note; a sample of the UI showing the contents of a note is given in Fig. 6 (c).

Within the Cornell Note template UI, the three partitions are created with default dimensions. However, users can adjust the dimensions of the partitions by tapping on and dragging the bars between them. This functionality allows users to adapt the partitions' sizes to their content when and where necessary. In the top right corner of this screen are three buttons, represented by icons, for search, tickler and the action overflow menu (also known as the "more menu"), respectively. The search function enables the user to search for text within the note. The user can add a tickler for the note s/he is currently viewing through a two-step interface, the first step for the date and the second for the time. A snapshot of the UI for viewing a note in the Cornell Technique's template is shown in Fig. 6 (c).
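The scheduling arithmetic behind such a tickler is simple; the in-app Dart sketch below illustrates it with a plain Timer and hypothetical names (scheduleTickler, notify). The shipped design relies on system notifications instead, since a Timer only lives as long as the app process.

```dart
import 'dart:async';

// In-app sketch of a tickler. fireAt comes from the two-step date/time
// picker described above; notify stands in for raising a local device
// notification. A past fireAt yields a non-positive delay, in which
// case the Timer fires immediately.
Timer scheduleTickler(
    DateTime fireAt, String noteName, void Function(String) notify) {
  final delay = fireAt.difference(DateTime.now());
  return Timer(delay, () => notify('Time to review "$noteName"'));
}
```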
Fig. 5. Use-case diagram for Cornell Note.
Whenever the user double-taps on one of the partitions of the note currently viewed on the Cornell Note template UI, that partition is opened in the Editor UI. Double-tapping for viewing and editing a partition of a note is a design choice related to the Midas Touch phenomenon in interface design [57, 58]. In the Editor UI, at the top is the note's name concatenated with the respective partition's name as the title. Below the title is the editing area showing the content of the respective partition.
Fig. 6. User interface screenshots for login, root collection, template and editing.
At the bottom is a sliding bar with various functional buttons for formatting and for adding multimedia (e.g., pictures, audio, etc.) into the content of the partition currently viewed. If the user makes changes in the content of a partition of a note, the changes are saved (upon the user's confirmation) regardless of the availability of an Internet connection. In clear terms, the application saves changes in a note, or a newly created note, locally (i.e., on the mobile device) if it cannot connect to the Firebase service. Whenever the connection is established, the changes are committed to Firebase implicitly, without further interaction with the user, and the save transaction is completed.

While viewing/editing a note on the Cornell Note template UI, users can also check old versions of that note. To this purpose, the user taps the overflow action menu in the top right corner and then the History action. If one or more old versions of the note exist, they are listed in the Note History screen, each with the date on which it became an old version. The capacity of the system to cache old versions of notes is defined as the last 30 days, i.e., all versions of all notes within the last 30 days are stored. A sample of the Note History screen is shown in Fig. 6 (e).

Users can also form teams to take notes collaboratively. The application provides this functionality through the floating action button at the bottom right of the root collection screen, as shown in Fig. 6 (b). After tapping the floating action button and then the create new group option, the user views the Group Creation screen shown in Fig. 6 (f). On this screen, the user can add members to her/his new group by typing each user's e-mail address after tapping the "Add New Group Member" button. All functionalities for notes (e.g., tickler, history) are available for group notes, too.
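This offline behaviour matches what the Firestore client SDK offers out of the box. The sketch below, using the cloud_firestore package, shows how such offline-tolerant saving might be wired up; it is our assumption about the wiring, not code taken from the application, and the collection name 'documents' is hypothetical.

```dart
import 'package:cloud_firestore/cloud_firestore.dart';

// Enable the local cache so writes made offline are queued and
// committed to the server automatically once connectivity returns.
void configureOfflineCache() {
  FirebaseFirestore.instance.settings =
      const Settings(persistenceEnabled: true);
}

// A write issued while offline resolves against the local cache and
// is synced to Firebase implicitly when the connection is back.
Future<void> savePartition(String noteId, String partition, String text) {
  return FirebaseFirestore.instance
      .collection('documents') // hypothetical collection name
      .doc(noteId)
      .set({partition: text}, SetOptions(merge: true));
}
```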
3 Discussion and Future Work

The application has been available in Google Play Store since March 2022. The latest version of our app reached 1000+ downloads as of 11.03.2023, globally. The number of monthly active users (MAU) for the last two months varied between 180 (minimum) and 256 (maximum), with an average engagement time of 28 min 00 s. Figure 7 shows the activity of monthly active users (256) with respect to the language set on the user's device over the last four weeks; the graph shows MAU for the first seven languages.

Considering the nature of notetaking and the characteristics of the Cornell Note Technique, some functions will be added to give users a better notetaking experience and to increase the effectiveness and efficiency of notetaking.
Fig. 7. Google Analytics statistics for our app on 11.03.2023 for the last 4 weeks.
It is planned to add the Kolb Learning Style Inventory as a self-examining tool to the Cornell Note mobile application. Behind this decision lies the motivation to increase individuals' understanding of the process of learning from experience and to enable users to realize their unique individual approach to learning, as Kolb and Kolb phrase it in their book [42]. Thus, users will become aware of their own unique style with reference to the generalized structure of the learning process. They can also see what it is like to study with people with different learning styles and experience how that plays out for them in notetaking.

In the next version, while setting ticklers, our application will enable users to select the relevant Cornell Technique routines (i.e., the five 'R's) together with the related partition(s). Such functionality will allow users to easily remember the relevant partition(s) together with their next learning routine(s). From a user's point of view, associating partitions of a specific note with the learning routines the user plans to execute might offer more precise management of learning activities. Setting repeated or periodic ticklers will be available in the next version, too.

Another functionality planned for the next version is linking notes. Basically, this is a function identical to linking documents in the World Wide Web. To our knowledge, no other notetaking application offers such a feature. The objectives of this feature are twofold:

• connecting notes based on the conceptual relationships between them (e.g., prerequisites or the order of subjects in notes),
• connecting related concepts within different notes for the review and reduce routines.

In addition to user-set ticklers, there will be a new type of tickler, called Reduce in Cue. It is an implicit tickler that will remind the user of those notes in which only the Notes partition is used. Reduce in Cue is a reminder characteristic of the Cornell Technique, helping the user take the next necessary action, i.e., Reduce, to continue the learning process on a specific note. The user can choose to pass, to reduce and fill the Cue, or to delete the related note.

Currently, the reminder module operates with local device notifications, and there is no functionality that allows users to list the ticklers they have set.
In addition, the tickler functionality of the reminder module will be migrated from the local device to Firebase. For this purpose, the data model will be redesigned, and the Firebase Cloud Messaging API will be used.

In the current version, the system can store old versions of each note dating at most 30 days back. In the next version, users will be able to set their preferred cache length, up to a maximum of 12 months.

There are three similar mobile applications available in application marketplaces such as Google Play Store and Apple's App Store. However, those applications are inferior to our application with respect to the features and functions available and UI design. A comparison of these mobile applications to our application with respect to some functions and features is presented in Table 1 [59–61]. Our application (named Cornell Note) can be downloaded from [62].

Although many studies have investigated the impact of the Cornell Technique in different learning contexts, to our knowledge none of them involved the use of the Cornell Technique through a digital environment. We plan to investigate the impact of the Cornell Technique on academic success for different core courses in an engineering faculty of a university.

Table 1. Comparison of similar mobile notetaking applications to our application
| Feature                                  | Columns | Speech to Text CN | Cornell Notes | Our App |
|------------------------------------------|---------|-------------------|---------------|---------|
| Collaboration                            | ✖       | ✖                 | ✖             | ✔       |
| Collection                               | ✔       | ✖                 | ✖             | ✔       |
| Tickler                                  | ✖       | ✖                 | ✖             | ✔       |
| Formatting                               | ✔       | ✖                 | ✖             | ✔       |
| Multimedia                               | ✖       | ✖                 | ✖             | ✔       |
| Search Within Notes                      | ✔       | ✖                 | ✖             | ✔       |
| Search Within Collections and Note Names | ✖       | ✖                 | ✖             | ✔       |
| Template Frame Adjustment                | Partial | ✖                 | ✖             | ✔       |
| Note History                             | ✖       | ✖                 | ✖             | ✔       |
| Web                                      | ✖       | ✖                 | ✖             | ✔       |
References

1. Hartley, J.: Note taking in non-academic settings: a review. Appl. Cogn. Psychol. 16(5), 559–574 (2002)
2. Brobst, K.E.: The Process of Integrating Information from Two Sources, Lecture and Text. Dissertation (1996)
3. Di Vesta, F.J., Gray, G.S.: Listening and note taking. J. Educ. Psychol. 63(1), 8–14 (1972)
4. Craik, F.I.M., Tulving, E.: Depth of processing and the retention of words in episodic memory. J. Exp. Psychol. Gen. 104(3), 268–294 (1975). https://doi.org/10.1037/0096-3445.104.3.268
5. Kiewra, K.A.: Note taking and review: the research and its implications. J. Instructional Sci. 16, 233–249 (1987)
6. Kiewra, K.A.: Cognitive aspects of autonomous note taking: control processes, learning strategies, and prior knowledge. Educ. Psychologist 23(1), 39–56 (1988). https://doi.org/10.1207/s15326985ep2301_3
7. Van Meter, P.N., Yokoi, L., Pressley, M.: College students' theory of notetaking derived from their perceptions of notetaking. J. Educ. Psychol. 86(3), 323–338 (1994). https://doi.org/10.1037/0022-0663.86.3.323
8. Carrier, C.A., Williams, M.D., Dalgaard, B.R.: College students' perceptions of notetaking and their relationship to selected learner characteristics and course achievement. Research in Higher Education 28(3), 223–239 (1988). http://www.jstor.org/stable/40195863
9. Kiewra, K.A.: A review of note-taking: encoding-storage paradigm and beyond. Educ. Psychol. Rev. 1(2), 147–173 (1989)
10. Kiewra, K.A.: The process of review: a levels of processing approach. Contemporary Educational Psychology 8(4), 366–374 (1983). https://doi.org/10.1016/0361-476X(83)90023-1
11. Kiewra, K.A., Benton, S.L.: The relationship between information-processing ability and notetaking. Contemp. Educ. Psychol. 13, 33–44 (1988)
12. Kiewra, K.A.: Investigating notetaking and review: a depth of processing alternative. Educational Psychologist 20(1), 23–32 (1985)
13. Kiewra, K.A., DuBois, N.F., Christian, D., McShane, A., Meyerhoffer, M., Roskelley, D.: Note-taking functions and techniques. J. Educ. Psychol. 83(2), 240–245 (1991). https://doi.org/10.1037/0022-0663.83.2.240
14. Kılıçkaya, F., Çokal-Karadaş, D.: The effect of note-taking on university students' listening comprehension of lectures. Kastamonu Eğitim Dergisi 17(1), 47–56 (2009)
15. Kiewra, K.A.: Aids to lecture learning. Educational Psychologist 26(1), 37–54 (1991)
16. Pressley, M., Yokoi, L., van Meter, P., et al.: Some of the reasons why preparing for exams is so hard: what can be done to make it easier? Educ. Psychol. Rev. 9, 1–38 (1997). https://doi.org/10.1023/A:1024796622045
17. Kiewra, K.A.: Learning from a lecture: an investigation of notetaking, review and attendance at a lecture. Human Learning: J. Practical Res. Appl. 4(1), 73–77 (1985)
18. Kiewra, K.A.: Students' note-taking behaviors and the efficacy of providing the instructor's notes for review. Contemporary Educational Psychology 10(4), 378–386 (1985). https://doi.org/10.1016/0361-476X(85)90034-7
19. Baker, L., Lombardi, B.R.: Students' lecture notes and their relation to test performance. Teaching of Psychology 12(1), 28–32 (1985)
20. Kiewra, K.A.: Acquiring effective notetaking skills: an alternative to professional notetaking. J. Read. 27(1), 299–302 (1984)
21. Kiewra, K.A.: Implications for note taking based on relationships between note taking variables and achievement measures. Read. Improv. 21, 145–149 (1984)
22. Kiewra, K.A., Benton, S.L., Lewis, L.B.: Qualitative aspects of notetaking and their relationship with information-processing ability and academic achievement. J. Instr. Psychol. 14(3), 110–117 (1987)
23. Collingwood, V., Hughes, D.C.: Effects of three types of university lecture notes on student achievement. J. Educ. Psychol. 70(2), 175–179 (1978). https://doi.org/10.1037/0022-0663.70.2.175
24. Morgan, C.H., Lilley, J.D., Boreham, N.C.: Learning from lectures: the effect of varying the detail in lecture handouts on note-taking and recall. Appl. Cogn. Psychol. 2, 115–122 (1988)
25. Tee, T.K., et al.: Buzan mind mapping: an efficient technique for note-taking. World Academy of Science, Engineering and Technology, International Journal of Social, Behavioral, Educational, Economic, Business and Industrial Engineering 8, 2831 (2014)
26. Readence, J.E., Bean, T., Baldwin, R.S.: Content Area Reading: An Integrated Approach, 4th edn. Kendall-Hunt Publishing (1989)
27. Jairam, D., Kiewra, K.A.: An investigation of the SOAR study method. J. Adv. Acad. 20(4), 602–629 (2009). https://doi.org/10.1177/1932202X0902000403
28. Daher, T.A., Kiewra, K.A.: An investigation of SOAR study strategies for learning from multiple online resources. Contemporary Educational Psychology 46, 10–21 (2016). https://doi.org/10.1016/j.cedpsych.2015.12.004
29. Pauk, W., Owens, R.J.Q.: How to Study in College, 10th edn. Wadsworth Cengage Learning, Boston (2011). ISBN 978-1-4390-8446-5
30. Stacy, E.M., Cain, J.J.: Notetaking and handouts in the digital age. American J. Pharmaceutical Education 79(7) (2015)
31. Samsung Notes. https://play.google.com/store/apps/details?id=com.samsung.android.app.notes. Accessed 10 Jan 2023
32. Google Keep. https://play.google.com/store/apps/details?id=com.google.android.keep. Accessed 10 Jan 2023
33. Microsoft OneNote. https://play.google.com/store/apps/details?id=com.microsoft.office.onenote. Accessed 10 Jan 2023
34. ColorNote Notepad Notes. https://play.google.com/store/apps/details?id=com.socialnmobile.dictapps.notepad.color.note&gl=TR. Accessed 10 Jan 2023
35. Evernote. https://play.google.com/store/apps/details?id=com.evernote. Accessed 10 Jan 2023
36. Pyörälä, E., Mäenpää, S., Heinonen, L., et al.: The art of note taking with mobile devices in medical education. BMC Med. Educ. 19, 96 (2019). https://doi.org/10.1186/s12909-019-1529-7
37. Shen, H., Reilly, M.: Personalized multi-user view and content synchronization and retrieval in real-time mobile social software applications. J. Comput. Syst. Sci. 78(4), 1185–1203 (2012)
38. Popescu, E., Stefan, C., Ilie, S., Ivanović, M.: EduNotes – a mobile learning application for collaborative note-taking in lecture settings. In: Chiu, D., Marenzi, I., Nanni, U., Spaniol, M., Temperini, M. (eds.) Advances in Web-Based Learning – ICWL 2016. Lecture Notes in Computer Science, vol. 10013. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-47440-3_15
39. Şahin, A., Aydın, G., Sevim, O.: The effect of Cornell method on understanding and retention of the text dictated. Dumlupınar Üniversitesi Sosyal Bilimler Dergisi 29, 29–36 (2011)
40. Valtonen, T., Havu-Nuutinen, S., Dillon, P., Vesisenaho, M.: Facilitating collaboration in lecture-based learning through shared notes using wireless technologies. J. Comput. Assist. Learn. 27(6), 575–586 (2011)
41. Kolb, D.A., Rubin, I.M., McIntyre, J.: Organizational Psychology: An Experiential Approach, 3rd edn. Prentice Hall, Englewood Cliffs, NJ (1979)
42. Kolb, A.Y., Kolb, D.A.: The Kolb Learning Style Inventory 4.0: A Comprehensive Guide to the Theory, Psychometrics, Research on Validity and Educational Applications. Experience Based Learning Systems, Inc., Kaunakakai, HI (2013)
43. Smith, D.M., Kolb, D.A.: User's Guide for the Learning Style Inventory. McBer and Company, Boston (1986)
44. Kiewra, K.A.: Providing the instructor's notes: an effective addition to student notetaking. Educational Psychologist 20(1), 33–39 (1985). https://doi.org/10.1207/s15326985ep2001_5
45. Link, J.: Chapter 14 - The role of unit tests in the software process. In: Link, J. (ed.) Unit Testing in Java, The Morgan Kaufmann Series in Software Engineering and Programming, pp. 291–312. Morgan Kaufmann (2003). ISBN 9781558608689. https://doi.org/10.1016/B978-155860868-9/50016-X
46. Arifoğlu, A., Doğru, A.: Yazılım Mühendisliği (Yöntemler, Metodolojiler, CASE Ortamları, Günün Teknolojisi) [Software Engineering: Methods, Methodologies, CASE Environments, Today's Technology], 1st edn. SaS Publishing (2001)
47. Adobe XD. https://www.adobe.com/tr/products/xd.html. Accessed 10 Feb 2023
48. Murphy, C.: A Comprehensive Guide to Wireframing and Prototyping. https://www.smashingmagazine.com/2018/03/guide-wireframing-prototyping/. Accessed 10 Jan 2023
49. McElroy, K.: Prototyping for Designers: Developing the Best Digital and Physical Products, 1st edn. O'Reilly (2016). ISBN 978-1491954089
50. Android Studio. https://developer.android.com/studio. Accessed 10 Jan 2023
51. IntelliJ IDEA. https://www.jetbrains.com/idea/. Accessed 10 Jan 2023
52. Google Firestore. https://firebase.google.com/products/firestore. Accessed 10 Jan 2023
53. What is Three-tier Architecture? IBM. https://www.ibm.com/cloud/learn/three-tier-architecture. Accessed 10 Jan 2023
54. Google Firebase. https://firebase.google.com/. Accessed 10 Jan 2023
55. Google OAuth API. https://developers.google.com/identity/protocols/oauth2. Accessed 10 Jan 2023
56. JavaScript Object Notation (JSON). https://www.json.org/json-en.html. Accessed 10 Jan 2023
57. The Midas Touch Effect. https://uxdesign.cc/the-midas-touch-effect-the-most-unknown-phenomenon-in-ux-design-36827204edd. Accessed 10 Jan 2023
58. Haans, A., IJsselsteijn, W.A.: The virtual Midas touch: helping behavior after a mediated social touch. IEEE Trans. Haptics 2(3), 136–140 (2009). https://doi.org/10.1109/TOH.2009.20
59. Cornell Notes Mobile Application. https://play.google.com/store/apps/details?id=com.aipcsoft.cornell_notes_mobile&hl=en_US&gl=US. Accessed 11 Mar 2023
60. Columns – Cornell Notes. https://apps.apple.com/us/app/columns-cornell-notes/id1493839821?mt=12. Accessed 06 Jan 2023
61. Speech to Text Cornell Notes. https://play.google.com/store/apps/details?id=com.cornell.voice.notepad. Accessed 11 Mar 2023
62. Cornell Note Mobile Application. https://play.google.com/store/apps/details?id=com.cornell.not&gl=TR. Accessed 11 Mar 2023
Extraction of the 1961–2020 Long Time Scale Climate Memory Signal in Qingdao

Siyu Liu, Feng Yong(B), Yingjie Wu, and Haoran An

Shandong Province Meteorological Data Center, Jinan 250031, China
[email protected]
Abstract. Based on the average temperature series of the past 60 years, climate memory signals on different scales in Qingdao are extracted; autocorrelation function and fractional-operator analyses are applied to fulfill the extraction. The results show that climate signals on long time scales have a certain memory. The fractional integral operator (0I_t^q) is used to extract the climate memory signals in the temperature series. This method is expected to play a role in climate prediction for Qingdao.

Keywords: Fractional Operators · Long-term Memory Signals · Weather and Climate
1 Introduction

Although the degrees of randomness of weather-scale and climate-scale stochastic time series are different, the relationship between them can be described by introducing fractional-order derivatives and integrals [1], and climate change on long time scales has significant long-term memory, or persistence [2]. Although both weather and climate events follow the laws of hydrodynamics, they show chaos, because both are multi-scale phenomena that produce different degrees of fluctuation on different time scales [3]. As early as 1976, Hasselmann compared weather and climate to the relationship between micro-scale motion and macro-scale motion, used the Langevin equation to establish the relationship between climate random variables and weather random variables, and proposed that the first-order time derivative of a climate random variable is a consequence of the weather random variables [3]. However, recent studies have found that the simple first-order derivative is no longer a good model of the relationship between weather and climate; the relationship between the two is more likely one of fractional-order derivatives, and several studies have confirmed the memory of climate element sequences [4–7]. Since the Qingdao area borders the Yellow Sea and is influenced by the marine environment, this paper conducts an in-depth study of the climate memory characteristics of the Qingdao area [8]. In this study, the daily, monthly and annual temperature data of Qingdao city from 1961 to 2020 are used to analyze the autocorrelation and climate-memory-signal characteristics of the stochastic time series by using the autocorrelation function
and the fractional-order integration function; the research results can provide technical guidance for local climate prediction and climate change research.
2 Data and Methods

2.1 Data Sources

The daily, monthly and annual temperature data of Qingdao City from 1961 to 2020 were used in this paper; the data were obtained from the Shandong Meteorological Data Center. To ensure that the temperature data are true and valid, they were tested for temporal and spatial consistency, abnormal data were removed, and missing measurements were filled by linear interpolation. The daily, monthly and annual mean temperature anomaly data for 1961–2020 were obtained from the quality-controlled temperature data at the different time scales. Using anomaly series removes the trend disturbance of the actual observations and gives relatively smooth data.

2.2 Correlation Analysis

Autocorrelation is the degree of dependence between the instantaneous values of a random time series at two moments, and the value of the autocorrelation function expresses the degree of correlation of the data. For a stochastic time series, autocorrelation reflects the memory of the series. Let a time series be T(τ); its autocorrelation function R(τ) can be expressed as

\[
R(\tau) = \frac{\langle T(t+\tau)\,T(\tau) \rangle}{\langle T^{2}(\tau) \rangle}
\tag{1}
\]
where T(t + τ) is the value of the sequence at interval t + τ, T(τ) the value at the present moment, and τ the time interval. If the autocorrelation function of a random time series is similar to that of white noise, the series can be approximated as white noise. The value of a white-noise sequence at each moment is independent of its values at all other moments; white noise can be regarded as a special stationary random time series without any temporal memory. The autocorrelation function can therefore be used to determine whether a random time series is white noise [9].

2.3 Definition of Fractional Order Integral

In 1819, the French mathematician Lacroix used the Gamma function to express the fractional-order derivative of a non-negative power function [10]:

\[
\frac{d^{1/2} x^{\alpha}}{dx^{1/2}} = \frac{\Gamma(\alpha+1)}{\Gamma(\alpha+1/2)}\, x^{\alpha-1/2}
\tag{2}
\]
This coincides with the result calculated from the now familiar definition of the Riemann–Liouville fractional-order derivative. In addition, Abel used the fractional-order
derivative d^{1/2}f(x)/dx^{1/2} to express the solution of an integral equation in his study of the isochrone problem:

\[
f(x) = \int_{0}^{x} (x-t)^{-1/2}\, g(t)\, dt
\tag{3}
\]
In the 1830s, Liouville proposed what is now called the Liouville-type fractional-order derivative, and Riemann also gave an expression for the fractional-order derivative. Later, Sonin, Letnikov, Laurent and other mathematicians improved and perfected these ideas, finally forming the Riemann–Liouville fractional-order operator [12]. It is defined as

\[
{}_{0}I_{t}^{q}\, u = \frac{1}{\Gamma(q)} \int_{0}^{t} (t-x)^{q-1}\, u(x)\, dx
\tag{4}
\]

where 0 ≤ q ≤ 1, Γ(q) denotes the Gamma function, t denotes the current time point and x the historical time. The fractional-order integral starts from 0, the historical starting point; theoretically −∞ should denote the historical starting point, but considering the starting time of the observed data and the technical implementation, x = 0 is chosen. As the definition of the fractional-order operator shows, the value of the integer-order derivative of a function at a point depends only on the behaviour of the function near that point, whereas the value of the fractional-order derivative at that point is influenced by the function over almost the whole domain. In this sense, the integer-order derivative operator can be considered a local operator, while the fractional-order operator is a non-local operator; this is the essential difference between the two.

2.4 Climate Memory Signal Intensity

Since weather and climate element series are multi-scale series, the power spectrum S(f) in frequency space is commonly used to characterize stochastic weather and climate time series. The power spectrum of a climate random sequence £(t) is the squared modulus of its Fourier transform:

\[
S(f) = \big| \tilde{£}(f) \big|^{2} \propto f^{-\beta}
\tag{5}
\]

where \tilde{£}(f) is the Fourier transform of £(t), f is the frequency and β is the power spectrum index [12]. In analyzing the anomaly series of weather and climate stochastic time series, the average of the squared anomaly differences x(τ) at time interval τ, ⟨x(τ)²⟩, called the second-order structure function, is often used. The second-order structure function of a fractal system can be derived from the mean-square displacement formula of Brownian motion:

\[
S_{2}(\tau) = \langle x(\tau)^{2} \rangle \propto \tau^{2\alpha}
\tag{6}
\]

where α (0 < α < 1) is the Hurst index.
Using the second-order structure function and the fractional-order integral model, the relationship between the Hurst index α, the power spectrum index β and the order q of the fractional-order operator is obtained as

\[
2q = \beta = 2\alpha + 1
\tag{7}
\]
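As an illustration of Eqs. (6) and (7), the Hurst index can be estimated by fitting the second-order structure function on log-log axes. The sketch below is a minimal version of this fit; the synthetic input series, the lag choices and all variable names are illustrative assumptions, not values from the paper.

```python
import numpy as np

def hurst_from_structure_function(series, lags):
    """Fit <x(tau)^2> ~ tau^(2*alpha) in log-log space and return alpha."""
    x = np.asarray(series, dtype=float)
    s2 = [np.mean((x[tau:] - x[:-tau]) ** 2) for tau in lags]
    slope, _ = np.polyfit(np.log(lags), np.log(s2), deg=1)
    return slope / 2.0                      # the fitted slope equals 2*alpha

lags = np.unique(np.logspace(0, 2, 20).astype(int))            # lags 1..100
walk = np.cumsum(np.random.default_rng(0).normal(size=5000))   # Brownian test series
alpha = hurst_from_structure_function(walk, lags)
q = alpha + 0.5                             # per Eq. (7): 2q = 2*alpha + 1
print(f"alpha = {alpha:.2f}, q = {q:.2f}")  # alpha ~ 0.5 for Brownian motion
```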
By selecting the appropriate intensity (order q) of the climate memory signal, the variation of the climate-scale stochastic sequence £(t) can be simulated. q = 0 represents a climate random sequence with no climate memory signal, behaving as white noise (β = 0); q > 0 means that the q-order integral operator is used to simulate the variation of the climate stochastic sequence, introducing cumulative climate memory signals together with weather-scale signals.

2.5 Climate Memory Signal Calculation

The climate stochastic series is decomposed into two components, the cumulative climate memory signal M(t) and the weather-scale signal ε(t). Before extracting M(t) with the fractional-order integral operator, Eq. (4) needs to be written in discrete form:

\[
£(t) = M(t) + \varepsilon(t) = \frac{1}{\Gamma(q)} \sum_{x=0}^{t-\delta} (t-x)^{q-1}\, \varepsilon(x) + \varepsilon(t) = K(q)_{t}^{\delta} \otimes \varepsilon_{0}^{t-\delta} + \varepsilon(t)
\tag{8}
\]
where K(q)_t^δ = {k(q, t), k(q, t−δ), …, k(q, t−x), …, k(q, δ)} represents the climate memory kernel function and ε_0^{t−δ} = {ε(0), ε(δ), …, ε(u), …, ε(t−δ)} represents the weather signal at each historical time point. To extract the climate memory signal, K(q)_t^δ and ε_0^{t−δ} must therefore be determined and their convolution calculated. In extracting the climate memory signal, the memory signal strength (order q) is obtained by calculating the Hurst index α, and the climate memory kernel function is calculated at each moment as

\[
k(q, t-x) = \frac{1}{\Gamma(q)\,(t-x)^{1-q}}
\tag{9}
\]
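A minimal numerical sketch of Eqs. (8) and (9): build the kernel k(q, t−x) and accumulate the memory signal by convolving it with a given history of weather signals. Treating the time step δ as an explicit factor is an assumption of this particular discretization, and all names are illustrative.

```python
import numpy as np
from math import gamma

def memory_kernel(q, n_lags, delta=1.0):
    """k(q, t - x) = 1 / (Gamma(q) * (t - x)^(1 - q)) for lags delta, 2*delta, ..."""
    lags = delta * np.arange(1, n_lags + 1)
    return 1.0 / (gamma(q) * lags ** (1.0 - q))

def memory_signal(eps_history, q, delta=1.0):
    """Discretized Eq. (8): M(t) = sum over x of k(q, t - x) * eps(x) * delta."""
    eps = np.asarray(eps_history, dtype=float)  # eps(0), eps(delta), ..., eps(t - delta)
    k = memory_kernel(q, len(eps), delta)
    return delta * np.sum(k[::-1] * eps)        # newest eps pairs with the smallest lag

rng = np.random.default_rng(0)
print(memory_signal(rng.normal(size=200), q=0.53))  # q = 0.53 as found later for Qingdao
```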
2.6 Historical Weather Signal Calculations

According to studies of the relationship between weather and climate series, the following fractional-order relationship between weather and climate time series holds [13]:

\[
\frac{d^{q} £(t)}{dt^{q}} = \varepsilon(t)
\tag{10}
\]
The variation of the climate stochastic series can thus be regarded as the q-order integral of the weather stochastic series, and this method can express the long-tail phenomenon of the climate stochastic series more precisely [14]. Fractional-order integration models
split the climate stochastic time series into the cumulative climate memory signal M(t) and the weather-scale signal ε(t):

\[
£(t) = M(t) + \varepsilon(t)
\tag{11}
\]
The cumulative climate memory signal M(t) represents the cumulative long-tail impact of the historical weather signal, and the weather-scale signal ε(t) represents the impact of the current weather signal. Together, the cumulative climate memory signal M(t) and the weather-scale signal ε(t) make up the current climate state, on which the current weather-scale signal continues to act. If the climate memory signal M(t) can be calculated and the impact of the historical memory signal further quantified, climate prediction technology will be improved; how to extract this signal is therefore the focus of the research. If the current climate state £(t) is unknown, the cumulative climate memory signal M(t) can be expressed through the historical weather signal ε_0^{t−δ} using the fractional-order integration model:

\[
M(t) = \frac{1}{\Gamma(q)} \sum_{x=0}^{t-\delta} (t-x)^{q-1}\, \varepsilon(x)
\tag{12}
\]

where ε(0), ε(δ), …, ε(u), …, ε(t−δ) are the weather signals at each historical time point and δ is the time interval of the random time series. Given the cumulative climate memory signal M(t) and the historical random series £(0), £(δ), …, £(t−δ), the corresponding weather signals ε(0), ε(δ), …, ε(t−δ) can be further extracted. Assuming the historical starting point is t = 0 and ignoring the effect of the current moment, we have ε(0) = £(0). For the next moment t = δ, with corresponding historical time node x = 0, ε(δ) can be calculated from Eq. (8) as

\[
\varepsilon(\delta) = £(\delta) - K(q)_{\delta}^{\delta} \otimes \varepsilon_{0}^{0} = £(\delta) - k(q; \delta)\, \varepsilon(0)
\tag{13}
\]
By analogy, the weather signals ε(0), ε(δ), …, ε(t−δ) at each historical time point are obtained, where k(q, t−x) = 1/(Γ(q)(t−x)^{1−q}) and ε_0^{t−δ} = {ε(0), ε(δ), …, ε(u), …, ε(t−δ)}. k(q, t−x) is determined by the strength (order) of the climate memory signal; calculating k(q, t−x) yields an accurate estimate of the historical weather stochastic time series. The values of the climate random sequence £(0), £(δ), …, £(u), …, £(t−δ) can be inverted according to Eq. (8), and Eq. (12) then gives the climate memory signal. Although the value of the weather signal at the current moment cannot be estimated, the stochastic time series of historical moments remains important and significant for climate prediction.
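Combining Eqs. (8) and (13), the historical weather signals can be recovered point by point from an observed climate series. A sketch under the same discretization assumptions as above:

```python
import numpy as np
from math import gamma

def extract_weather_signal(climate, q, delta=1.0):
    """Invert Eq. (8) recursively: eps(t) = L(t) - sum of k(q, t - x) * eps(x) * delta."""
    L = np.asarray(climate, dtype=float)   # climate series L(0), L(delta), L(2*delta), ...
    n = len(L)
    eps = np.empty(n)
    eps[0] = L[0]                          # eps(0) = L(0): no history before t = 0
    k = 1.0 / (gamma(q) * (delta * np.arange(1, n)) ** (1.0 - q))
    for i in range(1, n):
        memory = delta * np.sum(k[:i][::-1] * eps[:i])  # cumulative memory M at step i
        eps[i] = L[i] - memory
    return eps
```

As discussed in Sect. 3.3 below, the first values recovered this way inherit the truncated-history error, so some data at the beginning of the series must be discarded.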
3 Results

3.1 Stability Test of Temperature Data

Figure 1 shows the daily, monthly and annual mean temperature anomaly series of Qingdao from 1961 to 2020. The amplitude of the daily anomaly series is around 10 °C, the amplitude of the monthly series is about 4 °C, and
the amplitude of the annual series is only around 1 °C; the daily anomaly series fluctuates relatively strongly, while the monthly and annual series fluctuate relatively little. The stationarity of the annual, monthly and daily temperature anomaly series was tested by the unit-root ADF method. The results show that the p-values of all three series are close to 0, far below the 1% significance level, so the null hypothesis is firmly rejected, and the daily, monthly and annual mean temperature anomaly series of Qingdao from 1961 to 2020 are judged to be stationary time series.
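The unit-root test itself is routine; a minimal sketch using the adfuller function from statsmodels, with a synthetic series standing in for the observed anomaly data:

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

anomaly = np.random.default_rng(1).normal(size=720)  # stand-in for 60 years of monthly anomalies
stat, pvalue, *_ = adfuller(anomaly)
print(f"ADF statistic = {stat:.3f}, p-value = {pvalue:.4f}")
# A p-value far below 0.01 rejects the unit-root null, i.e. the series is stationary.
```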
Fig. 1. Average air temperature anomaly in Qingdao (1961–2020) on different time scales: a) daily scale, b) monthly scale, c) yearly scale.
3.2 Correlation Analysis

The autocorrelation function R(τ) was calculated for the daily, monthly and annual mean temperature anomaly series of Qingdao for 1961–2020 (Fig. 2). Figure 2a shows that the curve fluctuates around 0, so the daily mean temperature can be considered a white-noise series [14] with no memory. In contrast, the autocorrelation values of the monthly (Fig. 2b) and annual (Fig. 2c) mean temperature anomaly series range from 0.2 to 0.6; both have a certain degree of autocorrelation and show a power-law behaviour [15]. This means that although the climate signal tends to decay with time, some of its characteristics do not disappear as the weather signal is smoothed out but remain in the climate signal, so that climate signals on monthly, annual or longer time scales show a certain degree of memory.
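For reference, the autocorrelation function of Eq. (1) can be computed directly; in this minimal sketch a white-noise series stands in for the observed anomalies:

```python
import numpy as np

def autocorrelation(series, max_lag):
    """Estimate R(tau) = <T(t + tau) T(t)> / <T(t)^2> for lags 1..max_lag."""
    x = np.asarray(series, dtype=float)
    x = x - x.mean()                       # work with zero-mean anomalies
    denom = np.mean(x * x)
    return np.array([np.mean(x[tau:] * x[:-tau]) / denom
                     for tau in range(1, max_lag + 1)])

r = autocorrelation(np.random.default_rng(2).normal(size=21900), max_lag=100)
print(r[:5])   # fluctuates around 0 for white noise, as in Fig. 2a
```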
Fig. 2. Autocorrelation functions of the average air temperature anomaly series in Qingdao (1961–2020) on daily (a), monthly (b), and yearly (c) scales.
3.3 Historical Weather Signals and Climate Memory Signals

Taking t = 0 as the starting point of history, with ε(0) = £(0), the weather signal for the next moment t = δ can be calculated from Eq. (13), so the weather signal at each historical moment node can be inferred point by point. Determining the starting point of history is the key to extracting historical weather signals. Although −∞ is theoretically chosen to represent the distant historical starting point, t = 0 is usually chosen in practical calculations, and the influence of earlier historical temperature data on the accuracy of the calculation cannot be ignored in theory. Since meteorological observations began only a little more than 100 years ago, there are not enough observations recording earlier climate states, yet those earlier states inevitably affect the current climate state. The 5560th to 5700th points of the daily mean temperature anomaly series in Fig. 1 are selected as the experimental object. As Fig. 3 shows, the error of the random time series is relatively large at the starting point 5560 and gradually decreases with time. Assuming that 5560 is the starting point of the observation series, the estimation of the historical weather signal starts at 5560 and the values before 5560 are ignored; since the estimation starts there, the appearance of errors is understandable. The historical observed values before 5560 are ignored in the calculation, which is the reason for the large error at the beginning. As time goes on, the influence of the ignored part becomes smaller and the error shrinks; after point 5580 the error is basically close to 0. The method of estimating the historical weather signal is therefore considered reliable, and in later simulations, owing to the influence of the unobserved time series, some data at the beginning must be discarded to ensure reliability.

The discussion above shows that the climate memory signal is objectively present in long-time-scale stochastic series and affects the later climate state, so quantifying the strength of the climate memory signal has a positive impact on climate prediction [16]. In Fig. 4 the stochastic time series is divided into three stages: stage I (1961–1970), stage II (1970–2010) and stage III (2010–2020). Stage III is used as the test stage; to exclude errors arising from inadequate historical observation data, the data of stage II are used as the experimental
Fig. 3. Verification of the extracted historical weather signal in Qingdao.
sequence to extract the historical memory signal, which ensures the reliability of the extracted historical weather signal.
Fig. 4. The annual and monthly weather signals in Qingdao (1961–2020).
The historical weather signal is extracted by inverting Eq. (8). Considering the influence of unobserved historical data, the data of stage I (1961–1970) are discarded to ensure the reliability of the calculation, so stage II (1970–2010) is selected for extracting the historical weather signal, which is completed using Eq. (13); the intensity of the climate memory signal in the Qingdao area, q = 0.53, is obtained from the calculation of the Hurst index. Stage III (2010–2020) is used as the test interval (see Fig. 5); the red curve represents the memory signal fluctuation, and the error fluctuation between the extracted memory signal and the historical weather signal can be seen in Fig. 5b. A large number of experiments
show that the memory signal extraction method works more noticeably on unstable signals such as extreme weather and other abnormal temperatures.
Fig. 5. A comparison of the extracted climate memory signal to the historical weather signal in Qingdao (2010–2020).
4 Discussions

In this paper, a fractional-order integral model is used to quantitatively extract the climate memory signal from the historical stochastic time series of the Qingdao area, and the method is used to estimate the long-term impact of the historical climate state on future climate change. If the historical stochastic time series is regarded as a memory signal characterized by long time scales, the climate memory signal can explain climate change to some extent [17–20]. As shown in Fig. 5, the red curve representing the climate memory signal accounts for a large proportion of the variation in the stochastic time series when simulating climate change. The climate memory signal M(t) determines the basis on which the stochastic time series continues to change, while the stochastic perturbation ε(t), representing short-term weather-scale change, generates the trend of the studied series. The proportion of M(t) in the climate stochastic series is determined by the strength of the climate memory signal [21–23], but not all observed climate variables show strong climate memory, which may be due to slowly responding subsystems such as the ocean. Besides indicating the climate memory signal of Qingdao city, Fig. 5 also gives the weather-scale perturbation signal: as seen in Figs. 5a and 5b, over the past 20 years the calculated weather perturbation signal is mostly negative, indicating that the weather-scale perturbation is highly likely to be negative. The experimental results confirm that the weather-scale perturbations of the temperature series over the past two decades are negative; although climate change is influenced by the historical climate memory signal, the climate conditions of Qingdao city are also influenced by factors such as the topography of Qingdao. The simulation of long-range-correlated processes by this method was developed from fractional-order Brownian motion, which describes continuous, long-range-correlated physical processes, unlike ordinary Brownian motion,
and considering that many physical phenomena in nature are long-range correlated, self-similar (fractal) features in nature can be better explained using fractional-order Brownian motion. Mandelbrot and Van Ness used fractional-order integral operators on Brownian motion to construct and model fractional-order Brownian motion. Using the fractional-order integral operator to characterize the historical stochastic time series makes it easy to divide the current climate state of the Qingdao area into two parts: the climate memory signal generated by the cumulative historical weather signal, and the weather-scale perturbation signal, which can also be regarded as an independent, identically distributed noise sequence. The fractional-order integral model can therefore be used not only as a fractional-order prediction model that simulates long-term memory signals to improve the accuracy of climate prediction, but also to understand the physical change process behind the causes of long-term memory signals.
5 Conclusion

(1) The monthly-scale and annual-scale mean temperature anomaly series of Qingdao city from 1961 to 2020 have a certain degree of autocorrelation. The climate signal tends to decay with time, but some of its characteristics do not disappear as the weather signal is smoothed; they remain in the climate signal, so that the long-time-scale climate signal presents a certain degree of memory, and the annual-scale mean temperature anomaly of the Qingdao area has stronger memory characteristics than the monthly and daily scales.

(2) By estimating the historical weather signal with this method, the weather signal at each historical moment node can be obtained from the current climate state by inverting Eq. (8). When later using this method for climate prediction, some data at the beginning must be discarded, owing to the influence of the unobserved time series, to ensure reliability.

(3) The intensity of the climate memory signal in the Qingdao region is 0.53, which can be regarded as a characteristic of the whole climate system on time scales from months to years, and can explain climate change to some extent. Although some results have been obtained by using this method to analyze the mesoscale climate characteristics of the Qingdao area, further analysis is needed of the intensity differences of climate memory signals between regions and of methods to calculate climate memory signals with higher accuracy, which is the focus of the next research.
References

1. Yuan, N., Yang, Q., Liu, J., et al.: Research progress on attribution and prediction of interdecadal climate events. China Basic Science 21(03), 36–63 (2019)
2. Guo, B., Pu, X., Huang, F.: Fractional Partial Differential Equations and their Numerical Solutions. Science Press, Beijing (2011)
3. Chen, H.Z., Wang, H.: Numerical simulation for conservative fractional diffusion equations by an expanded mixed formulation. J. Comp. Appl. Math. 296, 480–498 (2016)
4. Li, Y.S., Chen, H.Z., Wang, H.: A mixed-type Galerkin variational formulation and fast algorithms for variable-coefficient fractional diffusion equations. Math. Method Appl. Sci. 40, 5018–5034 (2017)
5. Jia, L.L., Chen, H.Z., Wang, H.: Mixed-type Galerkin variational principle and numerical simulation for a generalized nonlocal elastic model. J. Sci. Comput. 71, 660–681 (2017)
6. Yuan, Q., Chen, H.Z.: An expanded mixed finite element simulation for two-sided time-dependent fractional diffusion problem. Adv. Differ. Equ. 34 (2018)
7. Liu, S., Shi, S., Liu, S., et al.: The bridge between weather and climate: fractional derivative. Meteorological Science and Technology (2), 15–20 (2007)
8. Yuan, N., Fu, Z., Liu, S.: Research on predictability based on long-term persistence of climate variability. Chinese Meteorological Society 1 (2013)
9. Liu, S., Yuan, N., Fu, Z., et al.: Long term memory of climate change: theoretical basis and observational confirmation. Peking University Physics Centennial Anniversary Special Issue Paper 43, 1327–1331 (2013)
10. Dmowska, R., Saltzman, B.: Advances in Geophysics: Long-Range Persistence in Geophysical Time Series. Academic Press, San Diego, pp. 14–16 (1999)
11. Hasselmann, K.: Stochastic climate models, part I: theory. Tellus 28, 473–485 (1976)
12. Zwanzig, R.: Nonlinear generalized Langevin equations. J. Stat. Phys. 9(3), 215–220 (1973)
13. Lacroix, S.F.: Traité du Calcul Différentiel et du Calcul Intégral. Mme. Ve Courcier, Tome Trois, Paris
14. Abel, N.H.: Auflösung einer mechanischen Aufgabe. J. für die Reine und Angewandte Mathematik, pp. 153–157 (1826)
15. Abel, N.H.: Solution de quelques problèmes à l'aide d'intégrales définies. Oeuvres Complètes, pp. 16–18 (1881)
16. Sonin, N.Y.: On differentiation with arbitrary index. Moscow Matem Sbornik 6(1), 1–38 (1869)
17. Letnikov, A.V.: An explanation of the concepts of the theory of differentiation of arbitrary index (Russian). Moscow Matem Sbornik 6, 413–445 (1872)
18. Laurent, H.: Sur le calcul des dérivées à indices quelconques. Nouv. Annales de Mathématiques 3, 240–252 (1884)
19. Ervin, V.J., Roop, J.P.: Variational formulation for the stationary fractional advection dispersion equation. Numer. Methods Partial Differ. Equ. 22, 558–576 (2005)
20. Mandelbrot, B.B., Van Ness, J.W.: Fractional Brownian motions, fractional noises and applications. SIAM Rev. 10, 422–437 (1968)
21. Liu, S., Liang, F., Liu, S., et al.: Chaos and Fractals in Natural Science. Peking University Press, Beijing, p. 159 (2003)
22. Zhu, X., Fraedrich, K., Liu, Z., et al.: A demonstration of long-term memory and climate predictability. J. Climate (23), 5021–5029 (2010)
23. Wang, Z.: Fractional calculus: a mathematical tool for describing memory characteristics and intermediate processes. Scientific Chinese (3), 76–78 (2011)
Application of Superpixel Clustering Algorithm to Hip Joint Image Segmentation Registration

Jinshun Ding1, Xiaoyu Lian1(B), Taowen Lu1, Yi Gu2, Dandan Guo1, and Zhiying Cao3

1 Changshu Meili Hospital, Changshu 215500, Jiangsu, China
[email protected]
2 Guli People’s Hospital, Changshu 215500, Jiangsu, China 3 The Affiliated Changshu Hospital of Soochow University (Changshu No.1 People’s Hospital),
Changshu 215500, Jiangsu, China
Abstract. Hip fracture is the most common and serious type of fracture in the elderly. Traditional orthopedic disease diagnosis lacks sufficient information to assist doctors in making a diagnosis, which easily leads to missed diagnosis and misdiagnosis, delays patient treatment, and may even cause medical accidents. Introducing computer-aided diagnosis technology, this research is divided into four parts: the medical image preprocessing process for hip joint diseases, the image segmentation method based on a superpixel clustering algorithm, the image registration method based on volume feature point selection for the auxiliary diagnosis of hip joint diseases, and the auxiliary diagnosis and evaluation method for hip joint diseases based on visualization technology and the calculation of quantitative indicators. Abdominal CT images are segmented by the superpixel clustering image processing algorithm to provide auxiliary diagnosis and evaluation of hip joint diseases.

Keywords: Hip Fracture · Image Processing · Clustering Algorithm
1 Introduction

Hip fractures are the most common and serious type of fracture in the elderly. Surgical treatment of two-column acetabular fractures has proven challenging due to the complex acetabular fracture pattern and the curved surface of the acetabulum [1]. In the U.S., women are as likely to suffer hip fractures as breast, ovarian and uterine cancers combined, while men are more likely to suffer hip fractures than prostate cancer. With the intensification of population aging, the incidence of hip fractures among the global elderly is increasing at a rate of 1% to 3% per year. It is estimated that the number of hip fractures worldwide will rise to 6.26 million in 2050, more than half of them in Asia. The incidence of hip fractures in China is also increasing year by year: from 1990 to 1992 the incidence over the age of 50 was 83/100,000 for males and 80/100,000 for females, and during 2002–2006 it increased to 129/100,000 for males and
229/100,000 for females, while medical expenses have also increased sharply: it is estimated that the total medical expenses for hip fractures in 2020 were close to 100 billion yuan. Hip fracture in the elderly has therefore become one of the most important public health problems in the world [2, 3]. With the aging of China's population, the proportion of elderly people with osteoporosis has increased, leading to a year-by-year increase in the number of hip fracture patients. At present, clinical treatment of elderly patients with hip fractures is either non-surgical or surgical. However, because elderly patients have poor resistance and many underlying diseases, non-surgical treatment requires long-term bed rest, which is prone to complications such as pressure sores, pulmonary infection and deep vein thrombosis that seriously affect patients' quality of life and can even endanger their lives. Surgery is therefore the best treatment for hip fractures in the elderly. However, elderly patients often have multiple complex medical conditions and declining organ function in various systems, which increases the risk of surgery, and the incidence of postoperative complications and mortality is also extremely high [4].
2 Research Status

With the rapid development of social and economic life, the number of patients with orthopedic diseases is increasing day by day. There are many kinds of orthopedic diseases, and some have similar symptoms and may occur at the same time. Traditional orthopedic disease diagnosis lacks sufficient information to assist doctors in making a diagnosis, which easily leads to missed diagnosis and misdiagnosis, delays the treatment of patients, and may even cause medical accidents [5, 6]. According to statistics, the misdiagnosis rate of diagnosis based directly on medical images can reach 10%–30%. Guobanfa [2021] No. 18, "Opinions of the General Office of the State Council on Promoting the High-quality Development of Public Hospitals", aims to promote the high-quality development of public hospitals, better meet the people's growing needs for medical and health services, promote medical technology innovation, and promote original new technologies for disease prevention, diagnosis and treatment. This study introduces computer-aided diagnosis technology, which combines multidisciplinary knowledge from computer science, medicine, mathematics and graphics, covering digital, image and 3D-model aspects [7]. Its realization depends on medical imaging and its processing technology, such as computed tomography (CT), micro computed tomography (MicroCT) and high-power microscope (HPM) imaging. Combined with medical image processing technologies such as image segmentation, registration and visualization, it can visually present useful information in the form of comprehensive index values, 2D images and 3D models. Compared with diagnosis by experience or spatial imagination, it provides a scientific basis for diagnosis, reduces the misdiagnosis rate, and improves the efficiency of diagnosis and follow-up treatment. Therefore, the use of computer-aided diagnosis technology for orthopedic disease diagnosis has great practical significance [8, 9]. Medical image segmentation is a hot field within image segmentation, and in recent years experts and scholars have proposed many theories and methods [10]. There
are segmentation methods based on traditional algorithms such as thresholding and region growing. For example, Ilhan et al. proposed a method for segmenting brain tissue affected by cancer [11]. The method uses morphological operations, pixel subtraction and threshold-based image segmentation techniques to obtain clear images of the skull, brain and tumor. Its recognition rate is 94.28% for tumor-containing images and 100% for tumor-free images, with an overall success rate of 96%, which is better than other algorithms. Zhang et al. proposed a bidirectional region growing segmentation algorithm for medical images [12]. This algorithm solves the sensitivity of the traditional region growing algorithm to noise and to the order of pixel growth. It also introduces the concept of the neighborhood difference transformation, selecting the threshold by minimum-value optimization over the neighborhood difference transformation matrix. This method obtains satisfactory segmentation results for medical images with a lot of noise. Although these medical image segmentation methods based on traditional algorithms can obtain satisfactory results, they have limitations at the application level, such as low operating efficiency and long processing times. With the improvement of computer performance, machine learning methods are also widely used in the processing of medical images. He et al. proposed a three-level automatic AdaBoost-guided active contour model based on CT images [13–15]; the three-level active contour modeling is automatically guided by an AdaBoost voxel classifier and an AdaBoost contour classifier, achieving robust, accurate and fast 3D liver segmentation. Experimental results on three publicly available datasets show that the method achieves sufficient accuracy and significantly reduces segmentation time. Aruna et al. proposed an improved intuitionistic fuzzy clustering (IFCM) medical image segmentation algorithm, which uses a new intuitionistic fuzzy set (IFS) method to generate functions for computing non-membership values [16–18]. The method was extensively tested on standard datasets of brain, lung, liver and breast images and outperforms other IFS-based methods. Criminisi et al. proposed an algorithm for efficient automatic detection and localization of anatomical structures in 3D CT, solving the anatomical localization problem efficiently with multiclass random regression forests. Quantitative experiments on a database of 400 highly variable CT scans show that, compared with multi-atlas registration and template-based nearest-neighbor detection techniques, this method has higher segmentation accuracy and stronger robustness to the data [19–21]. Against this background, the research on the hip joint CT image segmentation method aims to solve the problems in the segmentation process, to optimize and improve it with the help of image processing technology, to obtain more accurate hip joint segmentation results and realize effective segmentation, and to provide a theoretical reference and technical support for image processing and the clinical development of orthopedic medicine.
3 Research Content

Medical imaging methods for hip joint diseases mainly include CT, MRI and X-ray. Among them, CT scanning has the advantages of high resolution and fast imaging, displays well the contrast between bony structure and surrounding soft tissue and the spatial relationship between a lesion and adjacent tissues, and is the preferred modality for bone joints with complex anatomical structures. In this paper, the superpixel clustering algorithm is applied to the research process of hip joint image segmentation and registration, auxiliary diagnosis and evaluation are carried out, and the medical image processing issues involved in image segmentation, image registration and visualization are researched and analyzed. The research content is divided into four parts: the medical image preprocessing process for hip joint diseases, the image segmentation method based on the superpixel clustering algorithm, the image registration method based on volume feature point selection for the auxiliary diagnosis of hip joint diseases, and the auxiliary diagnosis and evaluation method for hip joint diseases based on visualization technology and quantitative index calculation.

3.1 Medical Image Preprocessing

Image noise is an important factor affecting image quality. The contrast of some tissue details in a CT image is very low, and noise affects the resolution of the CT image and reduces its quality. During imaging, medical images are often disturbed by the detection equipment and environmental noise, which makes the CT values fluctuate randomly and seriously affects the image information. To improve the speed and analysis ability of computer processing of CT images, improve the quality of the acquired original CT images, remove redundant noise and improve contrast, the original CT images must be preprocessed. This paper studies hip joint CT images, which contain many irrelevant elements, including tissues and muscles; given the particularity of the research targets, image preprocessing is especially important. Preprocessing before the 3D reconstruction of abdominal CT images, by removing redundant noise and enhancing the region of interest, makes the reconstructed 3D images more accurate. For the low contrast and high noise of the original hip joint CT images, suitable preprocessing methods are chosen to perform image filtering, image sharpening, histogram equalization and other operations on the hip joint images, completing the image preprocessing work. The flow of the entire preprocessing operation is shown in Fig. 1.
Fig. 1. The overall process of hip joint CT image preprocessing
Application of Superpixel Clustering Algorithm
35
Image Filtering. Hip joint CT images contain a lot of noise, and the function of image filtering is to eliminate the noise in the original image, i.e. image denoising. Most hip joint CT images contain additive noise. Traditional filtering algorithms suppress additive noise well, and they are fast and simple. This study mainly uses the Gaussian filter algorithm for preprocessing; see Fig. 2.
Fig. 2. Comparison chart of image filtering and processing of CT image of hip joint
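A minimal OpenCV sketch of the filtering step; the file name and the kernel parameters are illustrative assumptions, not values from the paper:

```python
import cv2

ct = cv2.imread("hip_ct_slice.png", cv2.IMREAD_GRAYSCALE)  # illustrative input path
denoised = cv2.GaussianBlur(ct, ksize=(5, 5), sigmaX=1.0)  # suppress additive noise
```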
Aiming at the disadvantage that image contrast is reduced after Gaussian filtering, effective image sharpening and enhancement are performed to make the hip joint CT image clearer.

Image Sharpening. To effectively suppress the noise in the hip joint image, this paper uses a Gaussian filter for preprocessing; the image quality is effectively improved, but the contrast is reduced, which affects the edge information, so image sharpening is used to enhance the edge features of the image and improve its contrast. Next, the Laplacian sharpening algorithm is analyzed experimentally. Laplacian sharpening is a second-order differential sharpening based on the degree of change of image pixels, and it is an isotropic filter. See Fig. 3. Images (A) and (B) in Fig. 3 show the hip joint image after Laplacian sharpening. Comparing these experimental results, the Laplacian-sharpened hip joint image effectively highlights the edge information, the image outline is displayed clearly, and the internal details of the image are better preserved.

Histogram Equalization. Histogram equalization is a kind of image enhancement. Because hip joint CT images obtained from CT machines are often low in brightness and contrast, which greatly affects the later 3D reconstruction results, histogram equalization is used to enhance image quality. Histogram equalization uses the histogram correction method to redistribute the gray values of the original image by nonlinear stretching, expanding the gray value range of the image.
Fig. 3. Comparison chart of image sharpening processing of CT image of hip joint
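Continuing the sketch above, the sharpening step can be written as subtracting the Laplacian response from the filtered image, a common form of Laplacian sharpening; the kernel size is an assumption:

```python
import cv2
import numpy as np

lap = cv2.Laplacian(denoised, cv2.CV_16S, ksize=3)               # second-order derivative
sharpened = cv2.convertScaleAbs(denoised.astype(np.int16) - lap)  # image minus Laplacian
```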
In this paper, the histogram equalization preprocessing method is tested on the hip joint CT image. The specific results and the simulated gray histograms are shown in Fig. 4 below.
Fig. 4. Comparison chart of image histogram processing of CT image of hip joint
The overall experimental results are shown in Fig. 5:
Fig. 5. Process diagram of hip joint CT image preprocessing
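A sketch of the equalization step, closing the preprocessing pipeline of Fig. 1 (continuing the variables from the previous sketches):

```python
import cv2

equalized = cv2.equalizeHist(sharpened)                       # nonlinear gray-level stretch
hist = cv2.calcHist([equalized], [0], None, [256], [0, 256])  # inspect the resulting histogram
```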
3.2 Superpixel Clustering and Segmentation

After the improved SNIC algorithm is used to segment the hip joint CT image and generate superpixels, the resulting superpixel blocks need to be clustered and fused to obtain a complete segmentation result. This article therefore uses the fuzzy C-means clustering algorithm (FCM) to cluster the characteristics of the superpixels, dividing superpixels with high similarity into several categories and thereby realizing the aggregation of superpixels.

An Overview of the FCM Algorithm. Cluster analysis is a method of classifying a data set of samples according to certain rules. Dunn proposed the FCM algorithm in 1973, and it has been applied to data analysis and classification in various fields. At present, the FCM algorithm is widely used in machine learning, pattern recognition and computer vision, and has achieved good results. The main goal of the FCM algorithm is to divide the data set X into c categories; by repeatedly computing the cluster center of each category, the intra-class similarity of each type of data is maximized while the similarity between different classes is kept as small as possible. At this point the sum of the distances between each data point and its cluster center is minimal, so the objective function J_m of the similarity index is constructed and optimized to its minimum to obtain the membership matrix of each data point for each category.

Superpixel Feature Selection. The features of an image represent the relationship between pixels in a certain area, reflecting the grayscale regularity inside the image. A superpixel is characterized by the overall gray range of the pixels it contains and the similarity between those pixels. The gray mean and gray standard deviation reflect the gray range interval and gray consistency of the image; they are basic and very commonly used feature quantities for representing an image or a local area. Superpixels are clusters of several pixels, so the gray mean and standard deviation can be used as features representing superpixels.

The Result of Clustering and Segmentation. After selecting features suitable for characterizing superpixels, three eigenvalues (gray mean, standard deviation and average entropy) are extracted for each superpixel one by one to form a three-dimensional feature vector that expresses the superpixel characteristics. Using the FCM clustering algorithm to calculate the Euclidean distance between all superpixel feature vectors and the cluster centers, and clustering all feature vectors into a set number of categories, completes the clustering and merging of the superpixels.

For the Clustering of Superpixel Features. FCM mainly constructs the objective function according to the distance between the feature vector and the cluster center,
and optimizes the objective function to its minimum, so as to obtain the membership matrix of each feature vector for each category and complete the classification of each feature vector. According to the clustering results of the feature vectors, the clustering and merging of the corresponding superpixels can be completed.

Image Registration. Image registration aligns the spatial positions of images; combined with image fusion, multiple data sources can be presented on the same image to assist diagnosis. Image registration uses two objects, the floating image and the reference image, and is in fact a one-to-one mapping process between them. Its purpose is to make the pixels in the reference image correspond to the pixels in the floating image so that the overall spatial positions of the images are consistent.
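Returning to the clustering step, the following is a self-contained sketch of FCM over three-dimensional superpixel features (gray mean, gray standard deviation, average entropy). The feature matrix, the cluster count and the random test data are illustrative assumptions.

```python
import numpy as np

def fcm(features, n_clusters, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy C-means; returns hard labels and cluster centers."""
    X = np.asarray(features, dtype=float)
    rng = np.random.default_rng(seed)
    u = rng.random((len(X), n_clusters))
    u /= u.sum(axis=1, keepdims=True)             # memberships of each sample sum to 1
    for _ in range(n_iter):
        um = u ** m                                # fuzzified memberships
        centers = um.T @ X / um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=1, keepdims=True)   # standard FCM membership update
    return u.argmax(axis=1), centers

# One row per superpixel: [gray mean, gray std, average entropy]
labels, centers = fcm(np.random.default_rng(1).random((500, 3)), n_clusters=4)
```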
4 Discussion and Summary

For hip joint diseases, a superpixel clustering image processing algorithm is proposed to segment abdominal CT images and provide auxiliary diagnosis and evaluation of hip joint diseases. An image preprocessing method is proposed that handles the interference noise affecting image information: a Gaussian filtering algorithm is introduced for filtering, the Laplacian sharpening algorithm is used for sharpening, and histogram equalization is used for image enhancement. Preprocessing removes unwanted noise, enhances regions of interest and improves segmentation and registration accuracy. A superpixel aggregation algorithm is proposed to segment the image: an improved SNIC algorithm segments the image and generates superpixels, and the resulting superpixel blocks are clustered and fused to obtain a complete segmentation result, avoiding errors caused by external factors and segmenting the target area more quickly and completely. An image registration technique is proposed that aligns the spatial positions of images; combined with image fusion, multi-source information can be presented on the same image. Three-dimensional visual assessment is proposed, which can assist doctors well in diagnosis and expand the way doctors obtain diagnostic information from two dimensions to three.

Funding. 2022 Changshu Science and Technology Development Plan (Social Development) Project, Research on joint image segmentation and registration based on superpixel clustering algorithm (No. CS202243).
References

1. Kleiven, S.: Hip fracture risk functions for elderly men and women in sideways falls. J. Biomechanics 105, 109771 (2020)
2. Jazinizadeh, F., Quenneville, C.E.: Enhancing hip fracture risk prediction by statistical modeling and texture analysis on DXA images. Medical Engineering and Physics 78, 14–20 (2020)
3. T. Computer Methods in Biomechanics and Biomedical Engineering 23(9), 476–483 (2020)
4. Cordeiro, M., Caskey, S., Frank, C., Martin, S., Srivastava, A., Atkinson, T.: Hybrid triad provides fracture plane stability in a computational model of a Pauwels type III hip fracture
5. Li, G., Jia, J.: Convolutional neural network to explore the effect of the drug on postoperative POCD in elderly patients with hip fracture. J. Intelligent & Fuzzy Systems: Applications in Engineering and Technology 39(4), 4989–4997 (2020)
6. Carballido-Gamio, J., et al.: Hip fracture discrimination based on statistical multi-parametric modeling (SMPM). Annals of Biomedical Engineering 47(11), 2199–2212 (2019)
7. Zuki, A.A.M., Mat, F., Daud, R., Kamaruddin, N.S., Ibrahim, I.: A review of hip fracture analysis subjected to impact loading. IOP Conference Series: Materials Science and Engineering 670, 012026 (2019)
8. Ding, J., Xu, K., Ren, Y., Cao, Z.: Modeling and printing technology based on 3D registration algorithm of MIMICS software applied to hip fracture. In: Multimedia Technology and Enhanced Learning - 4th EAI International Conference, ICMTEL 2022, LNICST 446, 517–524 (2022)
9. Aldieri, A., Terzini, M., Audenino, A.L., Bignardi, C., Morbiducci, U.: Combining shape and intensity DXA-based statistical approaches for osteoporotic hip fracture risk assessment. Computers in Biology and Medicine 127 (2020)
10. Jazinizadeh, F., Quenneville, C.E.: 3D analysis of the proximal femur compared to 2D analysis for hip fracture risk prediction in a clinical population. Annals of Biomedical Engineering 49(4), 1222–1232 (2021)
11. Mishra, A., Srivastava, V.: Biomaterials and 3D printing techniques used in the medical field. J. Med. Eng. Technol. 45(4), 290–302 (2021)
12. Xiao, J., et al.: Large-scale 3D printing concrete technology: current status and future opportunities. Cement and Concrete Composites 122 (2021)
13. Schouten, M., Wolterink, G., Dijkshoorn, A., Kosmas, D., Stramigioli, S., Krijnen, G.: A review of extrusion-based 3D printing for the fabrication of electro- and biomechanical sensors. IEEE Sens. J. 21(11), 12900–12912 (2021)
14. Blyweert, P., Nicolas, V., Fierro, V., Celzard, A.: 3D printing of carbon-based materials: a review. Carbon 183, 449–485 (2021)
15. Shahbazi, M., Jäger, H.: Current status in the utilization of biobased polymers for 3D printing process: a systematic review of the materials, processes, and challenges. ACS Applied Bio Materials 4(1), 325–369 (2021)
16. Kamiya, T.Y., Corrêa, M., Marcell, M., Kleina, M.: Case study applying the methodology in a 3D printing process. SpringerBriefs in Applied Sciences and Technology, pp. 31–68 (2021)
17. Alwazzan, M.J., Alkhfagi, A.O., Alattar, A.M.: Image segmentation algorithm based on statistical properties. In: Research in Intelligent and Computing in Engineering, Select Proceedings of RICE 2020, Advances in Intelligent Systems and Computing (AISC 1254), pp. 333–340 (2021)
18. Prasath, V.B.S., Dang, N.H.T., Nguyen, H.H., Dvoenko, S.: Multiregion multiscale image segmentation with anisotropic diffusion. In: Pattern Recognition. ICPR International Workshops and Challenges, Proceedings, Lecture Notes in Computer Science (LNCS 12665), pp. 129–140 (2021)
19. Jun, M.: Cutting-edge 3D medical image segmentation methods in 2020: are happy families all alike? arXiv, p. 13 (2021)
20. Chen, X., Zhao, D., Zhong, W.: Auxiliary recognition of Alzheimer's disease based on Gaussian probability brain image segmentation model. Communications in Computer and Information Science CCIS 1138, 513–520 (2019)
21. Wang, Y., Ding, J., Fang, W., Cao, J.: Segmentation-assisted diagnosis of pulmonary nodule recognition based on adaptive particle swarm image algorithm. Communications in Computer and Information Science CCIS 1138, 504–512 (2019). Cyberspace Data and Intelligence, and Cyber-Living, Syndrome, and Health - International 2019 Cyberspace Congress, CyberDI and CyberLife, Proceedings
Design and Implementation of Interactive Platform for Mental Health Promotion Based on Mobile Internet

Jiufeng Ye1, Gang Ye1, Dongming Zhao1, Wei Zhong1, and Xinlei Chen1,2(B)

1 Suzhou Guangji Hospital, Suzhou 215000, Jiangsu, China
[email protected] 2 School of Information and Control Engineering, China University of Mining and Technology,
Xuzhou 221116, Jiangsu, China
Abstract. This paper focuses on the prevention and treatment of increasingly common psychological and mental diseases. The number of people suffering from psychological and mental diseases has exceeded that of cardiovascular and cerebrovascular diseases, respiratory diseases and malignant tumors, and mental health is an indispensable part of human health. The hospital strengthens the construction and dissemination of mental health knowledge by means of informatization and develops an interactive platform for mental health promotion based mainly on mobile Internet technology. This paper introduces the platform from the following aspects: the functional requirements of the APP, the core framework of the APP software, the hospital's hardware configuration, and software security testing. The construction of this platform is an effective "online virtualization" extension of Suzhou Guangji Hospital and the Suzhou mental health center. Using informatization means such as the mobile Internet, cloud platform and intelligent APP, it is committed to the prevention and treatment of mental illness, so as to protect the public's mental health.

Keywords: Cloud Platform · Software Architecture · Mental Health
1 Introduction

We are in a fast-paced period of social transformation, and the pressure of social competition is great, which makes people's psychological problems increasingly prominent [1, 2]. The role of psychology in the national economy and people's livelihood is therefore gradually being valued. At present, psychological and mental diseases have surpassed diseases such as cardiovascular and cerebrovascular diseases, respiratory diseases, and malignant tumors [3]. Every year, about 1.6 million people experience social problems caused by psychological problems, so guiding and treating mental illness is extremely urgent [4].
Strengthening the construction and dissemination of mental health knowledge by means of informationization is an important part of deepening the reform of the medical and health system and of maintaining and promoting people's physical and mental health. It is also an inevitable requirement of comprehensively promoting the rule of law, innovating social governance, and promoting social harmony and stability, and it is of great significance for building a healthy China, a country ruled by law, and a safe China [5-7]. In recent years there has been a shortage of mental health professionals in Suzhou, and the technology and capability of mental health services need to be improved. Psychological counselors and institutions have no laws or rules to follow and lack supervision; the mental health promotion and management system is imperfect; and a city mental health center had not yet been established, so the growing demand of the public for multi-level mental health services could not be met. It is urgent to establish a complete mental health promotion and service system suited to the actual situation in Suzhou [8]. Suzhou Guangji Hospital has established the Suzhou Mental Health Center and the Psychological Crisis Intervention Center and is committed to improving mental health service capacity. From 2017 to 2019, Guangji Hospital built a mental cloud hospital platform to improve the efficiency of government mental health services. In 2020, the "Suzhou Mental Health Regulations" were promulgated to promote the standardized and comprehensive development of mental health services [9, 10]. At present, the most important task is to expose the majority of patients with mental illness to the publicity and popularization of mental health knowledge and to basic treatment and rehabilitation services; Suzhou Guangji Hospital will focus on the prevention and rehabilitation of mental illness in the next stage [11, 12]. The cloud platform for mental health promotion will be built with the mobile Internet as the medium. Developing a mobile Internet interactive platform APP for mental health promotion, publicizing and popularizing mental health knowledge, and adopting comprehensive prevention and treatment measures such as drug treatment, psychological counseling, rehabilitation training, and social services will help patients participate in social life and promote their rehabilitation. The platform is designed in three versions for three types of users: public, medical, and management [13-15], covering publicity, consultation, self-test, early-warning, and management modules [16-20]; the specific functions are described in Section 2.
2 Software Functions

The interactive platform for health promotion is designed in three versions for three types of users: a public version, a medical version, and a management version. Specific functions are as follows:

Public version: publicity module; expert consultation module; non-interactive consultation module; psychological self-test module; FM radio station; encyclopedia of diseases module; drug price publicity module; service price publicity module; map service module; course learning module; early warning monitoring module for data early warning; integration module; medication reminding module; suggestion and complaint module.

Medical version: the medical service modules corresponding to the public version.

Management platform: article push management module; psychological self-test management module; alert view for individual users; statistics on user data; feedback collection module.

The psychological cloud platform is also an important part of building an Internet hospital at Suzhou Guangji Hospital. From the perspective of disease control, if you or your family members have mental health-related problems (such as anxiety, nervousness, fear, low mood, or sleep disorders), do not rush to seek medical treatment; instead, consult the professional doctor team of the platform online, free of charge, at the first opportunity. Doctors will respond as quickly as possible to protect your health and that of your family.
3 Architecture Design

The hardware configuration of this platform is based on the hospital's existing hardware network architecture, and the hardware service architecture of the APP is shown in the diagram. The main technology used in the mental health promotion platform is MVC. Based on the above requirements, the APP was developed with the following parts:

Server side: compiling interface protocol documents, setting up the server environment, designing the database, and writing the API interfaces.

APP side: the APP interface is developed from the UI design drawings. After UI development is completed, the server API is integrated, and the corresponding logic code for obtaining server interface data is written at the functional level.

Web management end: matching the business logic of the front end, the back end provides corresponding functions, for which functional logic code must also be written.

The core framework is shown in Fig. 1. Extensible modules are reserved in the architecture design, and to improve the response speed of page access we strictly follow the idea of component-based web development in the front-end design:

(a) Front end: vue, a minimalist MVVM framework, is selected as the component framework, which maximizes code reuse and achieves the purpose of data-driven
Fig. 1. Schematic diagram of the cloud platform APP core framework (the basic core framework comprises log, database, network, file, upgrade, communication, UI, and data processing modules)
views; the matching vue-router controls page jumps through page routing to achieve an APP-like switching effect, and SASS preprocessing is adopted in CSS compilation, which keeps the code hierarchy clear and iteration fast.
(b) Server: node.js and NPM are adopted. Their event-driven, asynchronous programming model is lightweight and efficient, well suited to the real-time data request service of a public APP in a dense distributed environment. The Webpack packaging tool bundles resources and code into a single js file based on the AMD/CMD specification to improve page loading efficiency.
(c) Terminal: the design follows the single-page application principle, that is, all operations are implemented on one page without jumping back and forth, which combines fast page loading with asynchronous resource loading and effectively improves the mobile user experience.
(d) Load balancing and request screening: with multiple application servers, a request is first processed by the intelligent load-balancing server and dispatched to a lightly loaded server. The server also applies a probabilistic judgment to each request, rejecting high-frequency requests from the same address or user. A waterproof wall judges user behavior to a certain extent: if a request is judged to come from a robot program (such as high-frequency password guessing), it is discarded without service.
(e) Input filtering and caching: in principle the system treats all user input as suspicious, so all input passes through the filtering module, which blocks common attacks such as SQL injection through illegal characters, buffer overflows, and cross-site attacks. When public information is accessed without user identification, most content is served directly by the content buffer server program without database calls, greatly reducing database pressure. When accessing user-related information, users must register and log
in. In principle, real-name information and mobile phone binding verification are required to ensure the validity of user information. When accessing sensitive information, secondary verification with an invoice barcode, user password, or mobile phone verification code is considered to ensure the privacy of sensitive information.
(f) Data access layer: no business module code accesses the database directly; all access goes through the data access mapping layer. A cache is set in front of the database to reduce access pressure, and if traffic grows large in the future, a NoSQL cluster can be used to implement the cache.

For the external data interface, a flow control program (request frequency control) and a queuing system ensure that the pressure on external systems does not become too high and that the software remains available. In principle, the data gateway communicates with the application server using ciphertext encrypted with a high-strength algorithm to ensure data privacy. Considering the application characteristics, the above technical means target the most efficient front-end technology combination currently available and achieve good results in page access, scalability, and code reuse.
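To illustrate the flow-control and queuing idea just described, here is a minimal token-bucket sketch in Python. This is our own illustration, not the platform's actual code; the class name, rates, and queue size are assumptions.

```python
import time
from collections import deque

class FlowController:
    """Token-bucket rate limiter with a bounded waiting queue, sketching the
    'request frequency control + queuing' idea described above."""

    def __init__(self, rate_per_sec: float, burst: int, queue_size: int):
        self.rate = rate_per_sec                # tokens added per second
        self.capacity = burst                   # maximum burst size
        self.tokens = float(burst)
        self.last = time.monotonic()
        self.queue = deque(maxlen=queue_size)   # waiting clients

    def _refill(self) -> None:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now

    def admit(self, request_id: str) -> str:
        """Return 'serve', 'queued', or 'reject' for an incoming request."""
        self._refill()
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return "serve"
        if len(self.queue) < (self.queue.maxlen or 0):
            self.queue.append(request_id)       # client waits in line
            return "queued"
        return "reject"  # protects the external system against a request storm

# Example: at most 10 requests/s with bursts of 20; up to 100 clients may wait.
ctl = FlowController(rate_per_sec=10, burst=20, queue_size=100)
print(ctl.admit("user-42"))
```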
4 System Security

With the increasing informationization of Suzhou Guangji Hospital, there are currently 20 servers with 50 TB of storage and 600 computer terminals, as well as the hospital information system (HIS), structured electronic medical record system (EMR), laboratory information system (LIS), nursing management system (NIS), ECG, PACS, and OA systems. As the applications of the network layer and software layer grow more complex, the security risks faced by the hospital increase correspondingly, and reducing security costs and security risks is an important current task. The mental health promotion platform based on the mobile Internet is therefore a key object of protection.

4.1 Authorized Data Security Gateway

The project application server obtains data through the proxy of the data gateway. The data gateway has the following characteristics:

Authorized access control, including access source control (such as restricting IP address access) and visitor authentication. The gateway is located in the hospital intranet, communicates externally only with the project application server, and is placed in the same virtual private network, which makes the gateway harder to reach directly from outside.

Data access function: a special interface (white-list interface) is used instead of a universal interface, so that external access can reach only authorized data, and access is strictly one-way, never two-way, which avoids, to the greatest extent, the risk of unauthorized data access caused by complex interface functions.

Flow control: when the APP accesses the gateway, a flow control strategy is applied, with flow buffering and client queuing when many clients access within a short time, preventing access storms from overwhelming the system.
Port control: the gateway equipment opens a management port and a service port. The management port is released only to intranet clients, and the service port is open only to the application server. All visits are recorded in a data log for future reference.

4.2 Application Server

The project's application server provides the actual services for external mobile phone users and requires the following security design features:

Authorized access control: users who access personal information must pass mobile phone authentication and complete their information with real-name authentication. A temporary token is issued when a user logs in, and the data security gateway releases the corresponding data to the application server through this temporary pass. The application server sits in the external network and communicates with the data security gateway through an encrypted channel or a virtual security network.

Caching: the server caches certain information, such as doctor scheduling information; this can absorb 80%-90% of visits and greatly reduce service pressure when a large number of users access the platform.

White-list interface: for the mobile phone access interface, a special (white-list) interface is used instead of a universal interface, so external access can reach only authorized data and all other data are restricted, preventing server vulnerabilities caused by complex interfaces as far as possible.

Access logging: logs of users logging in to the server are saved for future reference.

4.3 Firewall

An application firewall can be deployed at the boundary of the intranet to block malicious access. At the boundary of the external network, a hardware firewall is deployed in the computer room to block malicious access and brute-force attacks.

4.4 Mobile Phone Terminal

All users must complete real-name information and bind their mobile phones when registering before they can use the hospital functions. User identity is verified by password and SMS. Sensitive information exchanged between the mobile phone and the server is transmitted as ciphertext, and secondary authentication is applied when the mobile phone accesses or uploads personal information or generates fee information, ensuring the security of the system and accounts.
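To illustrate the temporary tokens mentioned in Sect. 4.2, the following is a minimal sketch of issuing and verifying an expiring HMAC-signed pass. The scheme, key, and lifetime are assumptions for illustration, not the platform's actual implementation.

```python
import hashlib
import hmac
import time

SECRET = b"server-side-secret"   # hypothetical key shared by login and data services

def issue_token(user_id: str, ttl_s: int = 900) -> str:
    """Issue a temporary pass: user id + expiry time + HMAC signature."""
    expires = str(int(time.time()) + ttl_s)
    sig = hmac.new(SECRET, f"{user_id}|{expires}".encode(), hashlib.sha256).hexdigest()
    return f"{user_id}|{expires}|{sig}"

def verify_token(token: str) -> bool:
    """Check signature and expiry before the gateway releases any data."""
    try:
        user_id, expires, sig = token.split("|")
    except ValueError:
        return False
    good = hmac.new(SECRET, f"{user_id}|{expires}".encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, good) and time.time() < int(expires)

t = issue_token("patient-001")
assert verify_token(t)   # a tampered or expired token would fail this check
```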
5 System Test

The security test of the health promotion interactive platform is one of the most important of its many tests, especially for the protection of user privacy (patient privacy), medical information, medical records, patient charge records, and patient expense accounts. It
mainly tests file access rights, the network transmission level, component access, iterative upgrades, and so on. The system protects user privacy (patient privacy) by checking whether the user name and password are stored locally and by checking whether sensitive private information exchanged with the server, such as chat messages and bank accounts, is effectively encrypted. Configuration files and system files are not to be saved on external devices; if additional information must be stored on an external device, it is checked for malicious tampering before each use. Sensitive data transmitted over the network must be encrypted, and the most important sensitive information should be transmitted over TLS or SSL. Rights protection is applied to Android system components, and third-party applications are prohibited from accessing and calling the APP's internal components; if calling components must be exposed, it is necessary to confirm whether the caller has the proper signature settings. For iterative upgrade management, the upgrade package is checked for tampering, and its integrity and legitimacy are verified to prevent it from being hijacked and misused.
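As one example of the upgrade-package check described above, a package can be verified against a published SHA-256 digest before installation. This is a hedged sketch with placeholder file names and digests, not the platform's actual test code.

```python
import hashlib
from pathlib import Path

def package_is_intact(path: Path, expected_sha256: str) -> bool:
    """Reject a hijacked or tampered upgrade package whose digest does not match."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):  # hash the file in chunks
            h.update(chunk)
    return h.hexdigest() == expected_sha256

# Usage (placeholder values): only apply the upgrade if the digest matches.
# if package_is_intact(Path("app-update.apk"), "3f5a..."):
#     apply_upgrade()
```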
6 Summary

Implementation effect of the psychological cloud platform: by the end of 2021, the cloud platform had handled 1,268 consultations, 124,000 listens to psychological self-help audio, 161,000 psychological assessments, and 1.12 million views of online psychological counseling popular-science materials and lectures, and these figures were refreshed continually during the epidemic. The mobile Internet interactive platform for mental health promotion has now been officially launched and put into use. It is an effective extension of the intelligent "online virtualization" hospital mobile platform created by Suzhou Guangji Hospital and the Suzhou Mental Health Center, and citizens can download and log in through Android and iOS systems. It offers popular-science knowledge, answers to international mental health scale questionnaires, interactive psychological consultation, and other functions. Publicizing mental health and popularizing mental health knowledge across multiple media and platforms, combined with comprehensive prevention and treatment measures such as psychological counseling, drug treatment, rehabilitation training, and social services, supports the open management of patients' participation in social life and promotes the rehabilitation of patients with mental illness. The psychological cloud platform is a medium, and the psychological assistance expert working group is its indispensable core force: through the "Psychological Cloud Hospital" mobile APP, the group provides online consultation and answers for citizens, giving professional answers to mental health problems and popularizing mental health knowledge. Suzhou Guangji Hospital and the Suzhou Mental Health Center use mobile Internet, cloud platform, intelligent APP and other informatization means to assist the expert working group in devoting itself to the prevention and treatment of mental illness and to protecting the public's mental health.
Funding. Suzhou Industrial Technology Innovation Special Project in 2017 (Basic Research on People's Livelihood Science and Technology: Medical and Health Application [Second Batch]). Project name: Study on the prediction of the efficacy of C-reactive protein (CRP) for escitalopram oxalate and venlafaxine. Project number: SYSD2017137.
References

1. Liu, H., Niu, Z., Wu, T., Du, J., Zhang, F., Bi, H.: A performance evaluation method of load balancing capability in SaaS layer of cloud platform. J. Phys.: Conf. Ser. 1856, 012065 (2021)
2. Yin, Y., Cheng, Y., Guan, Y.: Construction of seismic cloud platform based on OpenStack. J. Phys.: Conf. Ser. 1881, 032021 (2021)
3. Zhao, T., Zhibo, Y., Han, J., Zhang, M., Wang, J.: Design for spatiotemporal information cloud platform of smart city based on OSGi. J. Phys.: Conf. Ser. 1732, 012018 (2021)
4. Benoit, A., Elghazi, R., Robert, Y.: Max-stretch minimization on an edge-cloud platform. In: 2021 IEEE International Parallel and Distributed Processing Symposium (IPDPS), pp. 766-775 (2021)
5. Islam, M.S., Pourmajidi, W., Zhang, L., Steinbacher, J., Erwin, T., Miranskyy, A.: Anomaly detection in a large-scale cloud platform. In: 2021 IEEE/ACM 43rd International Conference on Software Engineering: Software Engineering in Practice (ICSE-SEIP), pp. 150-159 (2021)
6. Ma, B., Wang, T., Lin, L., Lv, X., Ma, Y.: An IoT-oriented cloud platform for intelligent management of emergency equipment. In: 2020 International Wireless Communications and Mobile Computing (IWCMC), pp. 1314-1319 (2020)
7. Zhou, J., Feng, X., Li, W.: Construction of rural primary school teachers' professional ability promotion system based on cloud platform. J. Phys.: Conf. Ser. 1881, 032034 (2021)
8. Islam, M.N., Khan, S.R., Islam, N.N., Rezwan-A-Rownok, M., Zaman, S.R., Zaman, S.R.: A mobile application for mental health care during COVID-19 pandemic: development and usability evaluation with system usability scale. In: Proceedings of the Computational Intelligence in Information Systems Conference (CIIS 2020), AISC 1321, pp. 33-42 (2021)
9. Srikanth, T.K., et al.: Leveraging technology to improve quality of mental health care in Karnataka. In: WebSci '21: Proceedings of the 13th Web Science Conference, pp. 107-114 (2021)
10. Idrees, A.R., Kraft, R., Pryss, R., Reichert, M., Baumeister, H.: Literature-based requirements analysis review of persuasive systems design for mental health applications. Procedia Computer Science 191, 143-150 (2021)
11. Chen, Y., Xu, Y.: Social support is contagious: exploring the effect of social support in online mental health communities. In: Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems (CHI EA 2021) (2021)
12. Sweeney, C., et al.: Can chatbots help support a person's mental health? Perceptions and views from mental healthcare professionals and experts. ACM Transactions on Computing for Healthcare 2(3), Article 25 (2021)
13. Zhang, X., He, Y.: Information security management based on risk assessment and analysis. In: 2020 7th International Conference on Information Science and Control Engineering (ICISCE), pp. 749-752 (2020)
14. Tariq, M.I., et al.: Prioritization of information security controls through fuzzy AHP for cloud computing networks and wireless sensor networks. Sensors 20(5), 1310 (2020)
15. Awang, N., Samy, G.N., Hassan, N.H., Maarop, N., Magalingam, P., Frank, M.: Identification of information security threats using data mining approach in campus network. In: Proceedings of the 24th Pacific Asia Conference on Information Systems (PACIS) (2020)
16. Hina, S., Dominic, P.D.D.: Information security policies' compliance: a perspective for higher education institutions. Journal of Computer Information Systems 60(3), 201-211 (2020)
17. Wang, Y.: Network information security risk assessment based on artificial intelligence. J. Phys.: Conf. Ser. 1648, 042109 (2020)
18. Wang, C., Jin, X.: The researches on public service information security in the context of big data. In: ISBDAI '20, pp. 86-92 (2020)
19. Kang, M., Hovav, A.: Benchmarking Methodology for Information Security Policy (BMISP): artifact development and evaluation. Information Systems Frontiers 22(1), 221-242 (2020)
20. Alsbou, N., Price, D., Ali, I.: IoT-based smart hospital using Cisco Packet Tracer analysis. In: 2022 IEEE International IOT, Electronics and Mechatronics Conference (IEMTRONICS) (2022)
Defect Detection Method of Overhead Line Pins Based on Multi-Sensor Data Acquisition of UAV

Xiaokaiti Maihebubai(B)

State Grid Xinjiang Electric Power Co., Ltd., Information and Communication Company, Urumqi 830001, China
[email protected]
Abstract. Common defects such as loose and missing pins on power fittings seriously affect the stable operation of the power system. In order to find and eliminate defects in time, a method for detecting pin defects in overhead lines based on multi-sensor data acquisition by UAV is proposed. UAV multi-sensors are used to collect images of overhead line pins, and grayscale, denoising, and enhancement processing are performed on the images. On this basis, a watershed-based image segmentation method is used to eliminate background interference. Defect detection is realized with the Faster-RCNN network: features are extracted through a hierarchical residual convolution module, an RPN delineates the regions of interest, and the Fast-RCNN network confirms the category and outputs the detection results. The experimental results show that under the research method, the AP values of all five types of samples are above 0.9, indicating that the method has high defect detection accuracy.
Keywords: UAV · Multi-Sensor · Data Acquisition · Pin Image · Preprocessing · Image Segmentation · Defect Detection
1 Introduction

Key components such as pins on high-voltage transmission towers are exposed to wind, rain, and sunlight for a long time, so safety hazards such as pins falling off are prone to occur, resulting in transmission line failures. Such accidents bring great economic losses to the power company and great inconvenience to residents' power supply. To ensure the safety of transmission lines, these key components on high-voltage towers must be inspected in time [1]. At present, transmission line inspection relies mainly on manual operations, which are inefficient and unsafe, and the inspection results are strongly affected by objective factors such as personnel skill, weather, and environment. Intelligent high-voltage tower inspection is therefore very necessary. To realize vision-based automatic inspection of the power transmission network, researchers have explored multiple research directions. Helicopters, UAVs, and other aerial flight platforms offer efficiency, accuracy, and safety. Using the cameras
installed on the flight platforms, a large number of images containing key information such as insulators can be obtained, and aerial platforms have thus become an important means of inspecting power transmission equipment in recent years. The State Grid Corporation of China took the lead in putting drones into high-voltage line inspection and collected a large number of pictures. However, after the drones transmit these pictures back to the ground, a large number of professionals are still required to examine them one by one, looking for defective pins on the high-voltage towers. Although this approach solves the safety problem, it has low efficiency and a high missed-detection rate, making it difficult to troubleshoot effectively and to detect potential insulator defects accurately, which greatly increases maintenance costs [2]. Therefore, how to use image processing technology to automate the entire process and achieve truly intelligent detection has become a research direction in this field. The inspection of overhead line pins is usually divided into two steps: locating the inspection object and identifying its defects. Images captured by aerial inspection platforms that contain critical components often also include cluttered backgrounds such as mountains, rivers, grasslands, and farmland, and in the actual detection environment aerial images undergo varying perspective transformations and lighting conditions. Processing these images is complex and can easily lead to false detections, so locating the inspected critical components in the image requires overcoming these disadvantages. Aiming at this, a method for detecting pin defects in overhead lines based on multi-sensor data acquisition by UAV is proposed. The pin images are collected by UAV multi-sensors and preprocessed to enhance contrast and reduce detection difficulty. The watershed algorithm is used to segment the overhead line pin images, improving detection accuracy, and the Faster-RCNN network completes the identification of overhead line pin defects.
2 UAV Multi-Sensor Capture of Pin Images

In power transmission lines, pins are widely used as fasteners to connect components and stabilize the whole structure. However, due to long-term exposure to the natural environment, bolts are prone to breakage and pins to loss, creating hidden dangers such as large-scale transmission line failures. According to the "Administrative Regulations on Operation and Maintenance of Overhead Transmission Lines" (2016 edition), a missing pin is a critical defect. Therefore, transmission lines must be inspected regularly, and line equipment checked and repaired in time, to ensure the safe and stable operation of the power grid. Traditional transmission line inspection mostly uses manual methods, which are labor-intensive and slow, and line inspection is difficult in areas with complex or even dangerous terrain. Compared with traditional manual inspection, current aerial photogrammetry technology reduces the field labor intensity to some extent and shortens the construction period. However, in the later stage, experienced auditors must still examine the large number of inspection pictures brought back by the drone, which not only consumes a lot of manpower and resources but also easily leads to issues such as false positives
and missed detections [3]. In recent years, with the continuous progress of sensors and remote sensing technology, it has become possible to use the multi-sensors of unmanned helicopters to obtain multi-source data, which provides new ideas and favorable conditions for solving the above problems. The use of airborne sensor systems for power line inspection is a high technology that has been widely applied in recent years. With it, high-resolution imagery and accurate 3D spatial information can be obtained directly in the power line corridor by flight, without a power outage. This greatly improves the efficiency of transmission line inspection, saves a large amount of field work, and reduces inspection costs. The acquired data not only provide reference for the ledger data of transmission lines but also, through professional analysis and processing (such as elevation analysis and 3D visualization management), provide multi-source data support for power grid management and maintenance. As an important means of obtaining spatial data, unmanned aerial vehicles (UAVs) have the advantages of long endurance, low flight cost, high data resolution, and flexible scheduling for low-altitude data acquisition. Such a system can achieve real-time information transmission and detection of high-risk areas; combined with conventional aerial photography, it becomes an important supplement to satellite remote sensing and conventional aerial surveying. Especially in emergency response to major natural disasters, low-altitude optical image acquisition under cloudy weather, local real-time remote sensing, and distributed daily low-altitude remote sensing monitoring, UAV aerial photography systems have advantages that neither satellite remote sensing nor conventional aerial photography can replace. Using UAVs for transmission line inspection and power grid safety evaluation leverages their low cost and high timeliness and overcomes many problems faced by existing manual methods [4]. The UAV multi-sensor data acquisition system used in this paper can acquire massive high-precision airborne laser scanning (LiDAR) point clouds, high-resolution aerial digital images, thermal infrared images, and ultraviolet images of power line corridors. The data acquisition system is mainly composed of a laser scanner, a visible light camera, an infrared thermal imager, an ultraviolet camera, and a position and orientation system (POS) consisting of a global positioning system (GPS) receiver and an inertial measurement unit (IMU) [5]. The multi-sensor capture of pin images proceeds as follows. First, the GPS timing signal and the PPS pulse are input into the time synchronization controller to synchronize it. The synchronization controller then provides time to the laser scanner, infrared thermal imager, ultraviolet camera, and visible light camera. After each sensor receives the PPS signal, its secondary counter is reset to zero, which achieves microsecond-level timing accuracy. At the same time, the synchronization controller can trigger the camera exposure and transmit the exposure time to the computer for recording.
In this way, the timestamps of the POS, visible light camera, laser scanner, infrared thermal imager, and ultraviolet camera data all become GPS time, so the multiple data sources share a common time reference. On this basis, the high-frequency (100-200 Hz) POS data are used to obtain the spatial pose information corresponding to the time
of each frame of data obtained by each sensor, achieving the unity of spatial coordinates. Unlike the traditional method of controlling each sensor's data acquisition independently, the multi-sensor synchronous control method proposed in this paper unifies the time and space references of multi-sensor data acquisition. The collected laser point cloud, optical image, infrared video, and ultraviolet image data frames are synchronous observations, which lays a data foundation for multi-source data comparison and analysis to realize fault diagnosis. The UAV multi-sensor pin image acquisition flow chart is shown in Fig. 1.
Fig. 1. UAV multi-sensor acquisition pin image flow chart
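To illustrate how the high-frequency POS data can supply a pose for each GPS-timestamped sensor frame, here is a minimal sketch that linearly interpolates 100 Hz POS records at a camera exposure time. All names are hypothetical, and linear interpolation of the attitude angles (which ignores angle wrap-around) is a simplification.

```python
import numpy as np

# Hypothetical 100 Hz POS records: GPS time (s) and pose (x, y, z, roll, pitch, yaw)
pos_time = np.arange(0.0, 10.0, 0.01)
pos_pose = np.random.rand(pos_time.size, 6)   # stand-in for real POS output

def pose_at(t: float) -> np.ndarray:
    """Interpolate each pose component at the GPS-timestamped exposure time t."""
    return np.array([np.interp(t, pos_time, pos_pose[:, k]) for k in range(6)])

exposure_t = 3.1415   # camera exposure time recorded by the sync controller
print(pose_at(exposure_t))
```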
3 Overhead Line Pin Image Preprocessing

3.1 Grayscale

A grayscale image is obtained from the original color image by setting each pixel to a value with R = G = B, where each of R, G, and B ranges from 0 to 255; in this way, only one set of data needs to be stored. Grayscale conversion is generally divided into four methods: the component method, the maximum value method, the average method, and the weighted average method [6]. The component method uses the luminance of one of the R, G, and B components of the RGB image as the gray value of the grayscale image, choosing whichever component is needed in practice. The component method is given by formula (1):

$$A_1(i, j) = R(i, j), \quad A_2(i, j) = G(i, j), \quad A_3(i, j) = B(i, j) \tag{1}$$

In the formula, $A_I(i, j)$, $I = 1, 2, 3$, denotes the gray value of the corresponding component at the coordinate point $(i, j)$.
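To make formula (1) and the other three grayscale methods concrete, here is a minimal NumPy sketch; the helper name and the BT.601 weights used for the weighted average are our own choices, not taken from the paper.

```python
import numpy as np

def to_gray(rgb: np.ndarray, method: str = "weighted") -> np.ndarray:
    """Convert an H x W x 3 RGB image (values 0-255) to grayscale."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    if method == "component":      # formula (1): pick one channel as the gray value
        return r.copy()
    if method == "max":            # maximum value method
        return rgb.max(axis=-1)
    if method == "average":        # average method
        return rgb.mean(axis=-1)
    # weighted average method (common ITU-R BT.601 luminance weights, an assumption)
    return 0.299 * r + 0.587 * g + 0.114 * b

img = np.random.randint(0, 256, (4, 4, 3), dtype=np.uint8)
gray = to_gray(img)
```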
3.2 Image Denoising

In aerial images, irregular patterns such as points and lines often arise from dust and small objects on the ground as well as from the camera itself. These patterns are called image noise and interfere with image processing, so the image must be denoised in the preprocessing stage. Using the principles of signal analysis, the planar image in the spatial domain is transformed into a frequency-domain image, in which the high-frequency band represents rapid changes in the grayscale of the original image, while the low-frequency band represents slow changes. Frequency-domain filters can therefore remove small, rapidly changing interference points and denoise the image. Because images in the frequency and spatial domains correspond one to one, this operation can also be carried out as convolutional filtering on the image matrix, in which neighboring pixels are combined with different weights to form a new image. Common filtering methods include median filtering, Gaussian filtering, and mean filtering; for UAV inspection images, this section mainly uses Gaussian filtering and median filtering. Gaussian filtering weights the gray value of each pixel in the image and then performs a normalization operation; it is mainly used to remove Gaussian noise. Its template function is shown in formula (2):

$$E(i, j) = \frac{1}{2\pi s^2} \exp\!\left(-\frac{i^2 + j^2}{2 s^2}\right) \tag{2}$$
In the formula, E(i, j) represents the Gaussian filter function and s the standard deviation. In theory, the Gaussian distribution is non-zero over its whole domain, which would require an infinitely large convolution kernel; in practice, only the part within 3 standard deviations needs to be kept, and the rest can be discarded directly. The Gaussian function has several important properties that let it play a large role in different processes, and they indicate that Gaussian smoothing is a very effective image processing method in both the spatial and frequency domains. Four aspects are of particular significance:

(1) The Gaussian filter is rotationally symmetric, so an ideal Gaussian filter behaves consistently in all directions.

(2) The Gaussian filter has a single lobe: the weight assigned to a neighboring pixel decreases monotonically with its distance from the center, so each pixel receives a different filtering weight.

(3) The Gaussian function is robust to time-frequency variations.

(4) Filters of any size can be constructed from the Gaussian function, because it is unrestricted on both sides, which allows users to expand it freely.

The smoothness of an image after Gaussian filtering is determined by the standard deviation. The output is a weighted average of pixels, with pixels closer to the center weighted more heavily; this gives better smoothness and edge preservation than the average filtering algorithm. The Gaussian filter is essentially a low-pass, smoothing filter. Gaussian filtering follows these steps:

Step 1: Move the central element of the kernel directly above the image pixel to be processed.

Step 2: Multiply the pixel values of the image by the corresponding kernel weights.

Step 3: Accumulate the products obtained in the previous step to obtain the output.
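As a concrete illustration of formula (2) and the three filtering steps above, the following NumPy sketch builds a Gaussian template truncated at 3 standard deviations and applies it by sliding-window accumulation; the function names are our own.

```python
import numpy as np

def gaussian_kernel(s: float) -> np.ndarray:
    """Build a normalized Gaussian template per formula (2), truncated at 3s."""
    r = int(np.ceil(3 * s))                      # keep only +/- 3 standard deviations
    i, j = np.mgrid[-r:r + 1, -r:r + 1]
    k = np.exp(-(i**2 + j**2) / (2 * s**2)) / (2 * np.pi * s**2)
    return k / k.sum()                           # normalize so the weights sum to 1

def gaussian_filter(img: np.ndarray, s: float) -> np.ndarray:
    """Apply steps 1-3: center the kernel on each pixel, multiply the pixel
    values by the kernel weights, and accumulate the products."""
    k = gaussian_kernel(s)
    r = k.shape[0] // 2
    padded = np.pad(img.astype(float), r, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for di in range(k.shape[0]):
        for dj in range(k.shape[1]):
            out += k[di, dj] * padded[di:di + img.shape[0], dj:dj + img.shape[1]]
    return out

noisy = np.random.rand(64, 64)                   # stand-in for a grayscale pin image
smooth = gaussian_filter(noisy, s=1.5)
```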
3.3 Image Enhancement

The purpose of image enhancement is to strengthen the information that is needed and to reduce unnecessary information as much as possible, so that the differences between objects in the image are enlarged. The enhancement in this chapter mainly highlights details in the image by adjusting the range of pixel values. Many backgrounds in inspection images are useless information, such as houses, sky, and trees, and because of lighting problems the object to be detected can appear too dim, which greatly increases the difficulty of identification. This section therefore adopts histogram equalization to widen the difference between the detected target and the background and to improve contrast [8]. When the gray values of an image are concentrated in a narrow range, they are spread out and distributed evenly over the pixel intervals by means of the histogram. The whole image then looks sharper, the contrast is higher, and some details become more obvious; this is the role of image enhancement. Histogram equalization is a correction method that transforms an image using its cumulative distribution function: it maximizes the information content (entropy) of the image by making the grayscale distribution of its various parts relatively uniform. Its expression is:

$$B = C(e) = (D - 1) \int_0^e f(v)\, dv, \quad 0 \le e \le D - 1 \tag{3}$$
In the formula, B is the gray value after histogram equalization, C is an increasing, monotonic grayscale transformation function, e is the original gray value of the pixel before enhancement, f is the probability density function of the gray values, v is the variable of integration, and D is the number of gray levels before equalization. Histogram equalization uses this grayscale conversion function to transform the original histogram into a uniformly distributed one and then corrects the original image through the equalized histogram. The method is based on probability theory and enhances the image through gray-level point operations; the conversion function is determined by the cumulative distribution function of the image, and converting through it yields a new image with an approximately uniform probability density. When the histogram of an image is evenly distributed, the amount of information contained in the image is largest and the image appears relatively clear. The histogram equalization process is as follows:

Step 1: Count the grayscale histogram of the initial image.

Step 2: Accumulate and integrate the histogram to obtain a uniform distribution of gray values in the new image, thereby obtaining the gray-value conversion relationship of the image.
Step 3: Based on the established grayscale conversion relationship, obtain the image processing result after histogram equalization.
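A minimal sketch of the three-step histogram equalization procedure, following formula (3) with D = 256 gray levels (NumPy only; the helper name is our own):

```python
import numpy as np

def equalize_hist(img: np.ndarray, levels: int = 256) -> np.ndarray:
    """Equalize an 8-bit grayscale image per formula (3): B = (D-1) * CDF(e)."""
    hist = np.bincount(img.ravel(), minlength=levels)    # step 1: gray histogram
    cdf = hist.cumsum() / img.size                       # step 2: cumulative distribution
    lut = np.round((levels - 1) * cdf).astype(np.uint8)  # gray-value conversion relation
    return lut[img]                                      # step 3: map the image through it

img = np.random.randint(80, 120, (64, 64), dtype=np.uint8)  # low-contrast example
eq = equalize_hist(img)                                     # gray values spread over 0-255
```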
4 Overhead Line Pin Image Segmentation

Images are an important medium for human information transmission and a main source for understanding the world. In work and research, often only part of an image is needed, and image segmentation technology solves the problem of how to extract the target from the image [9]. Image segmentation divides an image into several non-overlapping sub-regions such that features within the same sub-region are similar while features in different sub-regions show obvious differences. Segmentation is the most important step in the study of images: whether subsequent work such as image classification and image analysis meets the expected requirements depends largely on segmentation quality. Because segmentation is complex, it is difficult to obtain an algorithm suitable for most pictures, and there is no universal standard for evaluating segmentation quality, which brings much inconvenience to practical applications. An aerial image containing pins can be semantically divided into two parts: the foreground (pins) and the background (other objects in the frame). Considering the complexity of natural scenes, it is difficult to find a fast, high-quality traditional image processing method for semantic segmentation of the entire image. However, with large numbers of manual annotations, region segmentation algorithms can still be found, among which watershed-based image segmentation is the most effective; its process is shown in Fig. 2. The watershed segmentation algorithm has many advantages and is used in various fields, mainly because it yields single-pixel segmentation lines (watersheds) and is highly sensitive to subtle grayscale changes in the image. Object edges can therefore be accurately located, and the segmented regions are finally closed and connected.

Fig. 2. Image segmentation process of overhead line pins
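The paper does not give its watershed implementation; as an illustrative sketch under assumed preprocessing (a Sobel gradient as the watershed terrain and intensity-thresholded marker seeds), marker-based watershed segmentation is available in scikit-image:

```python
import numpy as np
from skimage.filters import sobel
from skimage.segmentation import watershed

def watershed_foreground(gray: np.ndarray) -> np.ndarray:
    """Separate pins (foreground) from background with marker-based watershed."""
    edges = sobel(gray)                       # gradient magnitude: the watershed 'terrain'
    markers = np.zeros_like(gray, dtype=int)
    markers[gray < 0.3] = 1                   # assumed dark background seeds
    markers[gray > 0.7] = 2                   # assumed bright foreground (pin) seeds
    labels = watershed(edges, markers)        # floods basins; single-pixel ridge lines
    return labels == 2                        # boolean pin mask

gray = np.random.rand(64, 64)                 # stand-in for a preprocessed pin image
mask = watershed_foreground(gray)
```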
5 Defect Identification of Overhead Line Pins

Traditional power device and fault identification methods are often manually designed. For example, the image is preprocessed with enhancement and denoising, and then features such as Haar features, invariant moments, and color spaces are combined with methods such as SVM and cascaded AdaBoost to identify electrical components such as shock absorbers and insulators and their corresponding faults. Besides requiring substantial expert technique, such methods are usually limited to specific classifications and scale poorly. In recent years, with the rapid development of deep learning, deep learning algorithms have been proposed for identifying power components and their defects. However, most of this research focuses on components such as insulators and anti-vibration hammers and on related defects such as missing insulators and self-explosion; research on pin-level
defects is rare, or even blank [10]. To address these issues, this paper uses the Faster-RCNN network to identify pin faults in overhead lines. Faster-RCNN was developed on the basis of Fast-RCNN, and the whole model is divided into three parts. The first part uses the hierarchical residual convolution module to extract features: the input is the original image sample and the output is the feature map of the image; a top-down feature fusion module with horizontal connections and upsampling operations fuses the deep and shallow feature maps output by this part. The second part is the region proposal network (RPN): its input is the fused high-dimensional features from the previous step, and its output is the location scores and category scores of the regions of interest (ROIs). The third part sends the ROI positions and feature maps obtained in the previous steps into the Fast-RCNN network to confirm the category and output the detection result.

5.1 Feature Extraction

The traditional feature extraction network mainly outputs feature maps of multiple scales through an external hierarchical output structure to obtain features of multiple scales,
such as the commonly used feature pyramid structure. The hierarchical residual convolution module instead adds a multi-level structure directly inside the convolution layer and connects it in residual form, which extracts multi-scale features at a fine-grained level while enlarging the receptive field of each layer's output, so that each position contains richer contextual information. The module divides the input features into M layers by channel, denoted $P_i$, $i = 1, 2, \ldots, M$; each layer has the same resolution as the input, and its number of channels is 1/M of the input. First, the second layer is passed through a 3 × 3 convolution to obtain that layer's output. Then, starting from the third layer, the features of each layer are added to the output of the previous layer and passed through a 3 × 3 convolution to obtain that layer's output; the output of the first layer is an identity map. Finally, the outputs of all layers are concatenated by channel and fused through a 1 × 1 convolution to obtain the final result. The output $Q_i$, $i = 1, 2, \ldots, M$, of each layer is expressed as:

$$Q_i = \begin{cases} P_i, & i = 1 \\ G(P_i), & i = 2 \\ G(P_i + Q_{i-1}), & 2 < i \le M \end{cases} \tag{4}$$

where G represents the transfer function (the 3 × 3 convolution). Each 3 × 3 convolution enlarges the receptive field, and each layer's output receives the feature information of all previous layers, so each output combines different receptive field sizes and the fused output contains multi-scale features and richer contextual information. This effectively improves the model's ability to recognize small pin targets and to distinguish pins in different parts, and finally improves detection performance. The more layers the module is divided into, the larger the output receptive field and the richer the acquired features, but model complexity increases accordingly. Here the number of layers is finally set to 4 to balance accuracy and model complexity.

5.2 Delineation of the Region of Interest

Traditional target detection methods generally slide a window over the image to generate candidate boxes and then use classifiers to classify them; this has poor real-time performance and poor results. Faster-RCNN instead uses the RPN network to generate candidate boxes. First, the RPN generates anchors for each position of the feature map; anchors consist of rectangular boxes with several fixed scales and several fixed aspect ratios. Second, these anchors are input into the RPN classifier, which determines whether each anchor contains a target: anchors containing targets are retained and the rest are eliminated. Bounding box regression is then performed on the retained anchors to output the regions of interest (ROIs).
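A minimal PyTorch sketch of the hierarchical residual convolution module of formula (4) with M = 4 groups, in the spirit of a Res2Net block; the channel count and layer sizes are illustrative assumptions, not the paper's actual configuration.

```python
import torch
import torch.nn as nn

class HierarchicalResidualConv(nn.Module):
    """Split channels into M groups; group 1 is an identity map, and each later
    group is 3x3-convolved together with the previous group's output (formula (4))."""

    def __init__(self, channels: int, m: int = 4):
        super().__init__()
        assert channels % m == 0
        self.m = m
        self.width = channels // m               # each group gets 1/M of the channels
        self.convs = nn.ModuleList(
            nn.Conv2d(self.width, self.width, 3, padding=1) for _ in range(m - 1)
        )
        self.fuse = nn.Conv2d(channels, channels, 1)   # 1x1 fusion of all groups

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        parts = torch.split(x, self.width, dim=1)      # P_1 ... P_M
        outs = [parts[0]]                              # Q_1 = P_1 (identity)
        q = self.convs[0](parts[1])                    # Q_2 = G(P_2)
        outs.append(q)
        for i in range(2, self.m):
            q = self.convs[i - 1](parts[i] + q)        # Q_i = G(P_i + Q_{i-1})
            outs.append(q)
        return self.fuse(torch.cat(outs, dim=1))       # channel concat + 1x1 conv

block = HierarchicalResidualConv(64, m=4)
y = block(torch.randn(1, 64, 56, 56))                  # output shape: (1, 64, 56, 56)
```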
5.3 Classification of Pin Defects

The low accuracy of "single-stage" detectors is attributed to the class imbalance of the candidate regions: the target to be recognized usually occupies only a small part of the whole picture, so most candidate windows generated in the early stage belong to the negative class (background) and only a very small part contain foreground objects. The dominance of the background class drowns out the effect of the candidate boxes containing recognized objects, so the training process cannot fully learn the required information. In Fast-RCNN, the region proposal network (RPN) performs a simple binary classification of candidate regions in advance, which greatly reduces the number of candidate windows belonging to the background. This mitigates the class imbalance of the candidate regions to some extent, but the added complexity slows down recognition. Taking binary classification as an example, each anchor generated by the RPN is assigned a positive or negative label, and the cross-entropy loss of the i-th anchor is shown in formula (5):

$$R(\alpha_i, \beta_i) = \begin{cases} -\log \alpha_i, & \beta_i = 1 \\ -\log(1 - \alpha_i), & \beta_i = 0 \end{cases} \tag{5}$$

In the formula, $\beta_i \in \{0, 1\}$ is the sample label of the i-th anchor: $\beta_i = 1$ indicates that the anchor has a large IoU with some ground-truth bounding box and is a positive sample, while $\beta_i = 0$ indicates a negative sample. $\alpha_i$ is the score of the i-th anchor belonging to the positive class, and $1 - \alpha_i$ is its score for the negative class. For a positive anchor, the classification loss decreases as the sample detection accuracy improves; for a negative anchor, it decreases as the positive score decreases. In order to learn more information from the candidate regions containing targets, a weight control coefficient w and a modulating factor are added to the original cross-entropy loss $R(\alpha_i, \beta_i)$, giving the focal loss:

$$\hat{R}(\alpha_i, \beta_i) = \begin{cases} -w (1 - \alpha_i)^{\lambda} \log \alpha_i, & \beta_i = 1 \\ -(1 - w)\, \alpha_i^{\lambda} \log(1 - \alpha_i), & \beta_i = 0 \end{cases} \tag{6}$$

In the formula, w is the balance factor, used to balance the contributions of positive and negative samples to the classification loss; the smaller w is, the smaller the loss weight of negative samples. λ is an adjustment factor used to adjust the loss weights of easy and difficult negative samples: when λ = 0, the focal loss reduces to the cross-entropy loss, and the larger λ is, the greater the relative loss of difficult negative samples. For the i-th anchor, when it is a positive sample, the larger $\alpha_i$ is, the smaller $\hat{R}(\alpha_i, \beta_i)$ is; that is, the loss of a positive sample decreases as $\alpha_i$ increases. When it is a negative sample, the larger $\alpha_i$ is, the larger $\hat{R}(\alpha_i, \beta_i)$ is; that is, the loss of a difficult negative sample increases as $\alpha_i$ increases. Therefore, through the weighting parameters w and λ, the focal loss effectively increases the loss weight of difficult negative samples.
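A minimal NumPy sketch of the focal loss in formula (6); averaging over all anchors of a batch corresponds to the classification loss Z introduced in formula (7) below. The values of w and λ are commonly used defaults, which the paper does not fix.

```python
import numpy as np

def focal_loss(alpha: np.ndarray, beta: np.ndarray,
               w: float = 0.25, lam: float = 2.0) -> float:
    """Formula (6): alpha = positive-class scores, beta = 0/1 anchor labels."""
    eps = 1e-7
    alpha = np.clip(alpha, eps, 1 - eps)                 # avoid log(0)
    pos = -w * (1 - alpha) ** lam * np.log(alpha)        # beta_i = 1 branch
    neg = -(1 - w) * alpha ** lam * np.log(1 - alpha)    # beta_i = 0 branch
    losses = np.where(beta == 1, pos, neg)
    return float(losses.mean())                          # batch mean, as in formula (7)

scores = np.array([0.9, 0.2, 0.7, 0.05])   # predicted positive-sample scores
labels = np.array([1, 0, 1, 0])            # anchor labels
print(focal_loss(scores, labels))
```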
Second, the RPN's anchor selection criteria are improved: all anchors of an image are retained and input into the RPN as one training batch. In Fast-RCNN, the classification loss of the improved RPN is shown in formula (7):

$$Z = \frac{1}{N} \sum_{i=1}^{N} \hat{R}(\alpha_i, \beta_i) \tag{7}$$

In the formula, N is the number of all positive and negative anchor samples of an image in each batch input to the RPN, $\beta_i$ is the label of the i-th anchor, and $\hat{R}(\alpha_i, \beta_i)$ is the focal loss of the i-th anchor.
6 Method Application Testing

6.1 Experimental Dataset

To verify the effectiveness of the method for identifying pin defects in overhead transmission lines based on UAV multi-sensor data, the original dataset obtained during UAV inspection of transmission lines is used; some of the overhead line pin images are shown in Fig. 3.
Fig. 3. Schematic diagram of overhead line pins: (a) pin defect 1; (b) pin defect 2; (c) pin defect 3; (d) pin defect 4
The original images contain five types of targets: no defect, missing pin, pin prolapse, missing gasket, and pin deformation. They are unevenly distributed in the
image, and one image may contain multiple targets of multiple types. In the original dataset, the numbers of samples with no defect, missing pin, pin prolapse, and missing gasket are relatively large, while the number of samples with pin deformation is relatively small. Therefore, artificial samples are generated according to the number of samples of each target type to balance the sample numbers across types. The specific numbers of samples are shown in Table 1.

Table 1. Experimental dataset distribution

Type             Training samples   Test samples
No defect        235                241
Lack of pin      362                354
Pin out          287                241
Lack of gasket   300                292
Pin deformation  252                238
6.2 Image Feature Extraction Results

Taking part of the overhead line pin images as an example, the image features extracted by the hierarchical residual convolution module are shown in Table 2.
Table 2. Image feature extraction results
Image | Characteristic scale 1 | Characteristic scale 2 | Characteristic scale 3 | Characteristic scale 4
1 | 0.2456 | 2.5852 | 2.8558 | 6.8652
2 | 0.3557 | 2.9878 | 1.4422 | 4.4233
3 | 2.5452 | 3.4552 | 0.5532 | 0.0123
4 | 0.0335 | 0.8465 | 2.5474 | 1.2222
5 | 0.6852 | 0.7425 | 0.0532 | 3.5452
6 | 1.5233 | 3.8641 | 0.0014 | 6.4502
7 | 0.1523 | 1.1255 | 2.4522 | 4.3212
8 | 2.1441 | 1.5221 | 3.2152 | 0.0212
9 | 0.1521 | 0.3845 | 3.2001 | 0.4822
10 | 0.0662 | 0.7452 | 3.7220 | 0.3435
11 | 1.5922 | 1.1257 | 4.1254 | 1.2122
12 | 0.1256 | 2.6522 | 2.0531 | 2.2020
13 | 0.0865 | 0.0865 | 0.1501 | 2.2127
14 | 0.1245 | 3.4571 | 0.5320 | 0.0422
15 | 1.8745 | 2.5455 | 0.5643 | 0.7429
16 | 2.5653 | 0.0162 | 0.2933 | 4.1233
17 | 0.0652 | 0.0252 | 2.9872 | 2.6513
18 | 0.5381 | 0.0142 | 3.1225 | 2.8622
19 | 1.6239 | 3.5422 | 4.2152 | 3.4221
20 | 2.2250 | 0.4522 | 1.6422 | 0.5274
proportion of identified true defective samples among all defective samples. The specific calculation formulas are:

$$\psi = \frac{x}{x+y} \tag{8}$$

$$\zeta = \frac{x}{x+z} \tag{9}$$

In the formulas, x represents the number of defects detected correctly, y the number of false detections, and z the number of missed defects. Recall and precision vary with the confidence threshold. Taking ζ and ψ as the horizontal and vertical coordinates, respectively, a ψ-ζ curve can be obtained. The AP value is the area enclosed by this curve and the coordinate axes; it lies between 0 and 1, and the larger the value, the higher the accuracy of the detection method. The AP value therefore provides a comprehensive evaluation of the algorithm's performance.
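As a concrete illustration of formulas (8)-(9) and the AP computation, here is a small NumPy sketch. The 11-point interpolation shown is one common convention; the paper does not state which AP variant it uses, so treat this as an assumption.

```python
import numpy as np

def precision_recall(scores, labels, threshold):
    """Formulas (8) and (9) at one confidence threshold.

    scores: predicted defect confidence per sample
    labels: 1 if the sample is truly defective, else 0
    """
    pred = scores >= threshold
    x = np.sum(pred & (labels == 1))       # correct detections
    y = np.sum(pred & (labels == 0))       # false detections
    z = np.sum(~pred & (labels == 1))      # missed defects
    psi  = x / (x + y) if (x + y) else 1.0   # precision
    zeta = x / (x + z) if (x + z) else 0.0   # recall
    return psi, zeta

def average_precision(scores, labels, n_points=11):
    """Area under the psi-zeta curve via 11-point interpolation."""
    thresholds = np.sort(np.unique(scores))[::-1]
    pr = np.array([precision_recall(scores, labels, t) for t in thresholds])
    ap = 0.0
    for r in np.linspace(0, 1, n_points):
        mask = pr[:, 1] >= r                  # points with recall >= r
        ap += (pr[mask, 0].max() if mask.any() else 0.0) / n_points
    return ap
```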
Fig. 4. Example of delineating the region of interest
The AP values of the two detection methods are shown in Table 3.

Table 3. AP value of two detection methods

Type | AP value of the research method | AP value of the method of reference [2]
No defect | 0.9555 | 0.7525
Lack of pin | 0.9254 | 0.8754
Pin out | 0.9712 | 0.7012
Lack of gasket | 0.9314 | 0.8384
Pin deformation | 0.9252 | 0.8292
It can be seen from Table 3 that under the research method, the AP values of the five types of samples are all above 0.9, indicating that the research method has high accuracy.
7 Conclusion
In addition to operating outdoors in harsh environments, most power fittings in the power system must also withstand external mechanical load tension and the internal power load of the power system for long periods. This makes the pins on the fittings prone to defects such as loss or loosening, affecting the stable operation of the power grid. To this end, a method for detecting pin defects in overhead lines based on multi-sensor data
acquisition by UAV is proposed. The pin images are collected by the UAV multi-sensor platform and preprocessed; the watershed algorithm is used to segment the background of the overhead line pin image, and the Faster-RCNN network completes the identification of overhead line pin defects. Application testing of the method yields AP values that are all above 0.9, which proves its effectiveness.
References
1. Tang, F., Gao, Q., Du, Z.: Algorithm of object localization applied on high-voltage power transmission lines based on line stereo matching. Opt. Eng. 60(2), 023101 (2021)
2. Liang, S., Zhang, S., Zheng, X., et al.: Online fault detection of fixed-wing UAV based on DKPCA algorithm with multiple operation conditions considered. Journal of Northwestern Polytechnical University 38(3), 619–626 (2020)
3. Le, V., Yao, X., Hung, T.B., et al.: Series DC arc fault detection based on ensemble machine learning. IEEE Trans. Power Electron. 35(8), 7826–7839 (2020)
4. Li, H., Li, G., Ye, Y., et al.: A high-efficiency acquisition method of LED-multispectral images based on frequency-division modulation and RGB camera. Opt. Commun. 480(4), 126492 (2021)
5. Swamy, A., Srinath, S.: POS tagging and NER system for Kannada using conditional random fields. Int. J. Information Retrieval Research (IJIRR) 11(4), 1–13 (2021)
6. Yang, Y., Zhang, P.: A novel bond wire fault detection method for IGBT modules based on turn-on gate voltage overshoot. IEEE Trans. Power Electron. 36(7), 7501–7512 (2021)
7. Zheng, H., Wang, R., Yang, Y., et al.: Intelligent fault identification based on multisource domain generalization towards actual diagnosis scenario. IEEE Trans. Industr. Electron. 67(2), 1293–1304 (2020)
8. Wang, L., Chang, X., Ren, W.: Color image enhancement simulation based on weighted histogram equalization. Computer Simulation 38(12), 126–131 (2021)
9. Yanxia, C., Yanyan, X., Tierui, Z., et al.: Threshold image target segmentation technology based on intelligent algorithms. Computer Optics 44(1), 137–141 (2020)
10. Jiang, C., Fang, Y., Zhao, P., et al.: Intelligent UAV identity authentication and safety supervision based on behavior modeling and prediction. IEEE Trans. Industr. Inf. 16(10), 6652–6662 (2020)
Online Monitoring Method of Municipal Water Supply and Drainage Pipeline Based on Machine Vision Yanmei Sun1(B) and Ya Xu2 1 Yantai Vocational College, Yantai 264001, China
[email protected] 2 Modern College of Northwest University, Xi’an 710111, China
Abstract. The municipal water supply and drainage pipeline is a key component of the underground pipeline network, and the design of its monitoring system is of great significance for its development. In this paper, a machine vision network is established and an integrated monitoring system is introduced to design a monitoring method for municipal water supply and drainage pipelines. A communication sensor network is introduced to ensure comprehensive information monitoring. At the same time, an information management platform is introduced to combine digital monitoring with video monitoring, and alarm software is set up to display the data accurately. The method proposed in this paper can effectively combine big data with urban management, better extend the industrial value chain, and support the realization of urban intelligence. The experimental results show that the relative error of the proposed online monitoring method is always less than 0.02%, that the monitoring process is not affected by wind speed, and that the method can well ensure the operational safety of municipal water supply and drainage pipelines. Keywords: Machine Vision · Municipal Water Supply and Drainage · Water Supply and Drainage Pipeline · Pipeline Monitoring · Online Monitoring · Monitoring Methods
1 Introduction
The drainage pipeline is a continuous tunnel space housing power, network, rainwater, heat supply and other pipelines, together with vents, manholes and feeding ports. The drainage pipeline is equivalent to a "network system" under the municipal infrastructure. During the construction of a smart city, machine vision is introduced and combined with security management and command management; sensor nodes are set up to sense, monitor and perceive the data, and the wireless sensor network assists the comprehensive monitoring of the drainage pipeline [1, 2]. At present, with the rapid increase of network equipment and data traffic, people have begun to use machine vision technology. Machine vision technology has the advantages
of high transmission rate and good overall performance, which can make the drainage pipeline safer. In addition, machine vision can be combined with artificial intelligence, big data and other technologies to establish drainage pipeline ventilation, drainage and other systems, and to build a comprehensive monitoring and management platform for drainage pipelines through big data technology. Through this platform, toxic gas, water pressure, air pressure and other environmental variables in the drainage pipeline can be collected, analyzed and monitored. Once the concentration of toxic gas exceeds the standard or another dangerous situation occurs, early warning and disposal can be carried out in a timely manner through online transmission, helping to build a smart municipal water supply and drainage pipeline network. On the basis of the above technologies, this paper designs an online monitoring method for municipal water supply and drainage pipelines based on machine vision. Machine vision technology is an interdisciplinary subject involving artificial intelligence, computer science, image processing, pattern recognition and many other fields. Machine vision mainly uses computers to simulate human visual functions, extracting information from images of objective objects, processing and understanding it, and finally using it for actual detection, measurement and control. Machine vision technology is characterized by high speed, a large amount of information and multiple functions. The innovation of this method lies in designing the integrated monitoring system of the municipal water supply and drainage pipeline under machine vision, which can ensure the stability of the bottom system and the top system of the drainage pipeline so that both operate stably and reliably.
2 Design of Comprehensive Monitoring System for Municipal Water Supply and Drainage Pipeline Based on Machine Vision
During the construction of drainage pipelines, it is necessary to minimize potential safety hazards in all aspects of the drainage pipelines and to keep the top-level area and the bottom-level area separated. The two areas sit on two interfaces, so that they are managed separately. The bottom-level area has multiple functions, such as early warning, positioning, safety precautions and environmental monitoring. The designed municipal water supply and drainage pipeline integrated system contains multiple pipelines: heating pipelines, power pipelines, drainage pipelines, etc. The top-level area needs to provide the functions of gas management, power production management, water supply management and heat supply management. The bottom-level area transmits the toxic gas concentration, temperature, humidity and other environmental parameters of the drainage pipeline to the top-level system through machine vision, so that the bottom-level system can establish a connection with the top-level system. Machine vision is the basic technology for establishing the municipal water supply and drainage pipeline comprehensive monitoring system: it can ensure the stability of the bottom system and the top system of the drainage pipeline, so that they operate stably and reliably. The monitoring system scheme designed in this paper is relatively simple; the construction unit should design a more complete, accurate and detailed construction schedule scheme according to the specific situation of the municipal water supply and drainage pipeline. The construction schedule scheme should specify the details of construction
preparation, personnel allocation, construction equipment selection, etc. In the actual construction process, progress should be tracked regularly, and the data and content should be updated according to the actual situation, to ensure that the construction of municipal water supply and drainage pipelines is completed within the time specified in the contract [3]. Machine vision is used to control the wireless sensors. The monitoring range of a wireless sensor over the drainage pipeline can reach up to 80 km; the sensors are set at the outlet of the drainage pipeline [4], which reduces the influence of uncontrollable factors on them. The wireless sensor has high computing power but poor transmission performance; to solve this problem, a relay node is set in the drainage pipeline to ensure the smooth transmission of environmental data.
3 Design of Municipal Water Supply and Drainage Pipeline Comprehensive Monitoring System Based on Machine Vision The monitoring system structure is shown in Fig. 1:
Fig. 1. Monitoring System Structure
According to Fig. 1, the structure of the integrated monitoring system for municipal water supply and drainage pipelines mainly includes intrusion prevention system, geographic information system, visual inspection system, etc. A new water environment monitoring and early warning solution based on machine vision technology is developed. It uses the 5G wireless sensor network to realize the integrated monitoring of municipal water supply and drainage pipelines by measuring the reflected light intensity of the water. 3.1 Design of Environmental Monitoring Subsystem The environmental monitoring subsystem can monitor the surrounding environment. Vents, personnel entrances and exits, and escape exits are set in the fire protection area
of the drainage pipeline. In order to monitor the oxygen, methane, humidity, temperature and other environmental parameters in the drainage pipeline more effectively, the environmental subsystem is equipped with monitoring sensors to complete the connection with fans and pumps. The signal monitored by the monitoring sensor is transmitted to the control monitoring system through the pipeline, and transmitted to the monitoring terminal through the 5G wireless network [5]. On the display screen of the monitoring terminal, various environmental parameter data monitored by the monitoring sensor will be displayed. When the temperature, humidity, methane and other parameter concentration data exceed the standard, the monitoring center personnel will immediately send alarm information, report the location of potential safety hazards, and timely dispose of them. 3.2 Design of Equipment Monitoring Subsystem Collect the parameters of lighting, fan, environmental parameter detector, drainage pump and other equipment in the drainage pipeline. After the collection is completed, the collected data will be transmitted through remote transmission, and logic control will be carried out. The reason for collecting the parameter data of these equipment is to provide data basis for the stable operation of electrical equipment, lighting equipment, ventilation equipment and other instruments. Through the monitoring command sent by the 5G base station, according to the collected equipment parameter data, the fan, lighting equipment, etc. are remotely monitored, and the start and stop of the drainage pump and electrical equipment are remotely controlled. 3.3 Design of Video Monitoring Subsystem The network camera in the drainage pipeline integrated system is used to collect the image information of fans, lighting equipment, drainage pumps, etc. After being processed by the CPU of the monitoring system, the operating scene of various equipment in the current drainage pipeline can be displayed on the monitor. After the drainage pipeline is divided into 5G base stations and fire zones, network cameras are set at the entrance and exit to monitor the personnel in the drainage pipeline. The monitored video images are processed and displayed by the monitor, so as to monitor the entire drainage pipeline. 3.4 Intelligent Access Control System The intelligent access control system includes electronic lock, LED display, card reader and identifier. The intelligent access control system needs to be set at the entrance and exit of the drainage pipeline, so as to control the identity of the personnel entering and leaving the drainage pipeline. When the drainage pipeline inspector shows the registered identification card outside the gate, the card reader will conduct identification and authentication, and the electronic control lock in the intelligent access control system will automatically open the gate. At this time, the LED display will record the time of entry and the identity information of the inductive card. When the drainage pipeline inspector leaves, directly touch the release key in the door, open the gate with the electronic control lock, and record the departure time with the LED display. In addition to
recording the entry and exit time, LED display can also display the operation data of fans, lighting devices, drainage pumps and other equipment in the drainage pipeline in real time, providing data basis for inspectors and maintenance personnel. The intelligent access control system can effectively prevent unauthorized personnel from entering the drainage pipeline. 3.5 Alarm System The drainage pipeline contains a variety of combustibles, among which there are many cables, pipelines and optical cables. Once a violent fire occurs in the drainage pipeline, it will cause significant safety losses. The alarm system includes sensors, 5G monitoring base stations, communication base stations and temperature measuring optical fibers. Fire alarm screens are installed at the ground and underground monitoring terminals. The alarm information is received and displayed through optical cables, and the location of the fire is located [6]. In case of fire, the alarm in the drainage pipeline will give an alarm sound immediately, the monitoring terminal will call the video of the fire area and transmit it to the display, the fire emergency broadcast will immediately broadcast the alarm information, and the access control system will automatically open the door for release to ensure the rapid evacuation of trapped personnel. The automatic fire extinguishing device is turned on. After the fire is successfully extinguished, the fan will discharge the harmful gas in the drainage pipeline. The structure of the alarm system is shown in Fig. 2:
Fig. 2. Alarm System Structure
During the establishment of the alarm system, the construction department shall carefully review and confirm, set alarms according to the provisions of the construction cost, and strengthen the control of the number of alarms.
4 Integrated Information Management Platform of Municipal Water Supply and Drainage Pipeline Based on Machine Vision
Through the centralized management of the environmental monitoring subsystem, equipment monitoring subsystem, video monitoring subsystem, intelligent access control system and alarm system, the "smart" goal of the drainage pipeline is achieved. The information management platform contains a large amount of pipeline corridor information. Cloud computing, big data, and data aggregation technologies are used to classify and summarize the sensor information, environmental monitoring information, equipment operation status information, fire alarm information, etc. in the municipal water supply and drainage pipeline comprehensive information management platform, and to form information management images on the monitoring terminal display screen, sensor terminals, and mobile intelligent equipment of the drainage pipeline, so as to realize the command, dispatching, control and monitoring of drainage pipelines.
The platform mainly addresses the following requirements. First, the monitoring terminal can display all business information, provide a data basis for the normal operation and production scheduling of the drainage pipeline, and resolve potential security risks in the drainage pipeline [7]. Machine vision is used to identify and verify each target in the pipe gallery. 5G technology is used to determine the location of the drainage pipeline, read electricity meters, conduct business surveys, perform daily patrol inspection, etc., and to return the fault location information and the fault image displayed by the monitoring terminal. To maintain the drainage pipeline effectively, BIM and other technical means are needed to call the large screen, monitoring display and other equipment in the pipe gallery and to operate according to the business process [8]. Through the comprehensive information management platform for municipal water supply and drainage pipelines based on machine vision, the system can automatically log in to each subsystem, establish different accounts, manage permissions, and apply to other systems for targeted daily maintenance, which simplifies the approval process and improves work efficiency.
Construction process of water supply and drainage pipeline: grooving construction → non-grooving construction → pipeline installation → auxiliary facilities → functional test → maintenance and repair.
(1) Trench pipe construction: trench excavation (sloping/enclosure) - trench inspection - trench bottom reinforcement - cushion construction; this belongs to the relevant knowledge points of foundation pit engineering.
(2) Non-grooved pipeline construction: pipe jacking, shield tunneling, shallow buried excavation, directional drilling, pipe tamping.
(3) Pipeline installation: PE pipe hot-melt or electric-melt connection is used for the water supply pipeline; prefabricated concrete pipe with socket connection is used for the drainage pipeline.
(4) Ancillary facilities: trenches/pipe channels, process wells, gate wells, inspection wells; pay attention to the construction requirements of masonry works.
(5) Functional test: pressure pipeline water pressure test, non-pressure pipeline tightness test, water supply pipeline flushing and disinfection.
(6) Maintenance and repair: local repair (sealing method, patching method, articulated pipe method, local soft lining method, grouting method, robot method); full-section repair (lining method, winding method, spraying method); renewal of pipes (external extrusion and jacking of broken pipes).
5 Online Monitoring Process of Municipal Water Supply and Drainage Pipeline Based on Machine Vision
5.1 Water Flow Status Tracking
With machine vision technology, the water flow ahead is tracked in real time. Based on the continuous images, the real-time tracking method shortens the search time: the target data in the image sequence is tracked in real time to determine the target's moving track. The correlation tracking method, difference images and pattern recognition are applied together in the machine vision technology. If the sampling interval of the images is short, the detected target water flow state moves only a small distance between frames, so detection within a local window can effectively improve the detection speed [9]. The process of water flow state tracking in front of the municipal water supply and drainage pipeline based on machine vision is shown in Fig. 3.
The movement track of the detected target water flow state is a regular, continuous track. After analyzing the states that have already occurred, the state about to occur is predicted, and its possible position is judged. The machine vision based water flow state target detection and tracking method for municipal water supply and drainage pipelines has the following advantages:
(1) The prediction of a single target trajectory is relatively accurate; the search window is limited to a small range, the amount of calculation is greatly reduced, and the search speed is effectively improved;
(2) If there are multiple similar retrieval targets in the same image, the target in the current image can be matched with the target in the previous image through trajectory prediction to improve tracking accuracy;
(3) There may be noise or other objects similar to the target characteristics in the image; according to the trajectory prediction, the target can be well distinguished from other objects.
A Kalman filtering equation is added to filter the water flow state on the x and y axes, and the results are shown in Fig. 4. It can be seen from Fig. 4 that the error value is extremely large at the initial stage; as the number of measurements increases, the errors of the x-axis and y-axis curves both decrease.
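As a concrete illustration of the prediction step described above, here is a minimal constant-velocity Kalman filter sketch for tracking the water flow state on the x and y axes. The state layout, time step and noise values are illustrative assumptions, not values from the paper.

```python
import numpy as np

# State: [x, y, vx, vy]; constant-velocity model with time step dt.
dt = 1.0
F = np.array([[1, 0, dt, 0],   # state transition
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
H = np.array([[1, 0, 0, 0],    # we only measure (x, y)
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 1e-2           # process noise (assumed)
R = np.eye(2) * 1e-1           # measurement noise (assumed)

x = np.zeros(4)                # initial state
P = np.eye(4) * 10.0           # initial uncertainty (large, as in Fig. 4)

def kalman_step(x, P, z):
    """One predict/update cycle; z is the measured (x, y) position."""
    # Predict where the flow state will be, narrowing the search window.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update with the actual measurement.
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(4) - K @ H) @ P_pred
    return x_new, P_new

for z in [np.array([1.0, 2.0]), np.array([1.9, 4.1]), np.array([3.1, 6.0])]:
    x, P = kalman_step(x, P, z)
print(x)  # estimated position and velocity; error shrinks with more measurements
```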
Fig. 3. Tracking process of municipal water supply and drainage pipeline flow state based on machine vision
Throughout processing, a simple judgment must be continually made on the area near the prediction point to determine whether the water flow state exists there. Compared with other methods, this method has great advantages [1].
5.2 Abnormal Monitoring Process
The anomaly monitoring process of municipal water supply and drainage pipelines is designed with machine vision technology, as shown in Fig. 5.
Step 1: locate the abnormal municipal water supply and drainage pipeline. First, the microcontroller is initialized, and the abnormal line image data transmitted in the image acquisition pipeline is analyzed using the image analysis function of machine vision technology [10]. Suspected abnormal line points are predicted and preliminarily determined according to the values in the abnormal line image data, and the determination results are uploaded to the intelligent judgment module of the machine vision system in the form of data packets. Assuming that there are n suspected abnormal line points and the total number of lines is k, abnormal judgment is made for the k lines in an environment with noise and interference, and the number of confirmed abnormal line points is set as m. The suspected and confirmed abnormal line points are compared, overlapping and non-overlapping abnormal line points are marked, and abnormal positioning is carried out for each.
Fig. 4. Error Curve
The positioning formula established by machine vision is as follows:

$$W_n = \delta_m \sum_{x=1}^{k} y_{n+1} \tag{1}$$
In the formula, Wn represents the abnormal line pixels acquired by machine vision; δm represents the morphological information of abnormal line points captured by machine vision; yn+1 represents the error weight of the (n+1)-th suspected abnormal line point; x indexes the abnormal points of the pipeline. The abnormal lines in the pipeline can be located by this positioning formula.
Step 2: judge the line status. After positioning the abnormal lines, the line status must be judged before warning of the abnormal lines, to ensure the smooth implementation of automatic warning. In the water supply and drainage environment, machine vision technology is used to centralize line services. The line status has two types: if the water supply and drainage pipe network system is operating normally, the server displays "normal"; if the system is interrupted, the server displays "interrupted". Because the server can be affected by the network, lines, etc., it may fail to display the status of the water supply and drainage pipe network system. Therefore, based on formula (1), this paper establishes the formula for displaying the status of the water supply and drainage pipe network system:

$$Z_n = A_m B_{k+1} \tag{2}$$
Fig. 5. Software Flow of Early Warning System
In the formula, Zn represents the judgment result of the water supply and drainage pipe network system status; Am indicates that the system is in operation when the number of abnormal line points is m; Bk+1 indicates that the system is in an interrupted state under k lines. The current system state can be judged by this state display formula.
Step 3: give an early warning for the abnormal lines. Under the condition that the water supply and drainage pipe network system is in operation, an early warning is issued for the abnormal lines. The early warning formula is:

$$T = \sqrt{Z_n U} \tag{3}$$

In the formula, T represents the alert result and U the collected warning voltage. The abnormal energy consumption diffusion matrix B_{N×1} of the municipal water supply and drainage pipeline is expressed as:

$$B_{N\times 1} = S_{N\times L} + T_{L\times 1} \tag{4}$$

In the formula, T_{L×1} is the time series matrix and S_{N×L} is the transient data attribute fusion matrix. According to the characteristics of the transient load of the drainage network, the common characteristic data of the abnormal line state is extracted, and the characteristic quantity f_{ij}(n+1) of the transient line output power consumption of the drainage network is obtained.
Based on the determined distribution characteristics of the drainage pipe network and the random load y_{R,j}(n), the overflow density function of abnormal data is established:

$$\mu_{MCMA} = f_{ij}(n+1)\,y_{R,j}(n)\,f(s_i) \tag{5}$$

According to AI self-learning theory and the transient load data control method, the balanced output of load data is obtained as:

$$\sigma_y^2 = E\left[y_j^{T}(n)\,y_j^{*}(n)\right] \tag{6}$$

In the formula, E is the total load data. Under the maximum power gain of the system, the monitoring data acquisition model of the transient data of the drainage network is constructed as:

$$e_{l,j}^{2} = \left[y_{l,j}(n) - R_{2,I}\right]^{2} \sigma_y^{2} \tag{7}$$

In the formula, y_{l,j}(n) is the response characteristic of the municipal water supply and drainage pipe network system affected by abnormal data; R_{2,I} is the sample parameter of the system gain constraint under the maximum power meeting the conditions. The early warning system automatically sounds an alarm, and the alarm signal is sent to the image processing system of the machine vision system. Through this system, the specific location of the abnormal line point can be obtained, prompting the municipal water supply and drainage pipeline monitoring personnel to repair the abnormal line in time and resolve the line fault.
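Taken literally, formulas (1)-(3) chain a positioning value, a status value and a warning value. The sketch below transcribes that chain in Python so the data flow is explicit; all input values are placeholders, and the interpretation of each symbol follows the reconstructed formulas above rather than any stated implementation in the paper.

```python
import math

def positioning_value(delta_m, error_weights):
    """Formula (1): W_n = delta_m times the sum of error weights over k lines."""
    return delta_m * sum(error_weights)

def status_value(A_m, B_k1):
    """Formula (2): Z_n = A_m * B_{k+1} (operation and interruption factors)."""
    return A_m * B_k1

def warning_value(Z_n, U):
    """Formula (3): T = sqrt(Z_n * U), with U the collected warning voltage."""
    return math.sqrt(Z_n * U)

# Placeholder inputs: morphology factor, per-line error weights, status factors,
# and a warning voltage. None of these numbers come from the paper.
W = positioning_value(delta_m=0.8, error_weights=[0.1, 0.3, 0.2])
Z = status_value(A_m=1.0, B_k1=0.9)
T = warning_value(Z, U=4.0)
print(W, Z, T)
```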
6 Experimental Study
In order to verify the effectiveness of the machine vision based municipal water supply and drainage pipeline monitoring method proposed in this paper, it is compared experimentally with the traditional monitoring method based on data topology and the monitoring method based on traveling wave technology. The experimental environment is implemented with VMware Workstation 8.2, which simulates five MASTER machines with dual-core processors and SLAVE machines with single-core processors. The programming software is the Eclipse SDK, and the network addresses are set to 192.168.121.5 and 192.168.122.5. The experimental parameters are set as shown in Table 1.
In order to ensure the experimental effect, wiring is set up and measuring devices are installed in different places before the experiment to facilitate measurement. The test line connecting line is 45 m long; the sensor and other devices are connected with a line 15 m long. The connection diagram between the devices on the test site is shown in Fig. 6.
Table 1. Experimental Parameters

Project | Parameter
Video camera | Stereoscopic CCD camera
Radar | 4 GHz
Wave filter | Kalman filter
Image | Infrared thermal image
Memory | 256 GB
Hard disk | 20 GB
CPU | 3.70 GHz
Fig. 6. Field Device Connection Diagram (pipe network A and pipe network B connected by a 45 m connecting line, with the collection device and an air gap)
During the experiment, the collected signal has high fluctuation amplitude and large wave-head jitter, so the time from the initial state to the end state of the signal can be better determined during acquisition. Because the line on site is too long, voltage cannot be applied at both ends of the insulator, and the discharge of the air gap cannot be guaranteed; to solve this problem, this paper introduces an auxiliary line to reduce the impact of the line on the ground distribution process. After the on-site device layout is completed, three systems are used to run the municipal water supply and drainage pipeline monitoring methods based on machine vision, data topology, and traveling wave technology, and the positioning results are observed on the warning host. The positioning results of water supply and drainage pipe network A are shown in Fig. 7. It can be seen from Fig. 7 that the fault location began to fluctuate at 0.45 s. The online monitoring method proposed in this paper can predict the fault location at 0.45 s; however, the response time of the traditional monitoring method based on data topology and the monitoring method based on traveling wave technology is too
Fig. 7. Positioning Results of Water Supply and Drainage Pipe Network A
slow to predict the fault location change at 0.45 s. The positioning error between the online monitoring method proposed in this paper and the actual fault location is less than 0.3 m, while the error of the data topology based method exceeds 0.7 m and the error of the traveling wave based method exceeds 1 m, giving the latter the worst early-warning capability. The positioning results of water supply and drainage pipe network B are shown in Fig. 8.
Fig. 8. Positioning Results of Water Supply and Drainage Pipe Network B (positioning result/m versus time/s; curves shown: the machine vision based monitoring method, the data topology based method, the traveling wave based method, and the actual fault location)
According to Fig. 8, the online monitoring method proposed in this paper always differs only slightly from the actual fault location. From 1.1 s to 1.6 s, the located fault position is basically consistent with the actual fault location, showing extremely strong positioning capability. However, the positioning error of the traveling wave based method differs greatly from the actual position during 0 to 0.5 s, and the difference is about 0.5 m after 0.6 s. The
municipal water supply and drainage pipeline monitoring method based on data topology always deviates from the actual location, which makes it difficult to meet actual operation requirements. To sum up, the positioning capability of the system proposed in this paper is better than that of the traditional early warning systems: its positioning error for the line fault location is smaller, and its higher accuracy can meet on-site requirements.
Based on the above research data, an experimental study is carried out: the municipal water supply and drainage pipeline data are adjusted, and the environmental protection rate of the municipal water supply and drainage pipeline project using this technology is compared with that of the traditional project. The comparison results are shown in Table 2.

Table 2. Results of environmental protection rate of municipal water supply and drainage pipeline project under different methods

Study time/h | Environmental protection rate/% (method in this paper) | Environmental protection rate/% (traditional method)
1 | 76% | 52%
2 | 83% | 64%
3 | 87% | 71%
4 | 95% | 77%
5 | 99% | 84%
In Table 2, when the study time is 1 h, the environmental protection rate of the project using the method in this paper is 76%, versus 52% for the traditional method; at 2 h, the rates are 83% and 64%, respectively. The application effect of this paper's method is therefore better than that of the traditional approach, because the method continually improves the degree of integration between technologies and strengthens the construction management of municipal water supply and drainage pipelines, thereby improving the overall application effect and the degree of construction environmental protection. To sum up, the method performs well in application: it can determine the application mode in municipal water supply and drainage pipelines, improve application effectiveness, obtain better application data, and provide a solid data basis for subsequent research. The comparison of feature value extraction results of the current frame is shown in Fig. 9 and Fig. 10.
Fig. 9. Characteristic Value Extraction Results of Current Frame of Traditional Technology
Fig. 10. Eigenvalue extraction results of current frame of this technology
By comparing Fig. 9 and Fig. 10, it can be seen that in the current frame image obtained by this technology, the general outline of the water flow state is more obvious, and the difference between two adjacent frames reflects the movement track of the water flow state more clearly. Although the traditional detection technology can detect the moving water flow state information, its expression results are not accurate enough and the extracted feature values are not prominent. The results of the water flow state positioning experiment are shown in Fig. 11. According to the analysis of Fig. 11, as the number of water flow monitoring points increases, the positioning results of both this method and the traditional method gradually improve. When there are 10 water flow monitoring points, the positioning result of the traditional method is 95.07% and that of this method is 95.12%; with 30 points, 95.24% versus 95.39%; with 50 points, 97.74% versus 98.7%; with 70 points, the positioning result of the traditional method is 98% and the
Fig. 11. Results of Water Flow Monitoring Point Positioning Experiment
positioning result of this method is 98.3%. These experimental results show that, within the same time, the detection technology in this paper detects the water flow state in front of the municipal water supply and drainage pipeline more accurately, and the extracted water flow state characteristics are more consistent with the actual information of the water flow state, so the method performs better in practical application.
7 Conclusion
The online monitoring method of municipal water supply and drainage pipelines plays a key role in the maintenance and operation of drainage pipelines. This paper designs five subsystems (the environmental monitoring subsystem, equipment monitoring subsystem, video monitoring subsystem, intelligent access control system, and alarm system) and builds a comprehensive information management platform for municipal water supply and drainage pipelines. Through this system, the safe and stable operation of municipal water supply and drainage pipelines can be achieved. The research leads to the following conclusions:
(1) The online monitoring method proposed in this paper can predict the fault location at 0.45 s with an error from the actual fault location of less than 0.3 m, showing good early warning ability;
(2) The proposed method always differs only slightly from the actual fault location; from 1.1 s to 1.6 s the located fault position is basically consistent with the actual fault location, showing very strong positioning ability;
(3) With the increase of research time, the environmental protection rate of the project applying this method also increases;
(4) In the current frame image obtained by this technology, the approximate outline of the flow state is more obvious; the difference between two adjacent frames more clearly reflects the movement trajectory of the flow state, and the extracted flow state features are more consistent with the actual information of the flow state.
References
1. Yang, Y., Chen, Y., Tang, Z.: Analysis of the safety factors of municipal road undercrossing existing bridge based on fuzzy analytic hierarchy process methods. Transp. Res. Rec. 2675(12), 915–928 (2021)
2. Talaat, M., Arafa, I., Metwally, H.: Advanced automation system for charging electric vehicles based on machine vision and finite element method. IET Electr. Power Appl. 14(13), 2616–2623 (2020)
3. Zhang, Z., Bai, S., Xu, G.S., et al.: Research on the knitting needle detection system of a hosiery machine based on machine vision. Text. Res. J. 90(15–16), 1730–1740 (2020)
4. Ramos, R., Douglas, D., Almeida, V.D., et al.: A video processing and machine vision-based automatic analyzer to determine sequentially total suspended and settleable solids in wastewater. Anal. Chim. Acta 1206, 1–10 (2022)
5. Zza, B., Yang, L.B., Qi, H.B., et al.: Multi-information online detection of coal quality based on machine vision. Powder Technol. 374, 250–262 (2020)
6. Valkova, T., Parravicini, V., Saracevic, E., et al.: A method to estimate the direct nitrous oxide emissions of municipal wastewater treatment plants based on the degree of nitrogen removal. J. Environ. Manage. 279, 111563 (2020)
7. Qin, A., Guo, L., You, Z., et al.: Research on automatic monitoring method of face milling cutter wear based on dynamic image sequence. Int. J. Adv. Manuf. Technol. 110(11–12), 1–12 (2020)
8. Berkmortel, N., Curtis, M., Johnson, C., et al.: Development of a mechatronic system for plant recognition and intra-row weeding with machine vision. J. Agric. Sci. 13(3), 1–8 (2021)
9. Wei, W., Yin, J., Zhang, J., et al.: Wear and breakage detection of integral spiral end milling cutters based on machine vision. Materials 14(19), 5690 (2021)
10. Baoshui, Z., Hailong, H., Hao, T.: Method for tamper detecting of copied images of small targets based on machine vision. Computer Simulation 38(8), 227–304 (2021)
Design of Intelligent Security Inspection System for Airport Passengers’ Carry on Luggage Based on Machine Learning Chun Zheng1(B) and Xiafu Pan2 1 Anhui Sanlian University, Hefei 230601, China
[email protected] 2 Department of Public Safety Technology, Hainan Vocational College of Political Science
and Law, Haikou 571100, China
Abstract. When carrying out intelligent screening of airport passenger baggage, errors appear in the screening results due to the complexity of the algorithm. Therefore, the design of an airport passenger carry-on baggage intelligent screening system based on machine learning is studied. The hardware design of the system is completed by designing the interface between the security system and the departure system, the external interface between the security system and the baggage system, and the airport passenger carry-on baggage intelligent screening module. In the software design, computer vision technology in the machine learning algorithm is used to identify the carry-on baggage of airport passengers, and an airport passenger carry-on baggage security detection algorithm is designed, which effectively realizes the intelligent security check of airport passenger carry-on baggage. The test results show that the change of the ASCII code value of the system designed in this paper is relatively stable, and the memory occupancy is always within 25%, which can effectively meet the requirements of users and improve the performance of the system. Keywords: Machine Learning · Airport Passengers · Personal Luggage · Intelligent Security Check
1 Introduction
Since the 9/11 incident in the United States, all countries have attached great importance to airport security inspection. To ensure aviation safety, many additional security inspection measures have been added, consuming substantial manpower and material resources. With the development of the social economy and the continuous increase of airport passenger volume in recent years, new requirements have been put forward for the service time and service links of security inspection work [1]. Considering the limited internal space of the terminal building and the long cycle time and high cost of building a new terminal, simply adding security check channels will not be effective in the short term.
The development of the air transport industry not only promotes cultural and trade exchanges between different regions and countries of the world, but is also an important symbol of national modernization and an embodiment of comprehensive national strength. With the rapid development of the civil aviation industry, civil aviation safety has become a problem that cannot be ignored. The disasters caused by civil aviation accidents are often serious, and the loss of life and property often exceeds expectations. Therefore, safety is the basis and guarantee for the development of the civil aviation industry and the foundation of all its work [2]. Airport security inspection is an important portal ensuring the safety of passengers travelling by air, an important checkpoint for aviation safety and the safety of passengers' lives and property, and one of the important factors in civil aviation safety. In terms of practical application, the International Air Transport Association and the International Airports Association have jointly developed a future airport security inspection system: intelligent security inspection. This system classifies passengers into three categories according to risk level (high-risk, medium-risk and low-risk passengers) and sets up three corresponding levels of security channels. With the continuous raising of the security level, passenger security inspection has become stricter, which leads to long waiting times and low passenger satisfaction, affecting the development of the civil aviation industry. In addition, the airport must invest substantial manpower, material and financial resources every year to ensure the reliability of the airport security inspection system and thus the safety of passengers. Such large-scale investment can improve the reliability of the security inspection system, but it also consumes excessive human, material and financial resources and increases the operating cost of the system. Therefore, how to ensure the reliability of the airport security inspection system while keeping it efficient and fast, and at the same time saving its operating cost, has become the most important problem in improving the current situation of airport security inspection. The relevant research of domestic scholars on airport security systems is as follows. Xu Yong et al. [3] analyzed the amplitude nonuniformity error, clock jitter and sampling trigger jitter of the acquisition system, and proposed the acquisition system structure of an active terahertz human body security detector; under the condition of determining the allowable range of clock jitter in the acquisition system, a high-precision programmable delay clock tree network structure was designed to output seven channels of homologous, low-jitter and phase-consistent sampling clocks, sampling trigger signals and synchronous clocks. Liu Bode et al.
[4] introduced the two-level management and three-level control architecture of the security inspection system of urban rail transit based on networked big data, described the solutions and functions of the security inspection information management system at the line network level, station level and site level, and introduced the expanded functions of the networked security inspection system. However, when the above methods are used for intelligent baggage screening, their algorithms are relatively complex, which causes deviations in the screening results and limits their applicability.
Foreign scholars pay attention to the importance of passengers in security inspection and the application of passenger classification. Zhu Y et al. [5] proposed a GAN-based image data enhancement method for X-ray prohibited items, using GAN-train and GAN-test to evaluate the generated images; however, this method has strong limitations. Based on the above research background, this paper applies machine learning to the design of an airport passenger carry-on baggage intelligent security system: computer vision technology in the machine learning algorithm is used to identify airport passenger carry-on baggage, an airport passenger carry-on baggage security detection algorithm is designed, and the design of the system is completed in combination with the system software and hardware. The experimental results show that the change of the ASCII code value of the system designed in this paper is relatively stable, and the memory occupancy is always within 25%, effectively realizing the intelligent security check of airport passengers' carry-on baggage.
2 Hardware Design of Airport Passenger Baggage Intelligent Security Inspection System
2.1 Design the Interface Between the Security Inspection System and the Departure System
The interface between the security inspection information system and the departure system mainly obtains passengers' baggage information from the departure system. When passengers check in, the departure communication server sends messages containing passenger and baggage information to the security inspection information system communication server, which receives, stores and processes the information and establishes the correspondence between passenger records, baggage records and security inspection results. The communication between the security information system and the departure system uses message queue (MQ) middleware to exchange data in XML format or BSM format. The departure system is the MQ server side; in the operating environment of the MQ server there are queue managers, queues, message channels and other objects, which provide comprehensive message services. The security information system, as the MQ client side, is developed and run as an MQ application through the MQ API. The departure system provides data by triggering events such as passenger check-in and printing boarding passes, and places the data in the MQ target queue; the security information system takes data from the queue in time. Figure 1 shows the MQ mode interface flow chart of the security inspection system and the departure system: a closed box represents a queue; a closed rounded box represents the two processes of the departure system interface program and the security inspection information system interface program; an arrowed connection represents an event and its sending direction, with the head of the arrow pointing to the event reader and the tail pointing to the event writer.
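To make the data flow concrete, here is a minimal sketch of how the security-side client might parse a passenger/baggage message taken from the queue and link it to a security result record. The XML tag names and the record structure are illustrative assumptions; the paper only states that data is exchanged in XML or BSM format.

```python
import xml.etree.ElementTree as ET

# Hypothetical message layout; the real schema is not given in the paper.
SAMPLE_MESSAGE = """
<checkin>
  <passenger id="P001" name="Zhang San" flight="CA1234"/>
  <baggage tag="BAG778899" weight="18.5"/>
</checkin>
"""

def handle_message(xml_text, results_db):
    """Parse one departure-system message and create the linked records."""
    root = ET.fromstring(xml_text)
    passenger = root.find("passenger").attrib
    baggage = root.find("baggage").attrib
    # Establish the correspondence between the passenger record, the baggage
    # record and an (initially empty) security inspection result, as above.
    results_db[baggage["tag"]] = {
        "passenger_id": passenger["id"],
        "flight": passenger["flight"],
        "security_result": None,   # filled in after screening
    }

db = {}
handle_message(SAMPLE_MESSAGE, db)
print(db)
```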
[Figure content: the departure system and the security inspection system exchange passenger information request messages, passenger duty messages, query messages and baggage messages through the passenger information inquiry queue, the baggage inquiry queue and the passenger message queue.]
Fig. 1. Flow chart of the MQ mode interface between the security inspection system and the departure system
2.2 Design the External Interface Between the Security Inspection System and the Baggage System
The interface between the security inspection information system and the baggage system is mainly responsible for sending the security inspection results of checked baggage to the baggage handling system, which determines baggage movement according to the received interpretation result signal. The interface adopts Socket mode: the security inspection information system uses an open TCP/IP socket to maintain communication with the baggage handling system [6]. Here, the security inspection information system is the server side and the baggage handling system is the client side; the connection is established by the baggage handling system and remains active after information is sent. The interface flow chart of the security inspection system and the baggage handling system in Socket mode is shown in Fig. 2.
Fig. 2. The interface flow chart of the Socket mode of the security inspection system and the baggage handling system
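The following sketch illustrates the Socket exchange described above, with the security inspection information system as the TCP server sending an interpretation result and receiving the confirmation. The port number, message contents and send cadence are illustrative assumptions, not values from the paper.

```python
import socket

HOST, PORT = "0.0.0.0", 9100        # assumed listening address

def serve_once():
    """Accept the baggage handling system's connection and send one result."""
    with socket.create_server((HOST, PORT)) as srv:
        conn, _ = srv.accept()       # the client (baggage system) connects
        with conn:
            # Interpretation result message in XML, as described in the text.
            result = b'<inspection bag="BAG778899" result="CLEAR"/>'
            conn.sendall(result)
            reply = conn.recv(4096)  # confirmation message from the client
            print("confirmation:", reply)
            conn.sendall(b"<heartbeat/>")  # sent periodically in practice

if __name__ == "__main__":
    serve_once()
```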
All message information here is in XML format. After interpreting whether the baggage has been inspected, the security inspection information system sends an inspection result message to the baggage system, and the baggage system sends an inspection
result confirmation message to the security inspection information system. Each of the two systems sends heartbeat packets at a certain frequency.
2.3 Design an Intelligent Security Check Module for Passenger Carry-on Luggage at the Airport
The security inspection module of the airport passenger carry-on baggage intelligent security inspection system is composed of three large modules: the verification workstation, the unpacking workstation, and the basic operation. This security check module is the core of the intelligent security check system for passenger carry-on luggage at the airport, and whether it can operate normally determines the success or failure of the system design. The functional design diagram of the security check module is shown in Fig. 3.
[Figure content: the security check module comprises the verification workstation, the open package workstation (with a baggage unpacking station and a carry-on baggage unpacking station), and the basic operation module.]
Fig. 3. Functional design diagram of the security inspection module
2.3.1 Basic Operation
The basic operations of the security check module mainly involve the opening and closing of the security check channel, staff commuting management, and system settings. Generally speaking, opening and closing the security inspection channel is a switch operation: click to open the channel when it needs to be opened, and click to close it when it needs to be closed. The management of employees' commuting refers to fingerprint check-in when employees come on duty and go off duty. This fingerprint operation can prevent others from signing in on an employee's behalf; after all, everyone's fingerprint is unique, so using fingerprints or iris scans to sign
in can make the security check employees' commuting management more effective and reasonable. As for system settings, almost every system needs them, and this system is no exception: they cover the operation settings of the entire system, such as interval time settings, system language settings, system switch settings, etc. These three function modules constitute the basic operation function module of the security inspection module.
2.3.2 Baggage Detection
Passengers' luggage is generally divided into carry-on luggage and large checked luggage, but both need to go through security inspection. Therefore, the overall security inspection process is divided into the process for passengers' carry-on luggage and the process for passengers' bulky checked baggage; the inspection steps themselves are the same for both. The whole baggage security inspection process consists of three steps: radiographic transmission inspection and picture collection, reminder of problematic baggage, and baggage information entry and processing.
First, after the baggage enters the security inspection equipment, the equipment emits X-rays and uses their penetration ability to collect images of the baggage contents without opening the bag. The obtained pictures and passenger information are stored for inquiry and evidence collection by the national security department. After the X-ray image is obtained, the security inspection module checks for dangerous goods in combination with manual comparison. Once either finds dangerous goods, the system automatically alarms and creates a query result log, writing results continuously to ensure the integrity and consistency of query records. At the same time, the system automatically opens the safety box and places the dangerous luggage into it for inspection by security personnel. The passenger's information is then stored in the dangerous goods query database together with the video pictures taken during the luggage security inspection. The security department then conducts a whole-process video inspection of suspicious dangerous goods and records all videos in the database, which ensures security, protects the personal privacy of passengers, and, when necessary, retains evidence to uphold the law [7]. Figure 4 shows the baggage security check procedure.
[Flowchart: checked baggage X-ray imaging → suspicious baggage found → X-ray image of suspicious baggage captured and matched to the baggage number → porter scans the baggage code → handling of suspicious luggage (handling opinions: handover processing / temporary storage / other treatments) → end of channel unpacking]
Fig. 4. Baggage unpacking security inspection process
The information obtained is compared with the data of the public security organs on the network to confirm the passenger's true identity. In this way, the user's exact information is kept in the airline's passenger boarding record, so that personnel can be contacted in the event of a special situation. The second step is the collection of passenger boarding pass information: the QR code or barcode of the ticket is scanned to obtain the passenger's personal information and the ticket information recorded at purchase (including flight number, departure time, seat number, baggage number, etc.). The boarding pass information is compared with the ID card information obtained earlier to confirm whether the passenger's information is consistent: if consistent, the passenger is allowed to board; if not, the passenger is refused passage through the security check and refused boarding. In addition, manual input and query of the passenger ID number must be supported for users who hold temporary ID cards or have no ID card and only a household registration book. A dedicated security checkpoint is also set up for foreigners, where identity information is compared manually: if the temporary identity of a foreign passenger is consistent with the ticket information, the passenger may pass the security checkpoint; if not, passage is refused, as shown in Fig. 5.
Fig. 5. Passenger verification process
Finally, the passenger's facial information is obtained for later verification. Through simple facial recognition technology, the facial information of passengers is initially collected and stored in the database, with the pictures compressed as much as possible. This is not only convenient for storage and saves server-side database resources, but also ensures that the system runs smoothly. The obtained facial information is compared with images of criminal fugitives and wanted criminals retrieved through the Internet.
If the faces match, the security personnel at the security check station are notified, and the public security organ stationed at the airport is alerted through the network to help stop the fugitive or wanted criminal, maintaining social stability and safeguarding the airline's flight safety.
3 Software Design of Airport Passenger Baggage Intelligent Security Inspection System

3.1 Identify Airport Passenger Carry-on Luggage Based on Machine Learning
In the identification of airport carry-on baggage, the traditional method obtains the image of the items in the baggage by judging the R value, but it cannot directly determine whether the items are dangerous goods. Therefore, this paper introduces a machine learning algorithm to design the identification algorithm for airport passenger carry-on baggage. In the security inspection system, the position and orientation of the inspected items in the aisle are unknown; even regular objects show thickness differences at different incident angles [8]. In an imaging system with a fan-shaped X-ray beam like the one in this paper, the thickness traversed by the X-ray reaching each detector element is not necessarily the true thickness of the object. Computer vision technology based on machine learning is needed to capture the position and orientation information of the object so that it can be identified accurately. Assuming that h is the thickness of the ray path through an object in the passenger's luggage, and h0 is the object's true thickness, which depends on the transmission angle of the ray, h can be calculated by Formula (1):

h = h0 · sin(180° − β) / sin(β − θ)    (1)
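As a quick numerical check of Formula (1), the following minimal Python sketch computes the corrected thickness; the function name and the sample angles are illustrative, not taken from the paper:

    import math

    def effective_thickness(h0, beta_deg, theta_deg):
        # Formula (1): h = h0 * sin(180° − β) / sin(β − θ), angles assumed in degrees
        beta = math.radians(beta_deg)
        theta = math.radians(theta_deg)
        return h0 * math.sin(math.pi - beta) / math.sin(beta - theta)

    # Example: a 2 cm object viewed at β = 70°, θ = 30°
    print(round(effective_thickness(2.0, 70.0, 30.0), 2))  # 2.92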
The interactions between X-rays and passenger luggage include the photoelectric effect, coherent scattering, and incoherent scattering. When the input energy E_in of the X-ray is fixed, the machine learning algorithm learns the X-ray transmission principle, and the integral of the transmission signal over the energy range is calculated:

T(x, y) = ∫_0^{E_in} N(E) · e^{−∫ μ_t(x,y,z) ρ(x,y,z) dz} dE    (2)
Among them, T(x, y) represents the transmission signal detected by the computer vision technology, μ_t(x, y, z) represents the total absorption coefficient without occlusion, and ρ(x, y, z) represents the density of the detected object. Since the orientation of passenger luggage in the security inspection system is arbitrary, the computer vision technology of the machine learning algorithm is used to identify the orientation of the passenger's carry-on luggage, and, combined with data mining technology, the orientation information of the luggage is extracted so that its influence on the transmission signal is eliminated.
The influence of spatial coordinates on the identification of airport passenger carry-on luggage is then expressed as:

T = ∫_0^{E_in} N(E) e^{−μ(E)t} dE    (3)
It can be seen from Formula (3) that the transmission signal is affected by the thickness of the passenger baggage. Using the data mining technology of the machine learning algorithm, the length, width, and thickness information of the passenger baggage is extracted to realize the identification of airport passenger carry-on baggage, which is expressed as:

T_H = ∫_0^{E_H} N(E) e^{−μ(E)t} P_d(E) E dE
T_L = ∫_0^{E_L} N(E) e^{−μ(E)t} P_d(E) E dE    (4)

According to the above calculation process, a machine learning algorithm is used to design the airport passenger carry-on luggage identification algorithm.

3.2 Security Inspection of Carry-on Luggage of Passengers at Airports
The dangerous goods detection algorithm is the core of the software design of the airport passenger carry-on baggage intelligent security inspection system; it automatically detects various dangerous goods in the X-ray image. The dangerous goods detection algorithm is deployed on the algorithm server through TensorFlow Serving. The server's CPU is an Intel i7-6850K, the memory is 32 GB, the GPU is a GTX 1080Ti, and the operating system is Ubuntu 16.04. Training the machine-learning-based dangerous goods detection algorithm produces a checkpoint; four files are generated when saving: the checkpoint text file, which records the latest checkpoint file and the list of other files; the meta file, which stores the graph structure containing variables, operations, collections, and other information; and the index and data files, which save the binary values of all weights, biases, and gradients in the network. This format is convenient for transfer learning but troublesome to deploy. Therefore, the model is exported in the SavedModel format used for deployment in TensorFlow Serving, comprising a saved_model.pb file and a variables folder, where saved_model.pb stores the computation graph from input to output nodes. Deploying machine learning on embedded devices requires specifying input/output tensor names for models; SavedModels provide SignatureDefs to define the computation signatures supported by TensorFlow, which simplifies this process and makes it easy to find the tensors in the computation graph that serve as the specified input/output nodes. When the client calls the model, it first converts the processed X-ray image into Tensor format and then builds a gRPC stub to call the dangerous goods detection algorithm on the server [9]. It then creates and configures the request object, specifies the model name and signature name, and sends the input data to the server in the form of the predefined signature. Since the security inspection software client needs to scan and image subsequent baggage locally while it is requesting the dangerous goods detection module, the Predict.future() method is used to call the service asynchronously, enabling asynchronous work
between the security inspection software and the dangerous goods detection algorithm server and improving security inspection efficiency. The X-ray security inspection system uses the conveyor belt to move the baggage relative to the X-ray source and X-ray sensor, so that the entire bag is scanned and a complete X-ray image is obtained. In this process, the security inspection software judges whether the package has been completely imaged according to the attenuation of the X-rays and separates the different bags and packages for backup. Therefore, the dangerous goods algorithm server needs to monitor whether the software client has triggered the whole-package-imaged event in order to obtain the X-ray image data. The specific process is shown in Fig. 6.
[Flowchart: begin → initialize machine learning algorithm → monitor passengers' luggage (no/yes) → identify articles in passengers' luggage → dangerous goods detection → generate a detection result → end]
Fig. 6. The security check process of passenger carry-on luggage at the airport
If the imaging of a baggage package is complete, the data is backed up and transmitted to the algorithm server; otherwise, the system continues to monitor for a package-imaged event. After receiving the X-ray image data, the dangerous goods detection algorithm server calls the detection algorithm, saves the coordinates, categories, and confidence results, and returns them to the security inspection software client [10]. In addition to real-time dangerous goods detection, the software platform also supports detection on historical X-ray image data: the user only needs to upload the X-ray image data and complete the detection by calling the dangerous goods detection algorithm module.
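As a sketch of the client call described above, the following Python fragment uses the public TensorFlow Serving gRPC API with the Predict.future() asynchronous call; the server address, model name, signature name, and input tensor name are assumptions, since the paper does not specify them:

    import grpc
    import numpy as np
    import tensorflow as tf
    from tensorflow_serving.apis import predict_pb2, prediction_service_pb2_grpc

    channel = grpc.insecure_channel("algorithm-server:8500")  # hypothetical address
    stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

    def detect_async(xray_image):
        request = predict_pb2.PredictRequest()
        request.model_spec.name = "dangerous_goods_detector"   # assumed model name
        request.model_spec.signature_name = "serving_default"  # assumed signature
        request.inputs["image"].CopyFrom(
            tf.make_tensor_proto(xray_image, dtype=tf.float32))
        # Asynchronous call: the client can keep scanning the next bag meanwhile.
        return stub.Predict.future(request, timeout=10.0)

    future = detect_async(np.zeros((1, 600, 800, 3), dtype=np.float32))
    result = future.result()  # blocks only when the detection result is needed

The asynchronous future is what decouples local scanning from remote detection and thereby improves inspection throughput.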
4 Test Analysis

4.1 Test Environment
The system test environment describes the software and hardware environment in which the system runs, as well as interaction tests with other software. A good test environment improves the accuracy of system testing and places few constraints on test-case selection; a stable test environment not only saves the tester's time but also reduces the error rate of system testing and ensures that the test results are authentic and reliable. For testing the machine-learning-based intelligent security inspection system for airport passengers' carry-on luggage, the test environment is as follows:

Operating system: Windows 7 Ultimate 32-bit
Processor: Intel(R) Pentium(R) CPU G645 @ 2.90 GHz
Memory: 32 GB
Server software: Apache Tomcat 7.0
Java environment: JDK 1.8.0_45
Database software: MySQL 5.7.16
Browser: Internet Explorer 10

4.2 Functional Testing
Functional testing proceeds in an orderly manner according to the functional division of the software or system, mainly ensuring that the tests cover the relevant combinations of functional conditions. It targets the degree to which each function is realized and its usability.

4.2.1 Platform Login Module
The login module is relatively easy to implement. To prevent illegal users from accessing the system, it is necessary to test whether unauthorized users can access the system and whether login succeeds when the user's input is incomplete; on errors, the relevant login reminders should be given. Table 1 shows the functional tests of the platform login module; the results show that the platform login module functions normally and meets the user's functional requirements.

4.2.2 User Management Module
The user management module is a module prone to errors. It includes the entry of user information, management between users, and so on. If the operator does not understand the user activation process, or overlooks points that need attention during operation, an unintentional error can leave the customer unable to operate normally. The test cases for the user management module are shown in Table 2; the results show that the user management module functions normally and meets the user's functional requirements.
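A table-driven sketch of the login checks in Table 1 is shown below, using pytest; the login() helper is hypothetical, standing in for the system's real login interface:

    import pytest

    def login(username, password):
        # Hypothetical stand-in for the platform login module.
        users = {"alice": ("secret", True)}  # username -> (password, has permission)
        if username not in users:
            return "user name does not exist"
        pwd, allowed = users[username]
        if password != pwd:
            return "incorrect password"
        if not allowed:
            return "no access rights, please log in again"
        return "ok"

    @pytest.mark.parametrize("username,password,expected", [
        ("", "", "user name does not exist"),
        ("bob", "wrong", "user name does not exist"),
        ("alice", "wrong", "incorrect password"),
    ])
    def test_login_rejections(username, password, expected):
        assert login(username, password) == expected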
Table 1. Platform login module functional test cases
Test items | Test steps | Test results | Does it reach the requirement
User login | Enter the login page with the user name and password left empty | Prompt: user name does not exist | Meets the requirement
User login | Enter the login page and enter a wrong username and password | Prompt: user name does not exist | Meets the requirement
User login | Enter the login page, enter the correct user name and a wrong password | Prompt: incorrect password | Meets the requirement
User login | Enter an existing username and password for a user with no assigned permissions | Prompt: no access rights, please log in again | Meets the requirement
4.2.3 System Security Management Module
The specific tests of the system security management module are shown in Table 3. The results in Table 3 show that the security management module functions normally and meets the user's functional requirements.

4.2.4 Baggage Management Module
The baggage management module of the carry-on baggage security check information system is similar to the user management module above: some required information must be filled in completely, but multiple application types can be customized in the application-type module, a function generally performed by the administrator. The tests here mainly target the application-type management function; the baggage management module test cases are shown in Table 4. The results in Table 4 show that the baggage management module functions normally and meets the user's functional requirements.

4.3 Performance Test

4.3.1 Performance Index
In the performance test, the throughput of the system and the number of user requests are taken as independent variables, and the ASCII code value index and the memory occupancy index are used to measure the performance of airport passenger carry-on baggage security inspection.

(1) ASCII code value
Table 2. User management module test cases

Test items | Test steps | Test results | Does it reach the requirement
User name | Save the user information; on the user-add page, leave the user name field blank | Reminder: the filled data cannot be empty | Meets the requirement
User description | Save the user information; on the user-add page, leave the user description unselected | Reminder: the filled data cannot be empty | Meets the requirement
User function | Save the user information; on the user-add page, leave the functions included by the user unselected | Reminder: the filled data cannot be empty | Meets the requirement
Query user | Select all users to display all users | Display user information in pages | Meets the requirement
Modify user | Click the Edit button | Prompt that the modification is successful | Meets the requirement
Delete users | After querying the user, click the delete button, confirm whether to really delete, and click OK | Prompt: deleted successfully | Meets the requirement
The ASCII code of a capital letter directly yields the corresponding binary number: the ASCII code of C is 67 in decimal, and converting 67 to binary gives exactly 1000011. In the ASCII table, the characters A to Z, lowercase a to z, and the digits 0 to 9 are all arranged in order, so if A is 65, then B is 66, C is 67, D is 68, and E is 69. The smaller the fluctuation of the ASCII code value, the better the system performance.

(2) Memory occupancy
Memory is one of the most important components of a computer: all programs run in memory, and its operation determines the stable operation of the computer, so memory occupancy has a great impact. The lower the memory occupancy, the better the performance; when the memory occupancy is too large, the overall performance of the system suffers. The memory occupancy formula is:

U = (s − free − buffer/cache) / s × 100%    (5)
Among them, U indicates the memory occupancy, s represents the total memory size, and free and buffer/cache are the free memory and the buffered/cached memory, respectively.
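A minimal sketch of Formula (5) using the psutil library (an assumption; the paper does not name a measurement tool) reads:

    import psutil

    def memory_occupancy_percent():
        vm = psutil.virtual_memory()
        s = vm.total
        # Formula (5): U = (s − free − buffer/cache) / s × 100%
        buff_cache = getattr(vm, "buffers", 0) + getattr(vm, "cached", 0)  # Linux-only fields
        return (s - vm.free - buff_cache) / s * 100.0

    print(f"U = {memory_occupancy_percent():.1f}%")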
Table 3. Security management module test cases
Test module | Test items | Test steps | Test results | Does it reach the requirement
User registration | Username | Do not fill in the user name | Reminder: the filled data cannot be empty | Meets the requirement
User registration | Username | Duplicate username | Prompt: the username is duplicated, please re-fill | Meets the requirement
Permission settings | Permission | Assign the user no authority over the baggage security inspection information system, log out, and log in with this user | Return to the login interface with prompt: no access rights to the system | Meets the requirement
4.3.2 Performance Comparison Results
To verify the superiority of the system, it is compared with the security inspection system based on the terahertz human body security detector and the security inspection system based on networked big data. The results in Fig. 7 show that with the terahertz-detector-based system and the networked-big-data-based system, the ASCII code value fluctuates greatly, whereas with the machine-learning-based intelligent security inspection system for airport passengers' carry-on luggage, the ASCII code value remains relatively stable, indicating that the proposed system has a better security inspection effect. The reason is that the proposed system first identifies the passengers' baggage during the security check, ensuring the performance of security detection. From the results in Fig. 8, it can be seen that with the terahertz-detector-based system and the networked-big-data-based system, memory occupancy grows as the number of user requests increases; when the number of requests reaches 1,000, the memory occupancy rates are 57% and 48%, respectively. In contrast, while the proposed system conducts security checks on passengers' carry-on luggage, its memory occupancy rate always stays within 25%, indicating that the system's memory occupancy guarantees smooth system operation.
Table 4. Baggage management module test cases

Test module | Test items | Test steps | Test results | Does it reach the requirement
Baggage information location query | Location query | Enter the characters contained in the bag number | Display all application type information whose baggage number contains the character, or prompt that there is no current data | Meets the requirement
Baggage information registration query | Application type name | Enter an application type name keyword | Display all application type information containing the keyword, or prompt that there is no current data | Meets the requirement
Baggage information registration query | Description of application type | Enter an application type description keyword | Display all application type information containing the keyword, or prompt that there is no current data | Meets the requirement
[Line chart: ASCII code value vs. throughput (Mbps) for the terahertz-human-body-detector-based system, the networked-big-data-based system, and the proposed system]

Fig. 7. ASCII code value test results
[Line chart: memory occupancy vs. number of user requests (pieces) for the proposed system, the networked-big-data-based system, and the terahertz-human-body-detector-based system]
Fig. 8. Memory occupancy test results
5 Conclusion
This paper applies machine learning to the design of an intelligent security inspection system for airport passenger carry-on baggage: computer vision technology within the machine learning algorithm identifies the carry-on baggage, a security detection algorithm for the baggage is designed, and the design of the intelligent security inspection system is completed in combination with the system software and hardware. The experimental results show that the ASCII code value of the designed system changes relatively little and that the memory occupancy rate always stays within 25%, effectively realizing intelligent security checks of airport passengers' carry-on baggage; the function and performance of the system meet users' requirements. However, the research still needs improvement. Because there is a certain distance between the security check point and the boarding gate, ensuring the linkage of the data between the two points is a problem worth studying. In addition, coordinating the business relationship between the airport, airlines, and other administrative organs while ensuring data connectivity is an important part of the next stage of work.

Acknowledgement. Research on intelligent security inspection technology based on big data and deep learning, No.: KJ2021A1183.
References 1. Zhang, S., Yu, M.: Design of new security inspection system for urban rail transit based on network centralized map judgment. Urban Mass. Transit. 04, 174–177 (2021) 2. He, N., Zhang, Z., Ma, W., et al.: Construction of smart safety inspection system in urban rail transit. Urban Mass. Transit. 25(4), 214–216+220 (2022)
3. Xu, Y., Qiu, L., Zhu, Y., et al.: Research on terahertz human security inspection acquisition system based on clock tree mechanism. Opt. Techniq. 47(3), 282–287 (2021) 4. Liu, B., Zhang, S.: Security check system in urban rail transit based on internet-driven big data. Urban Mass. Transit. 22(6), 182–186 (2019) 5. Zhu, Y., Zhang, H.G., An, J.Y., et al.: GAN-based data augmentation of prohibited item X-ray images in security inspection. Optoelectron. Lett. 16(3), 225–229 (2020) 6. Zhang, S., Yu, M.: Design of new security inspection system for urban rail transit based on network centralized map judgment. Urban Mass. Transit. 24(7), 174–177 (2021) 7. Zhao, Z.-W., Zhang, C.-C.: Performance of airport passenger carry-on baggage security screening system. J. Saf. Environ. 21(5), 2149–2154 (2021) 8. Jin, L., Lei, W., Min, L., et al.: Study on service parameter optimization of security inspection system considering carrying behavior of passengers. J. Safety Sci. Technol. 17(3), 26–32 (2021) 9. Wen, J., Zhang, T.-R.: Simulation of balanced and decentralized joint inspection method for terminal safety of smart airport. Comput. Simulat. 17(3), 26–32 (2021) 10. Ye, N.: Research on intelligent baggage security system based on Internet of Things and artificial intelligence technology. Wirel. Internet Technol. 18(4), 16–17+60 (2021)
Robust Design of Machine Translation System Based on Convolutional Neural Network Pei Pei1(B) and Jun Ren2 1 Department of Foreign Languages, Changchun University of Finance and Economics,
Changchun 130200, China [email protected] 2 North Automatic Control Technology Institute, Taiyuan 030000, China
Abstract. Aiming at the problem that the machine translation system has a low load robustness coefficient and a low recovery robustness coefficient under different scenarios and working conditions, which leads to poor robustness, a convolutional neural network algorithm is used to optimize the robustness of the machine translation system. Based on the composition and working principle of the machine translation system, its operation process is simulated through the steps of corpus preprocessing, word alignment processing, and phrase extraction. The load data of the machine translation system is obtained and, supported by a constructed convolutional neural network model and the measurement results of the system's vulnerability, the convolutional neural network algorithm is used to determine the system load scheduling amount. A robust controller is selected as the executive element to complete the robust design of the machine translation system. The experimental results show that the machine translation system designed by the optimized method has higher load robustness and recovery robustness coefficients under different scenarios and operating conditions, confirming that the robustness design effect of the optimized machine translation system is better.

Keywords: Convolutional Neural Network · Machine Translation System · Robust Design · Decoding
1 Introduction
Machine translation [1, 2] is the process of using computers to convert one natural language into another, also known as automatic translation. Translation is the main method of equivalent information transfer between different languages. Traditional manual translation is inefficient, so intelligent translation is used instead. The incompleteness of current translation results indicates that there is still much room for research in this field. Deep learning has shown clear advantages in speech recognition, so deep learning [3] is introduced here to study translation. At present, there are many kinds of machine translation systems; they have been well applied in teaching and international communication and have good development prospects. However, in practical applications the operation of the machine translation system is unstable because its robustness is low.
In this paper, the robustness of the machine translation system is designed according to its working principle, in order to improve its running robustness and thus its running stability. Robustness plays an important role in a system, and research on system robustness has gradually become a hot topic. For example, literature [4] studied a method to quantify the robustness of chaotic systems and proposed a scheme to determine how far system parameters can be changed before the probability of destroying chaos exceeds 50%; the Monte Carlo method is used in the calculation and applied to several common dissipative chaotic maps and flows with different numbers of parameters. This method works well in practice, but its recovery robustness coefficient is poor. Literature [5] studied a multi-stage robust scheduling method, taking unpredictability and robustness into account under the uncertainty of load and renewable output and establishing a gradient scheduling model; however, the method suffers from a low load robustness coefficient. Therefore, the convolutional neural network algorithm is introduced: owing to the characteristics of convolutional neural networks, this paper applies them to optimize the robustness of machine translation systems.
2 Robustness Design of Machine Translation System

2.1 Simulate the Running Process of the Machine Translation System
Machine translation is divided into three parts, with the overall structure shown in Fig. 1. Model training includes the basic processing steps and the training of the language and reordering models. The parameter and feature tuning stage uses the existing bilingual corpus to continuously update the feature weights of the model until they are optimal. The translation and decoding module translates the source sentence using the model and feature weights learned during model training and parameter tuning. The operation process of the machine translation system is shown in Fig. 2. The machine translation model maps the input directly to the output sequence in an end-to-end manner. If the source sentence is A(t) and the target sentence is B(t), the conditional probability describing the translation is given by Formula (1):

P(B|A) = ∏_{t=1}^{m} P(b_t | b_{<t}, A)    (1)

where m in Eq. (1) is the sentence length. For each input source sentence, the encoder encodes it into a fixed-length, low-dimensional real-valued vector as the semantic representation of the sentence; conditioned on that representation, the decoder then produces the translation word by word in the target language.

Corpus Preprocessing
Before the machine translation system runs the translation program, the corpus must be converted into a regular form that meets the model's input requirements [6, 7].
[Diagram: layered structure with inputs x_1 … x_n, hidden states h_1 … h_n, and outputs y_1 … y_n]
Fig. 1. Block diagram of the machine translation system
The irregular corpus is preprocessed, covering word segmentation and special-vocabulary handling. The word segmentation process for the initial translated words is shown in Fig. 3. Special-vocabulary processing mainly includes: converting all uppercase letters to lowercase; for English data, where word segmentation only needs to separate words from punctuation marks by spaces, separating the sentence-ending terminator in the original data; and, because named-entity recognition covers both Chinese and English, generalizing special nouns accordingly.

Word Alignment Processing
Word alignment is a basic and important step in machine translation. It is an essential link in many research tasks that use parallel corpora, such as question answering, word sense disambiguation, lexicography, and machine translation itself. Generally speaking, alignment can take several forms at different levels, such as text, paragraph, sentence, phrase, and vocabulary; the goal of this step is to find language fragments that are translations of each other in the parallel corpus [8]. Dictionary-based word alignment rests on the fact that the dictionary contains high-quality word translation information. A fully dictionary-based method has good accuracy for non-empty word alignment, but in actual translation scenarios the context is diverse, translation is flexible, and the coverage of dictionary translations is relatively low.
Fig. 2. Flowchart of the operation of the machine translation system
Dictionary-based word alignment alone therefore cannot meet the requirements, and other methods must be introduced. Some words can be matched to a certain degree without matching 100%, so a fuzzy mechanism is used to match words. The matching process of any two words can be expressed as:

s(E, C) = φ + ϕ
φ = max(s(d, c))                                  (2)
ϕ = (Count(s(d, c) > η) − 1) × 0.1

where s(d, c) is the matching degree of words d and c, max() and Count() are the maximum-value function and the counting function, respectively, and η is the word matching-degree threshold. The matching degree s(d, c) itself is calculated as:

s(d, c) = 2 · (d ∩ c) / (|d| + |c|)    (3)
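A minimal character-level sketch of Formulas (2) and (3) follows; treating each word as a set of characters is an assumption, since the paper does not fix the granularity of the overlap d ∩ c:

    def dice(d, c):
        # Formula (3): s(d, c) = 2·|d ∩ c| / (|d| + |c|), on character sets
        overlap = len(set(d) & set(c))
        return 2.0 * overlap / (len(set(d)) + len(set(c)))

    def fuzzy_match(e, candidates, eta=0.5):
        scores = [dice(e, c) for c in candidates]
        phi = max(scores)                                  # best candidate score
        varphi = (sum(s > eta for s in scores) - 1) * 0.1  # ambiguity adjustment
        return phi + varphi                                # Formula (2)

    print(fuzzy_match("color", ["colour", "couleur"]))  # ≈ 0.99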
Fig. 3. Process flow chart of machine translation word segmentation
Substituting the result of Eq. (3) into Eq. (2) completes the alignment of the source words in the machine translation system.

Phrase Extraction
Phrase extraction is an important step: it is what distinguishes the phrase-based method from the word-based method. Decoding can succeed only if phrases are extracted according to the corresponding rules and a translation table is constructed. In phrase extraction, the target is to find, based on the word alignment results, each phrase in the source sentence and its corresponding target phrase. First, the phrase to be processed is determined in the source language; then the correspondence between each of its words and the target words is searched; finally, it is determined whether the correspondence holds between the target phrase and the source phrase, expanding the correspondence to unaligned (empty) words where needed.
Extracting phrases is only a basic step: phrase pairs cannot all be treated equally, because different phrase pairs differ in rationality, so each phrase pair must be scored. Phrase pair scores are calculated as:

Fr(f, e) = Count(Fr(e, f)) / Σ_{f_l} Count(Fr(e_l, f))    (4)
where e and f are phrases in the target language and the source language, respectively. Although the phrase translation probability can represent the phrase-level correspondence score, some low-frequency phrase pairs may appear when the training data is insufficient: if both phrases occur rarely but co-occur often, the phrase score comes out too high. Therefore, lexicalized weighting information is added to the decision to handle this situation better.

Machine Translation Result Output
Decoding traverses the space of candidate solutions, obtains suitable sentences, and outputs them as translation results. The traversal is a heuristic depth-first search whose behavior depends on the calculation results of the different modules, which determine the next direction until the translation terminator is generated and the search ends. Statistics-based machine translation computes probabilities: given the input source sentence, the translation model produces multiple translation results [9]; each result is scored by evaluation, and the highest-scoring translation is selected as the final model output.

2.2 Obtaining Machine Translation System Load Data
The number of translation tasks the machine translation system is executing at any moment is taken as the system load data. When the server detects the load, it uses load prediction to determine whether the server is overloaded, which avoids errors caused by instantaneous high load. When the server agent detects that k of the past n measurements exceed the load threshold, the load prediction algorithm is triggered. The algorithm is based on time-series forecasting, so the load values detected at the past t moments are processed in time order. The detected load value sequence is:

L(t) = {l_1, l_2, …, l_t}    (5)
An n-order autoregressive model is used to predict the load value at time t + 1:

l_{t+1} = τ_1 l_t + τ_2 l_{t−1} + … + τ_n l_{t−n+1} + δ    (6)
where τ_1, …, τ_n are the time-series parameters and δ is the noise disturbance coefficient of the machine translation system. When a certain resource exceeds the load threshold k consecutive times, the prediction model is triggered to predict the next load value; if the predicted load still exceeds the threshold, virtual machine migration is triggered.
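A minimal sketch of the AR(n) prediction of Formula (6), with illustrative coefficients (the paper does not give fitted values):

    def predict_next_load(history, tau, delta=0.0):
        # Formula (6): l_{t+1} = τ1·l_t + τ2·l_{t−1} + … + τn·l_{t−n+1} + δ
        n = len(tau)
        recent = history[-n:][::-1]  # l_t, l_{t−1}, …, l_{t−n+1}
        return sum(t * l for t, l in zip(tau, recent)) + delta

    loads = [0.61, 0.64, 0.70, 0.78, 0.80]  # hypothetical detected load sequence L(t)
    tau = [0.5, 0.3, 0.2]                   # illustrative AR(3) parameters
    THRESHOLD = 0.75
    if predict_next_load(loads, tau) > THRESHOLD:   # 0.774 > 0.75
        print("overload predicted: trigger virtual machine migration")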
According to the type of overloaded resource, the server agent selects for migration the virtual machine that uses a large proportion of that resource. When the agent manager detects that multiple resources exceed the load threshold at the same time, it sets weights according to the proportions of the server's various resources, converts them into a comprehensive load index, and thereby obtains the real-time load data of the machine translation system.

2.3 Building a Convolutional Neural Network Model
The neuron is the basic unit of a convolutional neural network and the basic information-processing unit in the network. Mathematically, a neuron is equivalent to a many-to-one nonlinear mapping; the neuron model constituting the CNN is shown in Fig. 4.
Fig. 4. Schematic diagram of neuron model
In Fig. 4, x_j is the input value, λ_i is the threshold value, and f_j^h() and y_i represent the transfer function and the output value, respectively. A CNN model is formed by connecting neuron models, and its topology is shown in Fig. 5. As can be seen from Fig. 5, it is a three-layer feedforward network that can process information cooperatively and store information in a distributed, parallel manner. The hidden layer is the middle layer of the CNN and contains the convolution operation; its role is to extract features between the input and output layers, while the adjustment of its weights is a "self-organizing process". The output of any neuron i in the convolutional neural network can be expressed as:

y_i = sgn( Σ_{j=1}^{n} ω_ji x_j − λ_i )    (7)
where ω_ji is the weight between neurons i and j, and sgn() is the step (sign) function. The sigmoid function is used as the activation function of the neuron:

f_j^h(x) = 1 / (1 + e^{−x})    (8)
Determining the number of neurons in each layer of the CNN and the weights between layers completes the construction of the CNN model.
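A minimal NumPy sketch of the neuron of Formulas (7) and (8), with hypothetical inputs and weights:

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))       # Formula (8)

    def neuron_output(x, w, lam):
        net = np.dot(w, x) - lam              # weighted input minus threshold
        return np.sign(net), sigmoid(net)     # Formula (7) step output; sigmoid activation

    x = np.array([0.2, 0.7, 0.1])             # hypothetical inputs x_j
    w = np.array([0.4, 0.3, 0.5])             # hypothetical weights ω_ji
    print(neuron_output(x, w, lam=0.1))       # (1.0, ≈0.56)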
Fig. 5. Convolutional neural network topology diagram
2.4 Measuring Machine Translation System Vulnerability
The load instability coefficient of the machine translation system is set as the vulnerability measurement index. The system load instability coefficient is calculated as:

κ_Instability = Σ_{i=1}^{n_thread} (L_i − L_{i−1}) / L_avg    (9)
where L_i and L_{i−1} are the load values of threads i and i − 1 in the system, n_thread is the total number of threads running in the system, and L_avg is the average system load. With the relevant monitoring data collected and Formula (9) evaluated, a quantitative measurement of the machine translation system's vulnerability is obtained.

2.5 Use Convolutional Neural Network Algorithm to Determine System Load Scheduling Amount
The obtained operating data of the machine translation system and the measured vulnerability result are substituted into the constructed CNN model, and the load scheduling amount is determined by running the CNN algorithm. Figure 6 shows the execution principle of the CNN algorithm. The output of each level of the convolutional neural network model is computed from front to back as follows:

x_Implied = f_j^h(ω_Implied · x_in)
x_out = f_j^h(ω_out · x_Implied)    (10)
Fig. 6. Operating principle diagram of the convolutional neural network algorithm
where x_in, x_Implied, and x_out are the outputs of the input layer, hidden layer, and output layer of the convolutional neural network, respectively. In the forward-propagation process, the correction of the weights of each layer is expressed as:

Δω_1j(n) = κ_con κ_0 x_out + κ_inertia Δω_1j(n − 1)    (11)
where κ_con, κ_0, and κ_inertia are the learning rate, convergence coefficient, and inertia coefficient, respectively. The weight correction during the operation of the convolutional neural network algorithm is then:

ω_ij(n + 1) = ω_ij(n) + Δω_ij(n)    (12)
Combining Formulas (11) and (12) completes the correction of the weights. The above work is repeated until the termination condition is met, and the load scheduling amount of the system is output through the loop iteration of the convolutional neural network algorithm:

ΔL_i = L_i − L_avg    (13)

Thus, the load scheduling amount of the machine translation system is obtained.
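A minimal NumPy sketch of the iteration of Formulas (10)–(13) follows; the hyper-parameters, shapes, and loop length are illustrative, and the update rule is transcribed literally from Formulas (11)–(12):

    import numpy as np

    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    k_con, k_0, k_inertia = 0.1, 0.9, 0.2   # illustrative rate/convergence/inertia values

    x_in = np.array([0.4, 0.9, 0.3])        # hypothetical system load features
    W_h = np.full((4, 3), 0.05)             # input -> hidden weights ω_Implied
    W_o = np.full((1, 4), 0.05)             # hidden -> output weights ω_out
    dW_prev = np.zeros_like(W_o)

    for _ in range(100):                    # repeat until the termination condition
        x_hidden = sigmoid(W_h @ x_in)      # Formula (10), hidden layer
        x_out = sigmoid(W_o @ x_hidden)     # Formula (10), output layer
        dW = k_con * k_0 * x_out + k_inertia * dW_prev  # Formula (11)
        W_o = W_o + dW                      # Formula (12)
        dW_prev = dW

    loads = np.array([0.7, 0.5, 0.9])       # per-thread loads L_i
    print(loads - loads.mean())             # Formula (13): ΔL_i = L_i − L_avg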
2.6 Implement Robust Design of Machine Translation System
Systems generally have defects [10]. Therefore, to further improve performance, a robust controller is installed in the machine translation system as the executive element of the robust design. The structure of the robust controller is shown in Fig. 7, where r(t) is the input of the nonlinear system, y_l(t) is the output of the system, and U(t) is the pseudo-system input containing the u(t − 1) polynomial. The real control input u(t − 1) can be obtained by a polynomial solving method such as the Newton iteration algorithm.
Fig. 7. Structure diagram of robust controller of machine translation system
Among them, the RBFNN is defined as the local controller; the local controller and the Newton iterative solver together are defined as the generalized convolutional neural network controller [10]. The framework of the U-model is fully exploited to estimate the unknown parameters online, and the wide-area controller adjusts the system input u(t − 1) with the estimated parameters. This is a new type of control and identification structure based on the U-model, and many similar convolutional neural networks use this standard control. With its support, the robust design of the machine translation system is realized.
3 Experiment Analysis of Robust Design Effect Test
To test the design effect of the robust design method of the machine translation system based on the convolutional neural network, a number of different types of systems are selected as research objects, and their stability is tested under different operating environments and operating conditions.

3.1 Configure the Research Object of Machine Translation System
A machine translation system based on phrase statistics, one based on a micro-engine pipeline, and one based on paraphrase information are selected as the research objects of the experiment. The selected machine translation systems use an Intel(R) Xeon(R) CPU E5-2660 v4 @ 2.00 GHz as the processor and a Tesla K80 as the graphics card, with Python 3.5.3 and PyTorch 0.4.1 as third-party libraries; system development was completed in this environment. Translation samples are prepared and input into the developed system to obtain the corresponding machine translation results. The operating interface of the translation system is shown in Fig. 8. Mteval_sbp is used as the test tool, with data collected via the website. The training corpus consists of 2 million sentences collected in the laboratory. The benchmark translation model uses gated LSTM units as the basis of the encoder and decoder, with an attention mechanism connecting the decoder to the encoder.

3.2 Input the Operating Parameters of the Convolutional Neural Network Algorithm
The running parameters of the CNN include the initial value of the network's learning rate, the number of neural nodes in each layer, and the number of hidden layers in the network.
Fig. 8. Machine Translation System Operation Interface
A single-layer CNN has high error and low identification accuracy; increasing the number of hidden layers can improve identification accuracy and reduce error. However, this advantage comes with a disadvantage: the network becomes overly complicated, because the training time of the connection weight coefficients increases. If only higher error accuracy is required, it can usually be achieved by increasing the number of hidden-layer nodes, which also makes the training effect of the convolutional neural network easier to observe and adjust. Therefore, increasing the number of hidden-layer nodes should generally be considered before increasing the number of hidden layers. The number of hidden-layer nodes in the CNN is expressed as:

n_y = 0.43vg + 0.12g² + 2.54v + 0.77g + 0.35    (14)

where v and g are the numbers of nodes in the input and output layers, respectively. An initial value is obtained from this calculation, and then a step-by-step pruning method is used: starting from a relatively complex convolutional neural network, hidden-layer nodes are deleted step by step until the network reaches its best; if this fails, the number of hidden-layer nodes is increased. The learning rate determines the change of the connection weight coefficients in each cycle. If the learning rate is too large, the system runs erratically or even breaks down; if it is too small, the training time grows and the convergence rate slows. Although a learning rate that is too small has some negative impact, a small learning rate helps the system avoid being trapped in local convergence of the error function and eventually tend toward the minimum error. Therefore, a small learning rate is usually adopted to ensure the stability of the convolutional neural network.
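As a quick check of Formula (14), for, say, v = 10 input nodes and g = 2 output nodes (illustrative values):

    def hidden_nodes(v, g):
        return 0.43 * v * g + 0.12 * g**2 + 2.54 * v + 0.77 * g + 0.35  # Formula (14)

    print(hidden_nodes(10, 2))  # 36.37 -> start with about 36 hidden nodes, then prune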
3.3 Set the Test Index of Robust Design Effect
The load robustness coefficient and the recovery robustness coefficient are set as the quantitative test indicators of the experiment:

μ_load = L_max / Σ |L_i − L_j|
μ_recovery = 1 − (R_f − R_d) / R    (15)

where L_i and L_j are the load values of threads i and j in the machine translation system; L_max is the maximum thread load of the system; and R_f, R_d, and R are the number of translation tasks interrupted by the system, the number of interrupted tasks recovered, and the total number of translation tasks entered into the system, respectively. The higher the load robustness coefficient and the recovery robustness coefficient, the better the robustness of the system design.

3.4 Experimental Test Process and Result Analysis
The traditional method based on direct state-space theory and the robust design method based on a disturbance observer are set as the comparison methods of the experiment; the proposed design method optimizes the robustness of the machine translation system and completes the robust design. The experiment tests the robustness of the different systems from two aspects, the running scenarios and the operating conditions of the machine translation system: machine translation tasks are run under different attack environments, the running data of the systems are counted, and the robustness evaluation results under different operating scenarios are obtained through the calculation of Formula (15), as shown in Fig. 9. Figure 9 shows intuitively that the higher the attack intensity in the operating environment, the lower the system robustness coefficient. Compared with the traditional robust design methods, the convolutional neural network-based robust design method has higher load robustness and recovery robustness coefficients, that is, a better robustness design effect. This is because the load instability coefficient of the machine translation system is set as the vulnerability measurement index for measuring the system's vulnerability. The final test results are shown in Fig. 10, from which it can be seen that the two quantities are negatively correlated. Through vertical comparison, the average load robustness coefficients of the comparison methods are 0.37 and 0.28, while that of the optimized design method is 0.66, higher than the comparison methods by 0.29 and 0.38, respectively. This proves that the robust design method of the machine translation system based on the convolutional neural network has a better robustness design effect, because the neural network algorithm introduced into the system determines the load scheduling amount and a robust controller is designed; robustness is improved through the combination of these aspects.
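Before turning to the plots, a minimal sketch of Formula (15) with hypothetical thread loads and task counts:

    loads = [0.62, 0.58, 0.65, 0.60]   # hypothetical per-thread loads L_i
    pair_gap = sum(abs(a - b) for i, a in enumerate(loads) for b in loads[i + 1:])
    mu_load = max(loads) / pair_gap    # load robustness coefficient

    R, R_f, R_d = 200, 12, 9           # total, interrupted, recovered tasks (hypothetical)
    mu_recovery = 1 - (R_f - R_d) / R  # recovery robustness coefficient
    print(mu_load, mu_recovery)        # ≈ 2.83 0.985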
[Line chart: system robustness coefficient (0.2–1.0) vs. attack intensity (1–5 dB) for the three methods]
Fig. 9. System robustness test results under different scenarios

[Line chart: system robustness coefficient (0.2–1.0) vs. working-condition parameter (20–100) for the three methods]
Fig. 10. System robustness test results under different working conditions
4 Conclusion
The machine translation system realizes the intelligent transformation of translation work. Applying this technology to translation work can advance cross-language information retrieval technology in China, with both industrial application value and social value. Through the design and application of the convolutional neural network-based robustness design method, the operational stability of the machine translation system under different scenarios and operating conditions is effectively improved, and the application value of the machine translation system in translation work is increased.
Acknowledgement. Social Science Research Planning Project (JJKH20221258SK) of Jilin Provincial Department of Education: A study on improving foreign language skills of the language services industry in Jilin Province.
References 1. Zou, D.F., Hu, Q.B.: Construction of neural machine translation model based on tree-to-string model enhancement. Comput. Simulat. 38(2), 344–347, 476 (2021) 2. Bayatli, S., Kurnaz, S., Ali, A., et al.: Unsupervised weighting of transfer rules in rule-based machine translation using maximum-entropy approach. J. Inf. Sci. Eng. 36(2), 309–322 (2020) 3. Zhu, Z.Q., Wang, S.H., Zhang, Y.D.: ROENet: a ResNet-based output ensemble for malaria parasite classification. Electronics 11(13), 2040 (2022) 4. Sprott, J.C.: Quantifying the robustness of a chaotic system. Chaos 32(2), 1–7 (2022) 5. Zhou, Y.Z., Zhao, J.X., Zhai, Q.Z.: 100% renewable energy: a multi-stage robust scheduling approach for cascade hydropower system with wind and photovoltaic power. Appl. Energy 301(1), 1–12 (2021) 6. Ternero, C., Pastor, G.C.: Bridging the ‘gApp’: improving neural machine translation systems for multiword expression detection. Yearbook Phraseol. 11(1), 61–80 (2020) 7. Kim, S.D., Lee, S.K.: English-Korean machine translation system with the improved ability to resolve linguistic differences by pre- and post-processing. J. Linguist. Sci. 92(3), 151–179 (2020) 8. Pandey, V., Padmavati, D., Kumar, R.: Hindi chhattisgarhi machine translation system using statistical approach. Webology 18(2), 208–222 (2021) 9. Matheus, A., Adriano, P., Fabrício, B.: A comparative study of machine translation for multilingual sentence-level sentiment analysis. Inf. Sci. 512(8), 1078–1102 (2020) 10. Dabre, R., Chu, C., Kunchukuttan, A.: A survey of multilingual neural machine translation. Assoc. Comput. Mach. Comput. Surv. 53(5), 1–38 (2020)
Design of English and American Literature Online Learning System Based on Android Yaping Liang1(B) and Di Qi2 1 Bozhou University, Bozhou 236800, China
[email protected] 2 Fuyang Normal University, Fuyang 236000, China
Abstract. In order to improve the user response rate and maintain the stability of the online learning host, an online learning system for British and American literature based on Android is designed. The learning needs of the connected user objects are analyzed from the perspective of user behavior and use case structure, and a learning behavior model is established on this basis. Using the Java programming language, the operation mode of the technical architecture is debugged, and functional modules at all levels are combined to complete the design of the Android-based online learning system for British and American literature. The comparative experimental results show that with the application of the Android-based online learning system, the user response rate increases to 15.9 bit/ms and the stable running time of the learning host is fully extended, promoting the stability of the online learning host.

Keywords: Android System · British and American Literature · Online Learning System · Use Case Structure · Learning Needs · User Behavior · Programming Language
1 Introduction
With the rapid development of wireless networks, the marketing of 4G, and the decline of smartphone prices, the popularity of Android applications has reached an unprecedented height. Because the Android system launched by Google is open source, mobile terminal devices from major manufacturers have adopted Android as their primary operating system, so applications based on Android have been widely developed and applied. In the field of education, for example, mobile online learning systems based on the Android platform are changing traditional learning methods: students' learning is no longer limited by time and region, and independent learning becomes possible, which suits people who want to use their leisure time effectively and helps students acquire knowledge faster. To promote and popularize online learning, it is necessary to design and develop an online learning system suitable for its users [1]. At present, there is a wide variety of online learning systems on the market, and they have been applied to
a certain extent. Only by selecting a suitable online learning system can learners' learning be promoted most effectively. Online learning systems can be classified in many ways according to function, learning object, and so on. Of course, in the process of learning with an online learning system, problems brought about by the networked environment, such as easily distracted learning and questions not being answered in time, also affect the efficiency of online learning to a certain extent. An online learning system places all kinds of learning resources on network servers, and the whole learning process is carried out with the network as the carrier; communication between students and teachers, and among students, is conducted through the network. Compared with the traditional learning mode, this new online learning mode has some unique advantages:

1. Teaching mode: the online learning system realizes multimedia teaching information. Unlike traditional classroom teaching, which relies mainly on the teacher's lecturing, rich and colorful teaching information such as pictures, sounds, and videos can be added. Online learning content is mostly organized with hyperlinks, which provides a good human-computer interaction experience and makes learning more fun.
2. Learning objects: the resources of the online learning system are stored on network servers and, like other networked resources, can be obtained freely on the Internet; they are open and shared, and anyone connected to the Internet can learn regardless of geographical location.
3. Learning mode: the network provides 24-hour service, learning no longer occupies all of a learner's time, and autonomous learning becomes a reality.
4. Teaching arrangement: the online learning system requires learners to register and log in, so each learner has a personalized learning space in cyberspace and can arrange learning content and schedule independently, facilitating self-management.

On this basis, an online learning system for British and American literature based on Android is designed. First, the learning needs of the connected user objects are analyzed from the perspective of user behavior and use case structure, and a learning behavior model is built on this basis. Finally, using the Java programming language, the operation mode of the technical system architecture is debugged, and the functional modules at all levels are combined to complete the design of the online learning system for British and American literature. The experimental results show that the designed system performs well.
2 Analysis of Learning Needs

Analyzing user learning needs in the online learning system for British and American literature involves defining user behavior, designing the use case structure and modeling learning behavior; this section studies each of these in turn.

2.1 User Behavior

Reading British and American literary works encourages students to explore and think about life independently, and to gain life insights in
reading. In British and American literary works, the author and the work are inseparable: when students read a literary work they also hold a direct dialogue and emotional exchange with the author, feeling the author's thoughts, feelings and ways of thinking through the text [2, 3]. Reading British and American literature thus encourages students to strengthen their emotional communication with the works, savor life, gain insights, experience a full and positive state of life, and comprehensively improve their quality and ability.

Let α and δ denote two unequal English and American literature learning parameters; let Pα and pα denote the user learning indicator and learning feature value under parameter α, and Pδ and pδ those under parameter δ; and let β denote the discrimination condition on user learning behavior. Combining these quantities, the learning ability of a user object in the online learning system for English and American literature can be expressed as

$$I = \frac{\sqrt{P_\alpha}}{\sum_{\alpha=1}^{+\infty}\sum_{\delta=1}^{+\infty}\sqrt{P_\delta + \beta\left(p_\alpha^2 - p_\delta^2\right)}} \tag{1}$$
The online learning system for British and American literature is built on the Android program framework. With the system host fully open, the operation behavior of the learning users who access it involves the following aspects:

(1) The system is mainly provided to students for online study of courses, to assist classroom teaching, deepen mastery of course knowledge, improve students' learning and practical ability, raise their interest in learning, and facilitate mutual learning among students. It should therefore provide functions such as course selection, online learning, online answering, score management and resource sharing.
(2) To serve students' learning better and make it more interactive, the system allows teachers to participate in students' online learning, so that they can follow students' progress and status and offer teaching suggestions and methods in a timely manner. The system should provide course management, test management, teaching resource management and related functions.
(3) To exploit the system fully, it supports not only the traditional teacher-student mode but also independent study for learners outside the school, so it should provide a set of independent learning functions for such users. The system therefore also needs a guest role: any user can register a guest account and use the learning functions of the system's open courses.
(4) The administrator is the super user of the system, mainly responsible for daily maintenance, data backup, security and other work that keeps the system running normally. The main
functions include user management, data management, log management, resource management, and course management. The complete user behavior relationship is shown in Fig. 1.
Fig. 1. Detailed explanation of user behavior relationship
As the lowest-level access objects, student users are subject to the same scheduling and management as guest users in the online learning system for British and American literature, while teacher users and administrator users, as intermediate access objects, are subject only to the direct management of the online learning host.

2.2 Use Case Structure

Through investigation and analysis of the system's user groups and of the security requirements of the information management system, the main use cases of the online learning system can be obtained, including the login, registration, personal information management, my-course, course management, resource management, user management, log management and data management functions. The functions differ by user role, and the main use case diagram of the system, obtained with the object-oriented analysis method, is shown in Fig. 2. Student users have the login, personal information management, resource management, online learning and online examination use cases; teachers have the login, personal information management, course management, resource management and learning effect evaluation use cases; the system administrator has the login, user management, log management and data management use cases; guests have the user registration, password retrieval, online learning and online examination use cases.

Provided the user's learning behavior does not change, the use case structure capability of the British and American literature online learning system can be expressed as

$$O = \chi\left(2I + \frac{1}{u_{\min}} - \frac{U^2}{\varphi\,u_{\max}}\right) \tag{2}$$
Fig. 2. Use Case Structure
Here χ denotes the literary aesthetic coefficient, φ the literary performance characteristic, umin and umax the minimum and maximum of the use case vector, and U the average of umin and umax.

To improve the security of the system, every user must log in with an account and password before using it; unregistered users can register themselves. At login, the system authenticates the user name and password and then provides the service appropriate to the user's type.

2.3 Learning Behavior Modeling

The social reality reflected in British and American literary works helps readers understand that literature with a Western cultural background is an intuitive reflection of social culture and real-world development; the classics in particular are an effective crystallization of human wisdom and civilization, with great appreciative value. British and American literary works directly reflect the cultural background of the West: they are the feelings and thoughts of foreign writers grounded in social reality and their own perception, processed with literary artistry, and they truly reflect the institutions and environment of Western society [4]. By studying British and American literary works, students who are not native speakers of English can directly feel the differences and similarities between Eastern and Western cultures, understand the internal differences between languages from different cultural backgrounds, and appreciate the charm of the English language more intuitively; herein lies the learning value and significance of British and American literature.

In the Android-oriented online learning system for British and American literature, learning behavior is modeled with the activity diagram, one of the graphical tools for modeling the dynamic behavior of a software system in the Unified Modeling Language. An activity diagram describes the process by which participants interact with the system to achieve their goals. In essence it is a kind of flow chart that represents the control flow from one activity to another: it describes the sequence of activities, supports conditional and concurrent behavior, and solves the problem that textual event flows are hard to read and understand.
Learning behavior modeling, also called user learning feature modeling based on English and American literature data, has the property that, under the Android system framework, the larger the cumulative amount of literature data, the stronger the unit performance of learning behavior [5]. In the modeling process, to define the expression form of the literature data accurately, the data sample characteristic and the online learning intensity are solved simultaneously:

$$\varepsilon = \frac{\gamma R^2}{\hat{y}^2}\,O^{-2}, \qquad \varphi = \frac{W\left(1 - \dfrac{\ln\tilde{e}}{\ln e}\right)}{\dot{r}\sqrt{O^{w}}}\quad (\ln\tilde{e} \neq \ln e) \tag{3}$$

In the formula, ε denotes the data sample characteristic, ϕ the online learning intensity, ŷ the real-time learning coefficient of the user object, γ the learning ability definition coefficient, R the total amount of student learning per unit time, ṙ the data sample identification coefficient, w the learning authority held by the administrator, W the unit accumulation of online learning samples, ẽ the literary measure of the data sample, and e the unit performance mean of the literary feature.

On the basis of formula (3), let r1, r2, ..., rn denote the values of n non-zero learning data samples of English and American literature, and let E denote the transmission mean of the learning data samples in the online learning host. Combining these quantities, the learning behavior modeling expression is deduced as

$$Q = \frac{1}{r_1^2 + r_2^2 + \cdots + r_n^2}\exp\left(-\frac{\varepsilon\cdot\varphi}{E^2}\right) \tag{4}$$

Besides the functional requirements of students, teachers, administrators and other users, the online learning system must also satisfy requirements of security, ease of use, maintainability, openness and reliability. For security and a clear division of responsibility, the system needs certain authority management functions, and entering and operating each functional module must require the corresponding authority. The system shall prevent data loss and destruction caused by misoperation or physical damage, prevent illegal users from acquiring web pages and background data, and prevent illegal operations on the system database.
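To make the relation between formulas (3) and (4) concrete, the following minimal Java sketch computes the learning-behavior score Q for a set of sample values. It is an illustration only: the method names and all numerical inputs are assumptions for demonstration, not values taken from the system.

// Minimal sketch of Eqs. (3)-(4): the learning-behavior score Q.
// All inputs are illustrative assumptions, not values from the paper.
public class LearningBehaviorModel {

    // Eq. (3), first line: data-sample characteristic epsilon.
    static double epsilon(double gamma, double r, double yHat, double o) {
        return (gamma * r * r) / (yHat * yHat) * Math.pow(o, -2);
    }

    // Eq. (3), second line: online learning intensity phi (requires ln(eTilde) != ln(e)).
    static double phi(double bigW, double eTilde, double e,
                      double rDot, double o, double w) {
        return bigW * (1 - Math.log(eTilde) / Math.log(e))
                / (rDot * Math.sqrt(Math.pow(o, w)));
    }

    // Eq. (4): learning-behavior score over n non-zero samples r[i].
    static double q(double eps, double phi, double bigE, double[] r) {
        double sumSquares = 0.0;
        for (double ri : r) sumSquares += ri * ri;
        return Math.exp(-(eps * phi) / (bigE * bigE)) / sumSquares;
    }

    public static void main(String[] args) {
        double eps = epsilon(0.8, 120.0, 1.5, 2.0);
        double ph = phi(10.0, 2.7, 2.0, 1.2, 2.0, 3.0);
        double score = q(eps, ph, 50.0, new double[] {1.2, 0.9, 1.7});
        System.out.printf("Q = %.6f%n", score);
    }
}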
3 System Development and Design

To keep the online learning system for British and American literature running smoothly, the Android technical architecture must also be refined with the Java programming language, and on this basis a stable connection environment can be provided for the relevant functional modules.
3.1 Programming Language

Java is a concurrent, class-based, object-oriented programming language deliberately designed with as few implementation dependencies as possible. Its purpose is to let application developers "write once, run anywhere": moving from one platform to another does not require recompiling the code [6]. Let smin and smax denote the minimum and maximum of the English and American literature data coding under Java programs, λ the guiding coding coefficient, S the single coding step value of the literature data samples, and f the coding parameter of data samples under the Android system architecture. With these quantities, combining formula (4), the application expression of the programming language can be defined as

$$A = Q^2\,\frac{\lambda\left(s_{\max} - s_{\min}\right)^2}{f \cdot S} \tag{5}$$

In Android application programming, interface control, event processing, network applications, data storage and I/O are equally important aspects, and all have mature processing methods in the Android system architecture. For an application on a mobile device, the interface is the key to a first impression, and the quality of the interface design directly affects whether users choose the application. The Android development tools provide a set of simple and direct interface layout and design methods: developers design UI interfaces in XML files, and through simple operations can add interactive buttons, lay out positions, and adjust the format of displayed text and decorative pictures [7]. The layout can then be called directly from Java code, which makes it easier to build a polished interface. At the same time, separating the front-end interface design from the related logic control code greatly eases later maintenance and modification, and better reflects the principle of the three-tier architecture. Since the coding of English and American literature data samples in Java conforms to the three-tier architecture mode, the coding parameter f of data samples must satisfy

$$f \le \frac{d_1 + d_2 + d_3}{\iota_1 \cdot \iota_2 \cdot \iota_3} \tag{6}$$

where d1, d2 and d3 denote the coding vectors in the first, second and third layers, and ι1, ι2 and ι3 the real-time coding coefficients of the British and American literature data samples in those layers.

Convenience and communication are the most prominent characteristics of an online learning system for English and American literature. Under increasingly developed network conditions, the Android system provides a complete set of network application interfaces: in Android applications, support for Web services is achieved through remote calls. Android also has a built-in HttpClient, through which HTTP requests can be sent and HTTP responses obtained easily, simplifying communication with the server.
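As an illustration of this workflow, the sketch below fetches a course list from a server over HTTP. It is a minimal example under stated assumptions: the endpoint URL is a hypothetical placeholder, and java.net.HttpURLConnection is used here, although the built-in HttpClient mentioned above supports the same request-and-response pattern.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

// Minimal sketch of the client fetching learning material from the server.
public class CourseClient {
    public static String fetchCourseList(String serverUrl) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(serverUrl).openConnection();
        conn.setRequestMethod("GET");
        conn.setConnectTimeout(5000);          // fail fast on a poor mobile network
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
            StringBuilder body = new StringBuilder();
            String line;
            while ((line = in.readLine()) != null) body.append(line);
            return body.toString();            // e.g. a list of course videos
        } finally {
            conn.disconnect();
        }
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical endpoint for the literature course catalogue.
        System.out.println(fetchCourseList("http://example.com/api/courses"));
    }
}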
3.2 Technical System Architecture

The three-tier Android architecture is the mainstream design framework for developing the application software of the online learning system for British and American literature. Based on the idea of modular programming, the three-tier architecture has become a standard method of module division that decomposes software project requirements and reduces the coupling between modules. Its most outstanding advantage is that changes in customer requirements during development no longer overturn previous work or force the whole program to be modified: when requirements change, only one or more of the three layers need to be revised, sometimes with just a few lines of code [8]. This greatly improves the maintainability of the code, reduces the coupling between modules, and facilitates cooperation among developers working at different layers, who can develop in parallel simply by following the corresponding interface standards; the entire application is finally assembled by combining all the layers.

The three-tier architecture generally comprises the presentation layer, the business logic layer and the data access layer, and its purpose is to realize the software design idea of "high cohesion and low coupling". The relationship between the three layers is shown in Fig. 3. Their functions are as follows:

1. Presentation layer: provides users with an interactive interface and shows them the data content, that is, what they see.
2. Business logic layer: handles the key business, transfers data between the presentation layer and the data access layer, and operates on specific problems, that is, it drives the data layer and processes the business logic of the data.
3. Data access layer: adds, deletes, modifies and queries data; this layer operates on the database directly.

In practice, the three layers of the Android system can be distinguished as follows: (1) Data access layer: judged by whether the literature data samples in the data layer involve logical processing; each of its methods only completes operations on data files, with no other processing. (2) Business logic layer: responsible for operating on the data layer, that is, the logical processing of the literature data samples obtained from it. (3) Presentation layer: the outermost layer, visible to the accessing learners; it mainly displays the requested literature data samples and receives the data entered by the user.
Fig. 3. Three-tier architecture of the Android system
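The following minimal Java sketch illustrates the layering just described; all class and method names are illustrative assumptions rather than the system's actual code. Each layer depends only on the layer directly beneath it, which is what allows one layer to change without disturbing the others.

import java.util.Arrays;
import java.util.List;

class CourseDao {                       // data access layer: CRUD only, no logic
    List<String> findAllTitles() {
        // stands in for a real database query
        return Arrays.asList("Hamlet", "Moby-Dick", "Pride and Prejudice");
    }
}

class CourseService {                   // business logic layer
    private final CourseDao dao = new CourseDao();
    List<String> titlesForStudent(String studentId) {
        // illustrative business rule: guests only see the first open course
        return studentId == null ? dao.findAllTitles().subList(0, 1)
                                 : dao.findAllTitles();
    }
}

public class CourseView {               // presentation layer: display and input only
    public static void main(String[] args) {
        CourseService service = new CourseService();
        service.titlesForStudent("s001").forEach(System.out::println);
    }
}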
3.3 Functional Modules

The functional modules of the English and American literature online learning system consist of three parts: the client, the management end and the server.

(1) Client module. Learning users can log in and watch English and American literature learning materials: after logging in to the Internet, the client extracts video information from the server database and displays it. Users can also transmit operation information to the server through the displayed information, and the server stores it in the database [9].
(2) Management-end module. For the administrators' convenience, a server URL linker can be designed on the computer client. It mainly realizes administrator login, video type management, video list display, video detail viewing, user comment viewing and other management functions.
(3) Server module. The server-side functions mainly include administrator video type management, video input, video list display, video detail viewing and comment input. When a user logs in to the Internet with the client, the client connects to the system server, which then performs the following functions: it verifies the legitimacy of the user at login; when the user views a video, it accepts the user's request and queries the database; and, according to the request, it returns the query results to the user's client in the form of learning materials through the Android system [10]. When the administrator logs in through a computer, the browser connects to the server, the server displays the corresponding interface, and the administrator performs the relevant operations.

Generally, Android development offers two event-handling mechanisms. One is listener-based event handling, which binds specific event listeners to the various operations in the Android interface so as to provide the corresponding feedback when an event occurs; the other is callback-based event handling, which overrides specific callback methods in
Android components or in activities [11]. Developers only need to override the related methods, regardless of which interface components call them; a minimal sketch of both mechanisms is given below, after formula (7).

Let ϑ denote the sample connection condition of the British and American literature data; its value satisfies

$$\vartheta \in [1, +\infty) \tag{7}$$
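The sketch below illustrates both event-handling mechanisms in a hypothetical activity; the layout and widget resources (R.layout.lesson, R.id.play_button) and the helper methods are assumptions for demonstration and presuppose an Android project.

import android.app.Activity;
import android.os.Bundle;
import android.view.KeyEvent;
import android.view.View;
import android.widget.Button;

public class LessonActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.lesson);   // XML layout kept separate from the Java logic

        // Mechanism 1: listener-based handling - bind a listener to one control.
        Button play = (Button) findViewById(R.id.play_button);
        play.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                startPlayback();           // hypothetical helper
            }
        });
    }

    // Mechanism 2: callback-based handling - override the component's callback.
    @Override
    public boolean onKeyDown(int keyCode, KeyEvent event) {
        if (keyCode == KeyEvent.KEYCODE_BACK) {
            saveLearningProgress();        // hypothetical helper
        }
        return super.onKeyDown(keyCode, event);
    }

    private void startPlayback() { /* start the course video (placeholder) */ }
    private void saveLearningProgress() { /* persist reading progress (placeholder) */ }
}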
Let μ denote the carrying capacity of the Android architecture for the learning host, ġ the functional characteristic value of the British and American literature data samples, H the average cumulative quantity of those samples in the functional module, θ the display coefficient of the British and American literature data samples, and ĵ the query characteristic of the data samples under the Android architecture, with the condition ĵ ≠ 0 always holding. With these quantities, formulas (6) and (7) yield the connection expression of the functional modules of the online learning system for British and American literature as

$$L = \frac{\displaystyle\sum_{\mu=1}^{+\infty}\sqrt{\left(\dot{g}\times\vartheta\right)^2 - H^2}}{A\displaystyle\sum_{\mu=1}^{+\infty}\theta\cdot\hat{j}} \tag{8}$$
The online learning system serves users mainly through a large number of British and American literature data samples; these data are resources stored in the network and accessible to users. Information management is achieved mainly through management of link addresses in the Android system. Data can also be classified by type to help users select British and American literature materials more effectively, saving the time spent choosing interesting videos and improving learning efficiency.
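A minimal sketch of such classification is given below; the Resource class and its fields are illustrative assumptions. Grouping resources by type once, at load time, lets the client filter the catalogue without repeated server queries.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ResourceIndex {
    static class Resource {
        final String title, type, url;
        Resource(String title, String type, String url) {
            this.title = title; this.type = type; this.url = url;
        }
    }

    // Build a type -> resources index so users can browse by category.
    static Map<String, List<Resource>> groupByType(List<Resource> all) {
        Map<String, List<Resource>> index = new HashMap<>();
        for (Resource r : all) {
            index.computeIfAbsent(r.type, k -> new ArrayList<>()).add(r);
        }
        return index;
    }

    public static void main(String[] args) {
        List<Resource> all = new ArrayList<>();
        all.add(new Resource("Hamlet lecture 1", "video", "http://example.com/v1"));
        all.add(new Resource("Unit 2 quiz", "test", "http://example.com/t2"));
        System.out.println(groupByType(all).keySet());
    }
}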
4 Example Analysis

Following the principles shown in Fig. 4, data samples of English and American literature were selected for the experiment, input into a Windows host, and the values of the relevant experimental indicators were recorded. Three control methods were chosen: the Android-based online learning system for British and American literature, an online learning system based on intelligent coding [12], and an online learning system based on a neural network [13]. Each method was used in turn to control the Windows host, and when the input amount of literature data samples reached the experimental standard, the changes in the user response rate and in the stable running time of the online learning host were recorded. Both indicators affect the applicability of the British and American literature online learning system: leaving other interference conditions aside, the faster the user response rate and the longer the
Fig. 4. Experimental principle
stable running time of the host, the stronger the application adaptability of the British and American literature online learning system. Table 1 records the changes of the user response rate during the experiment.

Table 1. User Response Rate

Sample input of British and American literature (×10⁹ Mb) | Android-based system (bit/ms) | Intelligent-coding-based system (bit/ms) | Neural-network-based system (bit/ms)
1.0 | 13.5 | 12.7 | 10.4
2.0 | 13.8 | 12.7 | 10.6
3.0 | 14.1 | 12.7 | 10.8
4.0 | 14.6 | 12.7 | 11.0
5.0 | 15.0 | 12.7 | 11.3
6.0 | 15.3 | 12.7 | 11.3
7.0 | 15.6 | 12.7 | 11.3
8.0 | 15.8 | 12.7 | 11.2
9.0 | 15.9 | 12.7 | 10.9
As Table 1 shows, with increasing input of British and American literature data samples, the user response rate of the Android-based online learning system rises markedly, reaching a maximum of 15.9 bit/ms by the end of the experiment. The user response rate of the intelligent-coding-based system stays essentially constant throughout the experiment, but at a lower level, 3.2 bit/ms below the former's final value. The user response rate under
the neural-network-based online learning system first increases, then stabilizes, and finally shrinks, with a maximum of 11.3 bit/ms, still well below the result of the Android-based system. Figure 5 shows the stable running time of the online learning host under the Android-based, intelligent-coding-based and neural-network-based online learning systems.
Fig. 5. Stable operation duration of online learning host
Analysis of Fig. 5 shows that when the sample input of British and American literature reaches 6.0 × 10⁹ Mb, the stable running time of the application host under the Android-based online learning system peaks at 138 min; when the input reaches 9.0 × 10⁹ Mb, the stable running time under the intelligent-coding-based system peaks at 100 min; and when the input reaches 4.5 × 10⁹ Mb, the stable running time under the neural-network-based system peaks at 90 min. From these results it can be concluded that the intelligent-coding-based and neural-network-based systems are relatively weak at improving the user response rate, so they cannot effectively maintain the operating stability of the application host while users study British and American literature online, whereas the Android-based online learning system improves the user response rate effectively and thereby promotes the stability of the application host during the online learning of British and American literature data.
5 Conclusion

The study of British and American literature serves two purposes: language acquisition and cultural acquisition. From the perspective of the nature of literature, its value for cultural quality education far exceeds its value for language education: British and American literature is a mirror reflecting the history and culture of the English-speaking nations. The significance of studying it lies in deepening the knowledge learners acquired at the basic stage, enhancing their understanding of Western literature and culture, and improving their pragmatic competence through reading and analyzing literary works. Learners must therefore continually strengthen their cultural awareness in order to learn and use the language successfully; amid today's multicultural development this also improves intercultural communication ability and helps avoid communication barriers. The study of English and American literature should thus proceed with cultural awareness and gradually expand and deepen.

The Android-based online learning system for English and American literature lets students choose the course, content, time and place of learning independently, and it evaluates students' learning effects, analyzing and comparing their mastery of each course so as to raise their enthusiasm for learning. Developed with the Android system and Java technology, the system supports both a desktop client and a smartphone client, making it easier for students to learn anytime and anywhere. This learning mode is convenient, consumes little energy and few material resources, covers a wide range of content, and can meet the needs of most people. Performance analysis and product testing show that the learning system satisfies the autonomous learning needs of all kinds of users, achieves the desired goal, and has definite application value. Nevertheless, the Android application framework still has limitations, such as network bandwidth, the limited storage space and configuration of servers, and the fact that mobile terminal performance determines the user experience; system development for different learning objects, needs and characteristics requires further study.
References

1. Tabassum, F., Akram, N., Moazzam, M.: Online learning system in higher education institutions in Pakistan: investigating problems faced by students during the COVID-19 pandemic. Int. J. Web-Based Learn. Teach. Technol. 17(2), 1–15 (2022)
2. Mu, W.: Research on the construction of multimedia assisted classroom education mode of British and American literature course. J. Phys.: Conf. Ser. 1915(4), 42–53 (2021)
3. Falke, C.: Essentially the greatest poem: teaching new ways of reading American literature. Univ. Gothenburg Depart. Lang. Literat. 20(2), 283–301 (2021)
4. Chou, J.S., Truong, D.N., Le, T.L.: Interval forecasting of financial time series by accelerated particle swarm-optimized multi-output machine learning system. IEEE Access 8(1), 14798–14808 (2020)
5. Yang, S.H., Wang, Y.Y., Lai, A.F., et al.: Development of a game-based e-learning system with augmented reality for improving students’ learning performance. Int. J. Eng. Educ. 2(1), 1–10 (2020) 6. Ashiku, L., Dagli, C.: Network intrusion detection system using deep learning. Proc. Comput. Sci. 185(1), 239–247 (2021) 7. Yan, W.S., Funabiki, N., Kuribayashi, M., et al.: A proposal of android programming learning assistant system with implementation of basic application learning. Int. J. Web Inf. Syst. 16(1), 115–135 (2020) 8. Sahboun, Y., Razak, N.A.: Adoption and usage of learning management system technologies among EFL students: factors and issues. Solid State Technol. 63(6), 11519–11531 (2020) 9. Saputro, B., Saerozi, M., Siswanta, J., et al.: Validation of learning management system (LMS) of E-problem-based learning based on scientific communication skill and plagiarism checker. Tech. Rep. Kansai Univ. 62(6), 3097–3113 (2020) 10. Kushwaha, P.S., Mahajan, R., Attri, R., et al.: Study of attitude of B-School faculty for learning management system implementation an Indian case study. Int. J. Dist. Educ. Technol. 18(2), 52–72 (2020) 11. Wang, H., Wang, Z.: Design of simulation teaching system based on modular production and processing. Comput. Simulat. 39(4), 205–209 (2022) 12. Wang, D., Wei, B.: A small-sample image classification method based on a Siamese variational auto-encoder. CAAI Trans. Intell. Syst. 16(2), 254–262 (2021) 13. Qu, S., Wang, Y.: Research on habit search based on the block-chain storage of learning resources under neural network. China-Arab States Sci. Technol. Forum 9, 116–118 (2021)
Design of College English Reading Teaching Feedback System Under Cloud Platform

Di Qi1 and Yaping Liang2(B)

1 Fuyang Normal University, Fuyang 236000, China
2 Bozhou University, Bozhou 236800, China
[email protected]
Abstract. In order to collect and analyze English teaching feedback information in a timely and convenient way, adjust college English teaching methods and means, maximize teaching quality, and achieve the goal of teaching monitoring, this research designs a feedback system for college English reading teaching under a cloud platform. The system framework is based on the currently popular B/S (browser/server) structure and is built with the MVC (model, view, controller) application development and maintenance pattern. The system database is designed and its tables are given. To facilitate centralized management, the task-processing software procedure is designed, comprising six system function modules. The results show that the feedback delay index of the designed system stays above 0.9, which demonstrates the efficiency of the system's task processing. As the number of access requests grows, the number of concurrent users carried by the system remains at a relatively stable level, which demonstrates that the system's load-bearing performance is strong and that it can process multiple request tasks at the same time.

Keywords: Cloud platform · College English · Reading teaching · Feedback system
1 Introduction

The trend of informatization and globalization makes the importance of the English language increasingly prominent. In social life English is one of the most important carriers of information dissemination and is widely used in many fields. By population it is the world's third largest language after Chinese and Spanish, with more than 300 million people currently using it as their mother tongue. In terms of scope of use, three quarters of the information on the Internet is written in English and more than 70% of the world's e-mail is written or addressed in English; more than 60% of the world's radio programs are conducted in English; and in international politics, commerce, culture, trade, transportation and other fields English is the common communication tool [1]. It can fairly be called a "world language". Against this background, many countries have made English teaching an important part of citizens' quality education in their basic education development strategies,
and placed it in a prominent position. To meet the country's and society's needs for talent in the new era, China has raised English to a strategic height in education: English-related courses continue from primary school through middle school to university, and at the university level the National College English Test Band 4 and 6 (CET-4 and CET-6) measures college students' English proficiency. To a large degree, at every level of education and in every field of study, English is an unavoidable subject. College English is a compulsory basic course for college students of non-foreign-language majors. Guided by modern foreign language teaching theories, its main content covers English language knowledge and applied skills such as listening, speaking, reading, writing and translation, while also addressing English learning strategies and cross-cultural communication. After years of development and accumulation it has become an indispensable part of China's current higher education system and has cultivated a large number of versatile talents who can communicate in English for the country and society.

Grasping and handling teaching feedback information in time is an important link in teaching management and is vital for improving teaching quality. At this stage higher education pays ever more attention to teaching quality, and more and more schools even regard it as the key means of improvement. To collect and analyze teaching feedback more accurately, the feedback collected must be accurate and timely, so that teaching quality can be improved to a greater degree and the goal of teaching monitoring achieved [2]. Teaching feedback lets teachers see their own strengths and weaknesses and correct them more accurately and promptly, which plays a great role in further improving teaching quality and teaching level. Modern technology has developed and been applied ever more widely, and with it the collection of teaching feedback has become more convenient; using modern science and technology to develop a teaching feedback system can further raise the teaching level.

Reference [3] proposes resource scheduling for a piano teaching system in the Internet of Things based on mobile edge computing: it analyzes the resource scheduling problem of delay-sensitive applications, sets up a resource scheduling model based on the spatio-temporal difference of edge container load in a multi-cluster environment, and puts forward a cross-cluster scheduling strategy whose performance is analyzed by simulation. The proposed strategy can schedule delay-insensitive applications while the system is running, achieve multi-cluster cooperative scheduling, and make the load between clusters more balanced. Reference [4] proposes an interactive English reading teaching system based on a hybrid communication network. Its hardware consists of seven modules, among which the course management module mainly uses the file system and a multimedia attribute database to manage the data uploaded by teachers.
Students can use the hyperlink location field to mark the connection position of media, so that they see richer learning pages, realizing the interactive design of the teaching system. The software part uses the covering method or the error method to construct a learning evaluation model for comprehensive evaluation of students' learning state. This method can fully understand and evaluate the learning
state and knowledge points of students and shorten the system's response time. However, the systems above suffer from poor teaching quality and cannot effectively collect and analyze English teaching feedback. Using a dedicated management information system and software development platform to manage English reading teaching feedback scientifically, and to realize networked management of teaching feedback, is the most effective way to solve these problems and is the inevitable trend of teaching feedback and teaching quality evaluation. It is therefore urgent to develop a college English reading teaching feedback system under a cloud platform to improve the management of college English teaching effectively. The system is based on the currently popular B/S (browser/server) structure and uses the MVC (model, view, controller) application development and maintenance design pattern; DBRIC is selected as the data-checking system for MongoDB. In constructing the network teaching system on the SSH framework, the logical separation of the hierarchical structure is realized according to the MVC design pattern, the database is designed, and the task-processing software procedure is designed.
2 Technology and Architecture Related to System Development

This system design is based on the currently popular B/S (browser/server) structure and adopts the MVC (model, view, controller) application development and maintenance design pattern. It also uses the principles and methods of software engineering to carry out systematic analysis, design, implementation and testing in combination with the practical characteristics of open English teaching. The overall technical route of the system is mature and follows software engineering development specifications strictly. A brief account follows.

2.1 B/S Mode

The traditional C/S structure can no longer keep up with the growing scale and complexity of management information systems, especially in a multi-user environment with distributed network databases. To make the network-based English teaching assistant management system more flexible to use, the system adopts the currently popular B/S browser-and-server structure to meet user needs. C/S (client/server) software is divided into two tiers, server and client, each also acting as an input and output device. The server usually runs on a high-performance PC, workstation or minicomputer, while the client also has some data processing and storage capability, and many jobs can be submitted to the server after being processed by the client. Only by allocating the data and logic processing of the application software reasonably between server and client can the balance between server computation and network traffic be achieved during network transmission and the system's maximum processing efficiency be realized. Software developed in this mode is mostly limited to LAN applications, and special technologies must be installed for remote access. Because the number of connections and the data traffic limits vary from server to server, C/S software suits only a limited number of users. In addition, when the software needs to be upgraded or improved, its
maintenance cost is also high. At present, most in-house and ERP (financial) software products of domestic companies belong to this kind of structure. To overcome the shortcomings of the C/S structure, the B/S (browser/server) mode emerged with the rise of Internet technology. In the B/S structure, the user interface of the application is implemented entirely in the Web server, the business logic entirely in the application server, and the client needs only a browser to process business; it is a new software system construction technology. This structure has become the preferred architecture for today's application software, and most companies currently use it.

2.2 MVC Design Pattern

MVC stands for model-view-controller, that is, model + view + controller. It is a popular design pattern today whose idea is to separate the program's input and output control, data processing and data presentation, using a dedicated part to manage the program's business logic. In this way the business logic does not have to change frequently, ensuring the flexibility and stability of the system. Model: holds all data, state and program logic. View: obtains data and state directly from the model and presents them visually. Controller: sits between the view and the model, receiving the user's input and feeding it to the model after preliminary analysis. The structure is shown in Fig. 1.
Fig. 1. MVC structure pattern
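As a concrete illustration of this separation, the minimal Java sketch below wires a model, a view and a controller together; the class names are illustrative assumptions, not the system's actual code. Note how the controller never renders anything and the view never touches business logic.

import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

class FeedbackModel {                            // model: data, state, notification
    private final List<Consumer<String>> views = new ArrayList<>();
    void register(Consumer<String> view) { views.add(view); }
    void setFeedback(String text) {              // a state change notifies every view
        for (Consumer<String> v : views) v.accept(text);
    }
}

class FeedbackController {                       // controller: interprets user input
    private final FeedbackModel model;
    FeedbackController(FeedbackModel model) { this.model = model; }
    void submit(String rawInput) {
        model.setFeedback(rawInput.trim());      // preliminary analysis, then update
    }
}

public class FeedbackMvcDemo {
    public static void main(String[] args) {
        FeedbackModel model = new FeedbackModel();
        // the view: renders whatever the model publishes
        model.register(text -> System.out.println("View renders: " + text));
        new FeedbackController(model).submit("  Reading unit 3 was too hard  ");
    }
}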
This pattern separates the model from the view and greatly benefits the flexibility and reusability of the system. It has the following advantages:
Low coupling: high cohesion and low coupling are the requirements that software engineering places on program structure, and MVC fulfills them. MVC separates the view interface from the business logic; since the three parts are mutually independent, changing any one of them does not require modifying the others. The same model can support a variety of different view representations, and a new view can be created from an existing one; when the model data change, the corresponding views are notified and adjust themselves.

Reusability: in MVC the model can be copied to a new platform independently of the view.

High efficiency: during interface development it is only necessary to consider how to design a friendly, easy-to-operate interface, and during model development only the business logic and data maintenance, which improves development efficiency.

2.3 Logical Architecture Design of the Cloud Platform

The cloud platform includes the following logical modules:

• A basic access and basic logic module, mainly used to process access and requests from the on-site and optical network environment.
• A routing balance system, mainly used to allocate routing paths and share the access load.
• The main bearing framework of the .NET Core system.
• Network acceleration and a network firewall.
• A system monitoring and management module.
• A MongoDB-based data storage system; DBRIC is selected here as the data-checking system for MongoDB.
• Code hosting and an image warehouse.

2.4 System Framework

The system developed in this paper follows the logical architecture of the cloud platform. Combined with the B/S structure, MVC is a design pattern that simplifies the development and maintenance of applications and can be divided into four layers: the business logic layer, the presentation layer, the domain module layer and the data persistence layer. Therefore, in building the network teaching system on the SSH framework, the logical separation of this hierarchy must be realized according to the MVC design pattern. The basic business flow of the system is as follows: Struts serves as the infrastructure of the overall system, responsible for receiving requests and transmitting responses through JSP pages to realize the interactive interface, and for dispatching business-logic Actions; the Hibernate framework supports the persistence layer, controls business jumps, processes the data requested by DAO components and returns the results; Spring manages the transactional operations of Struts and Hibernate. The Spring IoC container that manages the service components provides the business model component and the data access object (DAO) component
to Action, and provides transaction processing. According to the system requirements, the concrete model is built with the object-oriented analysis method; the DAO classes implemented with the Hibernate architecture realize the conversion and access between Java classes and the database, and the Hibernate DAO implementation is given. The persistence layer, business logic layer, view layer and model layer are independent and completely isolated from one another, and the functions of the network teaching system are implemented according to the MVC design pattern: changes to the client user interface barely touch the model layer, and changes to the database persistence layer do not affect the design and implementation of the view-layer interface, so the system can evolve according to its scalability requirements and new business modules can conveniently be added while remaining compatible with the old ones. The layered SSH framework keeps the coupling between the system's levels low, which aids cooperation within the project team. The code structure of the college English reading teaching feedback system developed in this paper is clear, and the structure of the entire software application can be grasped through simple configuration, which improves development efficiency and also makes later maintenance convenient.
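The following sketch shows the DAO role in this stack with the classic Hibernate API; it assumes a mapped Feedback entity and a hibernate.cfg.xml configuration, and in the real system Spring would inject the SessionFactory and manage the transactions declaratively rather than by hand.

import java.util.List;
import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.Transaction;
import org.hibernate.cfg.Configuration;

// Minimal sketch of a Hibernate-backed DAO in the SSH stack described above.
public class FeedbackDao {
    private final SessionFactory factory =
            new Configuration().configure().buildSessionFactory();

    public void save(Object feedback) {
        Session session = factory.openSession();
        Transaction tx = session.beginTransaction();
        try {
            session.save(feedback);              // persists the mapped entity
            tx.commit();
        } catch (RuntimeException e) {
            tx.rollback();
            throw e;
        } finally {
            session.close();
        }
    }

    @SuppressWarnings("unchecked")
    public List<Object> findAll() {
        Session session = factory.openSession();
        try {
            return session.createQuery("from Feedback").list();  // HQL, not SQL
        } finally {
            session.close();
        }
    }
}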
3 Database Design

A database is a warehouse for data storage: a collection of large amounts of organized, shared data stored in computers for a long time. Database design is a data management technique that classifies, organizes, encodes, stores, retrieves and maintains the data obtained in the requirements analysis stage. It means constructing an optimal database model and establishing the database and its application system for a given application environment, so that data can be stored effectively and the various application needs of users (processing needs and information needs) can be met. Generally speaking, database design has two senses: in the broad sense it is the design of the database and of the database application system, that is, of the entire management information system [5]; in the narrow sense it is the design of the database schemas at all levels and the establishment of the database itself. The goal of database design is to provide an information infrastructure and an efficient operating environment for application systems and users. Database design methods fall broadly into two types: the first focuses on information needs while also considering processing needs and is called the data-oriented approach; the second focuses on processing needs while also considering information needs and is called the process-oriented approach. The data-oriented method based on information needs clearly reflects the internal relationships of the data and can meet the application needs of current as well as potential users. The database design of the college English reading teaching feedback system under the cloud platform adopts a data-oriented
database design method based on information demand. It abstracts the basic data obtained in the requirements analysis stage, designs the corresponding data tables, and then establishes the relationships between the tables according to the actual operating process of the university's Academic Affairs Office, thus forming the system's database design. The database design follows the standardized design method and, along the whole process of database design and development, is divided into six stages: requirements analysis; conceptual structure design; logical structure design; physical structure design; database implementation; and database operation and maintenance [6].

Table 1. Database tables

Data table | Field | Data type | Key
Course information table | CourseID | int(50) | primary key
Course information table | CourseName | varchar(50) | no
Course information table | CourseCode | varchar(50) | no
Course information table | CourseTextbook | varchar(50) | no
Course information table | CourseDescription | varchar(50) | no
Operation table | ID | int | primary key
Operation table | title | varchar | no
Operation table | link | varchar | no
Operation table | date | datetime | no
Operation table | type | char | no
Operation table | course code | varchar | no
Operation table | user site | varchar | no
Operation table | course note | char | no
Learning record | LearningID | int(10) | primary key
Learning record | UserID | int(5) | foreign key
Learning record | TextID | int(4) | foreign key
Learning record | LearningStartTime | datetime | no
Learning record | LearningEndTime | datetime | no
Learning record | LearningSpeed | int(4) | no
Learning record | LearningAnswerRec | text | no
Learning record | LearningScore | int(3) | no
Learning record | LearningCount | int(1) | no
The design of the physical database structure is the process of transforming the logical structure of the database into an optimal physical structure, which determines the final storage structure and access mode of the data model. Through summarizing and analyzing the logical design until the third normal form is reached, the background database of the network teaching system, named English teaching, is created along with its main tables (MySQL 5.0.45 is used to create the database and tables). The design of the data tables is based on the relational model, but for convenience of system management the system combines the user information of students, teachers and administrators into one user information table; some of the tables are shown in Table 1.
4 Realization of System Function Modules

The college English reading teaching feedback system under the cloud platform installs the technical system on the hardware. To facilitate centralized management of components, the software procedures for task processing, the database and information release are all handled by the server, reducing the workload on the user side [7]. The teaching feedback system, based on the B/S network structure with the background database at its core, takes serving users as its goal and arranges course resources reasonably, such as uploading course videos and processing and maintaining test questions, providing assistance and support for English reading teaching. The system's functions comprise six parts, as shown in Fig. 2.
Fig. 2. System function flow chart (user registration and login with identity check, followed by the autonomous learning, reading teaching, achievement feedback and evaluation, teaching management, resource management and communication modules, all backed by the database)
4.1 Autonomous Learning Module

The autonomous learning module is the key to students' English learning. By function it divides into two parts: course management and special exercises. The course management module has four parts: course selection, course learning, class management and statistical analysis. After registering and logging in, students can choose courses and then start learning them. A student user can choose to add a course and enter the course interface, where the learning content arranged by the teacher is visible; clicking the link of a specific content title opens the corresponding material. The default course interface includes: a course management panel, covering learning group settings, viewing of student performance, and statistics on course learning activities; learning activities, where relevant learning content and activities are added to the course and listed in the learning activity area so that users can quickly reach the relevant types of content; and learning notes, where students record their learning experience. To study a course, the student can click its link in the course list on the desktop and then select courseware to read or download. After the student selects a course and courseware, the system retrieves the student's history of reading that courseware [8]. The special exercises cover five parts: vocabulary and grammar, listening training, reading comprehension, writing, and translation. In the autonomous learning module students can not only select courses to study but also choose special exercises targeting their weak points in listening, speaking, reading, writing and translating, according to their own background, English foundation, knowledge structure and interests, so as to raise their English level.

4.2 Reading Teaching Module

The main purpose of the reading teaching module is to let teachers publish course content, update the test question bank and assign English reading tasks, and to let students take part in reading exercises, unit tests and final exams. The module therefore involves three sub-modules: reading learning, teaching business and reading tests. The reading learning sub-module divides into extracurricular reading tasks and classroom reading tasks, and the learning results under both are recorded in the database. The business flow of extracurricular reading tasks is the process by which teachers conduct extracurricular autonomous reading teaching of college English: the teacher first assigns the English reading tasks to be checked in the next class according to the teaching arrangement, and before or during the next class checks and records each student's performance according to whether the student has met the task requirements. If the student is qualified, the actual performance is recorded; otherwise the student is recorded as unqualified and asked to complete the reading task again until all reading tasks are finished. The recorded results form the reading teaching results. The business flow of classroom reading tasks is the process by which teachers assign and evaluate
classroom English reading tasks. On the one hand, it evaluates students' regular performance; on the other hand, it also evaluates students' English reading learning. First, after completing other teaching tasks, the teacher assigns reading practice tasks to students in the classroom. After finishing the reading in class, students submit their work to the teacher, who checks its completion. The students' achievements are recorded and included in the formative evaluation of the course. The reading test sub-module can be further divided into unit tests and final tests, both of which can include objective and subjective questions.

4.3 Performance Feedback and Evaluation Module

The score feedback management module is mainly intended to provide teachers with the function of marking students' test questions. The results of automatic marking are recorded in the database for score analysis and evaluation. The performance feedback management module involves four sub-modules: automatic marking, formative evaluation, summative evaluation and learning analysis. The automatic marking sub-module has two parts: automatic and manual. The automatic evaluation sub-module is designed for the objective question types in the test and works by comparing answers with the pre-entered standard answers and scores. The manual scoring sub-module covers two question types: short-answer questions and blank-filling questions. By default, the system automatically compares the information entered by students with the standard answers; if identical, full marks are given, and if inconsistent, no marks are given and manual scoring is prompted. The formative assessment sub-module is designed for the learning process in online college English reading teaching, assessing students' online learning time and unit test scores. On the one hand, students' regular achievements can be recorded; on the other hand, their reading level can be evaluated. During the course, the teacher prepares extracurricular or in-class English reading assignments; after completing the reading tasks, students submit them to the teacher or prepare to be checked by the teacher, who is responsible for checking and recording their completion. Finally, the students' achievements in the learning process are recorded in the score registration form and included in the formative evaluation. The summative evaluation sub-module is divided into a mid-term test and a final exam; the scores of the two parts are calculated automatically by the system according to a preset proportion and reported to teachers and administrators. Teachers prepare the final examination question bank in advance before the end of the course and organize a unified examination after the course ends (or in the classroom before it ends); students submit their answers after completing the test. Teachers evaluate the answers and record the scores. Students who fail to meet the test requirements are required to take the test again until they reach the standard. Finally, the students' scores in the learning process and in the final test are summarized to form the course scores.
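The default auto-marking rule described in this module (an answer identical to the standard answer gets full marks; otherwise zero marks plus a manual-scoring prompt) can be sketched in a few lines. The function and variable names below are illustrative assumptions, not the system's actual code.

# Minimal sketch of the exact-match auto-marking rule described above.
def auto_mark(student_answer, standard_answer, full_marks):
    # Identical answer: full marks; otherwise zero and flag for manual review.
    if student_answer.strip() == standard_answer.strip():
        return full_marks, False          # (score, needs_manual_review)
    return 0, True                        # prompt manual scoring

score, needs_review = auto_mark("in spite of", "in spite of", 2)
print(score, needs_review)                # 2 False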
The learning analysis sub-module is used to evaluate performance during the learning process. Its content includes the evaluation of learning attitude, participation, homework
and testing. Through data collection and analysis, the learning process can be evaluated as accurately as possible, and learning problems can be summarized and fed back to teachers and students in a timely manner.

4.4 Teaching Management Module

The teaching management module provides teachers with a simple and convenient interface for online teaching. Teachers have functions beyond those of learners: they can use the online learning module to organize teaching activities, guide learners, upload materials for learners' reference, and put forward learning requirements and evaluations for each student. The main task of this module is online learning, where students and teachers communicate synchronously and asynchronously. Users with teacher identity log in to the teacher's teaching activity system. If online teaching is required, a class must first be created, the maximum number of students in the class set, and the students of the teacher's own class selected. A teacher can run at most three online learning classes at the same time, and the system is organized by class. In the online learning part, teachers can independently prepare electronic teaching plans, online textbooks, teaching videos, case databases, test question databases, reference databases, etc., and upload, publish, modify and delete these contents according to the actual teaching situation for students to learn. In this module, teachers register and create classes according to their own needs, then organize online teaching content, upload materials, hold online Q&A sessions, and statistically analyze students' learning.

4.5 Resource Management Module

The characteristics of English reading teaching determine that the teaching process requires a great deal of practice, so a relatively complete set of resources should be provided to teachers to support teaching in many ways, for example text courseware, pictures, audio and question banks. The resource management module includes two parts: uploading resources and downloading resources. The upload function mainly lets teachers upload the required teaching videos, courseware, English listening techniques, chapter cultural backgrounds, audio and other materials to the system for students to browse and download. Teachers upload homework and test questions to the question bank, which can be called upon when assigning homework and course tests. Teachers can also upload courseware; when they do, the system automatically checks whether the file format is correct (see the sketch after this section). The download function mainly refers to the teaching materials used by students in pre-class preview and course learning, including reading techniques, vocabulary and grammar materials.
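The automatic format check on uploaded courseware can be sketched as a simple extension whitelist. The allowed extensions below are illustrative assumptions; the actual system may accept a different set.

# Minimal sketch of the upload format check described above.
ALLOWED = {".ppt", ".pptx", ".pdf", ".doc", ".docx", ".mp3", ".mp4"}

def check_courseware_format(filename):
    # Accept the file only if its extension is in the allowed set.
    dot = filename.rfind(".")
    ext = filename[dot:].lower() if dot != -1 else ""
    return ext in ALLOWED

print(check_courseware_format("unit3_reading.pptx"))  # True
print(check_courseware_format("notes"))               # False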
4.6 Communication Module

The communication module is mainly designed to make it convenient for students to discuss with teachers or classmates the problems they encounter during independent learning, so that problems can be solved quickly. This module provides a chat room, a BBS forum, a message board and e-mail communication.
5 System Function Test

5.1 System Test Objective

System testing mainly ensures the quality and reliability of the system; it is the last link in the system development process. The process of executing a program in order to find errors is called testing, and a successful test is one that discovers previously undetected errors. The objective of system testing is to find all kinds of potential errors at the least cost in labor and time. Generally speaking, testing a management information system includes hardware testing, software testing and network testing. The hardware and network tests can be performed against specific performance indicators, so "system testing" usually refers to the software testing of the system. For the software testing of this system, the design of test cases and test plans focuses on three aspects. The first is to confirm that all functional modules of the software can be implemented according to the results of the software requirements analysis, and to ensure the quality of the implementation; that is, all functional modules of the system must be realized and operate as expected, and the business processing flow of each functional module must be correct and conform to the system design specifications. The second is to further improve the system according to the information confirmed in the first aspect and the feedback obtained during testing, and to evaluate the status of each development stage during the improvement, so as to ensure the reliability of the system. The third is to combine the software development life cycle with the test results to find errors or defects at each stage of the development process, so as to further improve the quality of software development.

5.2 System Test Plan

During system testing, the testing method for the target system is chosen according to the various testing techniques available. Depending on whether the tester needs to understand the internal structure of the system under test, tests can be divided into black-box tests and white-box tests.

White-box test. White-box testing is a kind of dynamic testing, also commonly called logic testing or structure
testing. White-box testing is equivalent to putting the program under test into a transparent box: the tester tests the program with full knowledge of its structure and processing flow. White-box testing generally determines whether the actual state of the program is consistent with the expected state by checking the test results at various test points.

Black-box test. Black-box testing is also a kind of dynamic testing, commonly referred to as I/O-driven testing or functional testing. It is equivalent to putting the program under test into a completely opaque box: the tester works without knowing the internal structure of the program or its processing flow. Black-box testing is mainly used to discover software defects by observing the inputs and outputs of the software from the user's perspective.

According to the software testing theory above, the testing scheme of the college English reading teaching feedback system on the cloud platform is roughly as follows. The first step is to conduct synchronous testing of each module, interface and process in the design. Second, white-box testing is applied to each individual module; this stage focuses on unit testing while verifying the internal independence of each module. Third, after completing the unit tests, the modules are integrated and the fundamental test of the system is carried out; this step mainly verifies whether the system meets customer needs and whether its functions can be realized. Fourth, after the above three steps, black-box testing is performed on the system: teachers or students with software development experience are invited to test as much as possible, and errors are corrected in a timely manner. Details such as fault tolerance, security, data legitimacy and data integrity need to be tested at this stage.

5.3 Sample of English Reading Teaching Materials

The samples of English reading teaching materials selected in the system test are as follows:

'It's silly, isn't it, Ellen,' he muttered, 'that I have worked all my life to destroy these two families, the Earnshaws and the Lintons. I've got their money and their land. Now I can take my final revenge on the last Earnshaw and the last Linton, I no longer want to! There's a strange change coming in my life. I'm in its shadow. I'm so little interested in daily events that I even forget to eat and drink. I don't want to see those two, that's why I don't care if they spend time together. She only makes me angry. And he looks so like Catherine! But everything reminds me of Catherine! In every cloud, in every tree I see her face! The whole world reminds me that she was here once, and I have lost her!' … I can't continue like this! I have to remind myself to breathe – almost to remind my heart to beat! I have a single
wish, for something my whole body and heart and brain have wanted for so long! Oh God! It's a long fight! I wish it were finished!'

—Heathcliff (Wuthering Heights)

5.4 System Test Indexes

There are two main test indexes for this system, used to analyze performance in the white-box test and the black-box test respectively. For the former, the feedback delay index is used as the evaluation index; for the latter, the number of concurrent users is used.

(1) The feedback delay index is calculated as follows. First, calculate the time difference between the beginning of reading and the end of feedback:

$T = T_1 - T_2 \qquad (1)$

where $T$ represents the time difference, $T_1$ the reading start time, and $T_2$ the end time of feedback. Then calculate the average time difference:

$\bar{T} = \frac{1}{n}\sum_{i=1}^{n} T_i \qquad (2)$

where $\bar{T}$ represents the average time difference, $T_i$ the time difference of the i-th user, and $n$ the number of users. Finally, the feedback delay index is calculated as:

$S = \frac{\sum_{i=1}^{n} \left(T_i - \bar{T}\right)^2}{n} \qquad (3)$

where $S$ represents the feedback delay index, which ranges from 0 to 1; the closer to 1, the better.

(2) Number of concurrent users. The number of concurrent users is the number of users that the system can carry and satisfy simultaneously. It is calculated as:

$H = \max \sum_{i=1}^{m} f_i \qquad (4)$

where $H$ represents the number of concurrent users (the greater the value, the better), $f_i$ represents the i-th user who successfully received feedback, and $m$ represents the number of users who successfully received feedback.
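A minimal sketch of how the two indexes in Eqs. (1)–(4) could be computed from logged timestamps is given below; the log format and all names are illustrative assumptions, not the paper's implementation.

# Computing the feedback delay index (Eqs. 1-3) and the number of
# concurrent users (Eq. 4) from logged data; names are illustrative.
def feedback_delay_index(start_times, end_times):
    """Per-user time differences, their mean, and the index S of Eq. (3)."""
    diffs = [t1 - t2 for t1, t2 in zip(start_times, end_times)]  # Eq. (1) as printed
    n = len(diffs)
    mean = sum(diffs) / n                                        # Eq. (2)
    return sum((t - mean) ** 2 for t in diffs) / n               # Eq. (3)

def concurrent_users(success_flags):
    """Eq. (4): count users whose feedback was successfully received."""
    return sum(1 for ok in success_flags if ok)

# Example: three users, reading start vs. feedback end times in seconds.
print(feedback_delay_index([10.0, 12.0, 11.0], [9.2, 11.1, 10.3]))
print(concurrent_users([True, True, False]))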
Fig. 3. Feedback Delay Index
5.5 Results and Analysis

(1) Feedback delay index. As can be seen from Fig. 3, with the designed system in operation, the feedback delay index remains above 0.9, while the feedback delay indexes of the two systems from the literature remain below 0.85, which demonstrates the task processing efficiency of the designed system.

(2) Number of concurrent users.
Table 2. Number of concurrent users

Number of access requests    Number of concurrent users
100                          100
200                          200
300                          300
400                          397
500                          498
600                          593
700                          692
800                          793
900                          892
1000                         991
It can be seen from Table 2 that as the number of access requests increases, the number of concurrent users carried by the system remains at a relatively stable level without a sharp decline. This demonstrates that the system has strong load-bearing performance and can process multiple request tasks at the same time.
6 Conclusion

At present, English teaching faces unprecedented challenges: in the teaching reforms carried out in many colleges and universities, the credits and class hours of English courses are being compressed, and reading is bound to be affected. It is difficult to strike a balance between the limited classroom teaching time and the large amount of reading input required in language learning. An effective way must be found to ensure that both the "quality" and "quantity" of students' English reading meet the requirements of language acquisition. Therefore, a feedback system for college English reading teaching on the cloud platform is designed. Testing shows that the system has good task processing performance and concurrency performance and can be applied to actual English reading teaching. However, the designed feedback system cannot yet generate feedback intelligently or adjust itself adaptively. Future work will focus on the automatic generation and self-adaptive adjustment of feedback, further improving the degree of intelligence of the system and realizing its automatic operation.
References

1. Kariapper, R., Samsudeen, S.N., Fathima, S.: Quantifying the impact of online educational system in teaching and learning environment among the teachers and students. Solid State Technol. 63(6), 12118–12132 (2020)
2. Sun, J.: Research on resource allocation of vocal music teaching system based on mobile edge computing. Comput. Commun. 160(2), 342–350 (2020)
3. Yu, X.: Resource scheduling for piano teaching system of internet of things based on mobile edge computing. Comput. Commun. 158(99), 73–84 (2020)
4. Yang, L.: Design of interactive English reading teaching system based on hybrid communication network. Int. J. Cont. Eng. Educ. Life-Long Learn. 30(4), 460 (2020)
5. Wang, M.: Design of step-by-step teaching system for English writing based on cloud network technology. Int. J. Cont. Eng. Educ. Life-Long Learn. 30(4), 428 (2020)
6. Tian, G., Darcy, O.: Study on the design of interactive distance multimedia teaching system based on VR technology. Int. J. Cont. Eng. Educ. Life-Long Learn. 31(1), 1 (2021)
7. Wójcik, K., Piekarczyk, M.: Machine learning methodology in a system applying the adaptive strategy for teaching human motions. Sensors 20(1), 314 (2020)
8. Zhang, Z., Ren, Y.: Function matching of cloud manufacturing resources based on text semantic similarity. Comput. Simulat. 38(12), 172–175, 340 (2021)
Smooth Switching Control Method for Parallel and Off Grid of Distributed Photovoltaic Power Grid Based on Deep Reinforcement Learning

Xinran Liu1(B), Wenyu Liu1, Lu Liu1, Haishan Zhou1, Yudan Liu1, and Yanfa Xu2

1 State Grid Liaoning Marketing Service Center, Shenyang 110000, China
[email protected] 2 Department of Information Engineering, Shandong Vocational College of
Science and Technology, Weifang 261053, China
Abstract. The parallel and off-grid switching of a distributed photovoltaic power grid causes sudden changes in voltage and current, which is a key factor affecting its stable operation. Therefore, research on a smooth parallel and off-grid switching control method for the distributed photovoltaic power grid based on deep reinforcement learning is proposed. The deep reinforcement learning DQN algorithm is used to build the energy management model of the distributed photovoltaic power grid and explore its characteristics and laws; on this basis, an in-depth analysis of the transient phenomena of parallel and off-grid switching is carried out. Based on the PQ and VF control principles, the grid-connected controller and the off-grid controller are designed, and a smooth parallel and off-grid switching control strategy is formulated. Implementing this strategy achieves smooth parallel and off-grid switching control of the distributed photovoltaic power grid. The experimental data show that the minimum values of the sudden change coefficients of voltage and current obtained by the proposed method are 0.1 and 0.2 respectively, which fully proves that the proposed method has a better parallel and off-grid switching control effect. Keywords: Parallel and Offline Distributed Photovoltaic Power Grid · Smooth Switching · Deep Reinforcement Learning · Control Strategy · Island Detection
1 Introduction

The independent photovoltaic power generation system has a serious problem of day-night imbalance. The photovoltaic array can convert solar energy into electric energy only when there is sunlight during the day and generates no electric energy at night. Photovoltaic power generation is also affected by natural factors such as light and season, and its output power fluctuates sharply [1]. If a region's power supply network contains only an independent photovoltaic power generation system, the quality of the load power supply in that region is extremely poor, and there is the possibility
of power failure at any time, which seriously affects work and life. If the independent photovoltaic power generation system is directly integrated into the distribution network, the power quality of the entire distribution network supply area will be reduced, and the regional distribution network may even collapse, affecting the entire large power grid. When the distributed photovoltaic power grid is not equipped with energy storage, the ideal control effect cannot be achieved due to the limited output of the distributed energy itself. If a traditional microgrid controller is applied to the distributed photovoltaic power grid, it cannot adapt to the photovoltaic output characteristics because it was not designed for this grid, resulting in an unstable control effect: the voltage and current in the grid change greatly during switching [2]. Reference [3] proposes an improved Gaussian filter based on a phase-locked loop (PLL) for power grid monitoring and seamless transfer control, used to improve the power quality of a microgrid with photovoltaic battery energy storage (PV-BES). Reference [4] controls the connection of multiple PV strings and multiple inverters based on a photovoltaic DC switch topology. After energy storage is added to the controller of the distributed photovoltaic power grid, the day-night imbalance of photovoltaic power generation can be solved by timely charging and discharging. The controller manages the distributed power supply module and the energy storage module at the same time. During grid-connected operation, the energy-storage-based controller needs to keep the output power of the source-storage module unchanged; during off-grid operation, the controller needs to adjust the output power of the source-storage module to follow the power fluctuations in the network. In addition, the controller needs to complete state synchronization before and after the switching of the distributed photovoltaic power grid. If the charging and discharging control of the energy storage device is improper, the controller will still perform poorly. Therefore, the controller of the distributed photovoltaic power grid is always considered the part prone to problems and the key to its parallel and off-grid switching technology. For these reasons, research on a smooth parallel and off-grid switching control method for the distributed photovoltaic power grid based on deep reinforcement learning is proposed.
2 Research on Smooth Switching Control Method of PV Grid Parallel and Off Grid

2.1 Establishment of Distributed Photovoltaic Grid Energy Management Model

In order to improve the smoothness of the parallel and off-grid switching control of the photovoltaic grid, the first step is to build the energy management model of the distributed photovoltaic grid and explore its characteristics and laws, laying a solid foundation for the subsequent transient analysis of parallel and off-grid switching [5]. To simplify the calculation, the AC side of the distributed photovoltaic grid is set to consider the photovoltaic power generation system and the load. The DC side includes batteries and a hydrogen-based energy storage system composed of electrolyzers,
hydrogen storage devices and fuel cells. The AC side and the DC side are connected through AC/DC converters. The main parameters of the various equipment are shown in Table 1.

Table 1. Parameters of Distributed PV Grid Equipment

Equipment                 Photovoltaic power generation   Battery   Hydrogen-based energy storage
Peak power/kW             200                             10        10
Service life/years        20                              20        20
Maximum capacity/kW·h     –                               20        1500
Efficiency/%              20                              90        60
In the distributed photovoltaic power grid, the final energy source is photovoltaic power generation, and its characteristic law conforms to the formula (1): Pmax = Pt × αz
(1)
In formula (1), $P_{max}$ represents the maximum power output of photovoltaic power generation; $P_t$ represents the solar radiation power received by the photovoltaic panel in real time; $\alpha_z$ represents the conversion efficiency of photovoltaic power generation.

The battery is mainly used as a short-term energy storage device with high charging and discharging efficiency, maintaining the real-time supply-demand balance. Its characteristic law conforms to formula (2):

$R_{t+1} = R_t + P_c \alpha_c d_c - P_f \alpha_f d_f \qquad (2)$

In formula (2), $R_t$ and $R_{t+1}$ represent the battery capacity at times $t$ and $t+1$ respectively; $P_c$ and $P_f$ represent the charging and discharging power; $\alpha_c$ and $\alpha_f$ indicate the charging and discharging efficiency of the battery ($d_c$ and $d_f$ can be read as the corresponding charging and discharging durations).

Hydrogen-based energy storage is mainly used as a long-term energy storage device with low charging and discharging efficiency and low peak power. However, it can store energy for a long time through electrolysis, mainly to balance the energy imbalance between seasons. Its characteristic law conforms to formula (3):

$\hat{R}_{t+1} = \hat{R}_t + \hat{P}_c \beta_c d_c - \hat{P}_f \beta_f d_f \qquad (3)$

In formula (3), $\hat{R}_t$ and $\hat{R}_{t+1}$ represent the capacity of the hydrogen storage device at times $t$ and $t+1$ respectively; $\hat{P}_c$ and $\hat{P}_f$ represent the power of the electrolyzer and the fuel cell; $\beta_c$ and $\beta_f$ indicate the efficiency of the electrolyzer and the fuel cell.

Since the total load in the distributed photovoltaic grid is small, it is greatly affected by random factors, and the load curve fluctuates obviously over time; this will not be elaborated further [6].
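A minimal sketch of the storage-state updates in Eqs. (2) and (3) is shown below, assuming d_c and d_f are the charging/discharging durations of the interval; all names and the default efficiency values (taken from Table 1) are illustrative.

# Storage-state updates of Eqs. (2)-(3); the duration reading of d_c/d_f
# is an assumption, and names are illustrative, not the paper's code.
def battery_step(r, p_c, p_f, d_c, d_f, a_c=0.9, a_f=0.9):
    """Eq. (2): next battery capacity after charging/discharging."""
    return r + p_c * a_c * d_c - p_f * a_f * d_f

def hydrogen_step(r, p_c, p_f, d_c, d_f, b_c=0.6, b_f=0.6):
    """Eq. (3): next hydrogen-store capacity (electrolyzer / fuel cell)."""
    return r + p_c * b_c * d_c - p_f * b_f * d_f

# One-hour step: charge the battery at 5 kW, no discharge.
print(battery_step(r=10.0, p_c=5.0, p_f=0.0, d_c=1.0, d_f=0.0))  # 14.5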
Based on the above analysis of the components of the distributed photovoltaic power grid, a distributed photovoltaic grid energy management model is built by applying the deep reinforcement learning DQN algorithm, as shown in Fig. 1.
Fig. 1. Schematic diagram of distributed photovoltaic grid energy management model
As shown in Fig. 1, the model fully demonstrates the process of data flow and information interaction, which can be divided into four parts: data generation, caching and sampling; neural network design and calculation; gradient calculation and parameter update; and action generation and selection [7].

(1) Data generation, caching and sampling. In state $s_t$ at a given time, when the environment receives an action $a_t$, a reward value $\chi_t$ is generated according to the Markov decision process and the next state $s_{t+1}$ is obtained, producing the data tuple $(s_t, a_t, \chi_t, s_{t+1})$. This tuple is stored in the experience cache, and arrays are randomly sampled from the cache when needed, so as to reduce the correlation between the data and facilitate the research on smooth parallel and off-grid switching control of the PV grid;

(2) Neural network design and calculation. The neural networks in the DQN algorithm are the Q network that calculates the value function and a target Q network of the same structure. The Q network accepts the current state and calculates the Q value from the data sampled from the experience buffer, and its parameters are copied to the target Q network periodically. In this way, the model that calculates the target Q value stays fixed for a period of time, which reduces the volatility of the model;

(3) Gradient calculation and parameter update [8]. The current Q value $Q(s_t, a_t; \vartheta)$ and the target Q value $\chi_t + \gamma \max Q(s_{t+1}, a_{t+1}; \vartheta)$ are calculated by the Q network and the target Q network respectively, where $\vartheta$ represents the learning rate and $\gamma$ the discount factor of the deep reinforcement learning
method. The gradient is calculated with the mean square error function and the parameters of the Q network are updated, realizing the iteration of the Q network;

(4) Action generation and selection. In the DQN algorithm, actions are generated through the $\max Q(s_{t+1}, a_{t+1}; \vartheta)$ calculation. Since multiple arrays are sampled in the same batch, multiple actions are calculated. In the action selection process, a greedy exploration strategy is generally used: the best action is selected with a greater probability, and actions are selected randomly with a smaller probability to ensure sufficient exploration.

The main parameters of the deep reinforcement learning DQN algorithm are set according to the operating characteristics of the distributed photovoltaic power grid, as shown in Table 2.

Table 2. Main Parameters of DQN Algorithm

Parameter name       Symbol   Value
Iterations           ϒ        1000
Batch scale          –        32
Cache capacity       O        100000
Greed coefficient    ε        0.0001
Discount factor      γ        0.9
Learning rate        ϑ        0–1
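The four steps above map directly onto a small amount of code. The sketch below uses the Table 2 parameter roles (cache capacity, batch scale, greed coefficient ε, discount factor γ, a learning rate in (0, 1)), but substitutes a linear Q function for the paper's neural network and random transitions for the grid environment; it is an illustrative reconstruction, not the authors' implementation.

# Minimal DQN sketch: experience cache, online and target Q networks,
# mean-square-error update, epsilon-greedy selection. Illustrative only.
import random
import numpy as np

STATE_DIM, N_ACTIONS = 4, 3
GAMMA, EPSILON = 0.9, 0.0001     # discount factor and greed coefficient (Table 2)
LR, CACHE_CAP = 0.01, 100000     # learning rate in (0, 1), cache capacity (Table 2)

q_w = np.zeros((N_ACTIONS, STATE_DIM))   # online Q network (linear stand-in)
target_w = q_w.copy()                    # target Q network (periodic copy)
cache = []                               # experience cache

def q_values(w, s):
    return w @ s                         # Q(s, a) for every action a

def select_action(s):
    # Epsilon-greedy: best action with high probability, random otherwise.
    if random.random() < EPSILON:
        return random.randrange(N_ACTIONS)
    return int(np.argmax(q_values(q_w, s)))

def train_step(batch_size=32):           # batch scale from Table 2
    for s, a, r, s2 in random.sample(cache, min(batch_size, len(cache))):
        target = r + GAMMA * np.max(q_values(target_w, s2))  # target Q value
        td_err = target - q_values(q_w, s)[a]                # squared-error gradient
        q_w[a] += LR * td_err * s

state = np.random.rand(STATE_DIM)
for step in range(1, 501):               # dummy interaction loop
    a = select_action(state)
    r, nxt = np.random.rand(), np.random.rand(STATE_DIM)
    cache.append((state, a, r, nxt))
    cache[:] = cache[-CACHE_CAP:]        # bound the cache capacity
    train_step()
    if step % 50 == 0:
        target_w = q_w.copy()            # copy online weights to target net
    state = nxt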
The above process completed the construction of the energy management model of the distributed photovoltaic grid and the analysis of its components, providing support for the subsequent transient analysis of parallel and off-grid switching.

2.2 Transient Analysis of Parallel and Off Grid Switching of Distributed Photovoltaic Power Grid

Based on the above energy management model, an in-depth analysis of the transient phenomena of parallel and off-grid switching of the distributed photovoltaic power grid is carried out, in preparation for the subsequent design of the parallel and off-grid controller. In the process of switching the distributed photovoltaic power grid from grid connection to island mode, various transient phenomena occur, mainly instability of voltage, frequency and power, and the control of the grid differs between the two modes. Therefore, necessary measures must be taken to ensure smooth switching and stable operation. When the distributed photovoltaic power grid switches from grid-connected operation to island operation, the energy storage controller switches in a timely manner from three-loop control to the double-loop operating mode. Since the filtered capacitor voltage loop and the filtered inductor current loop remain unchanged in the
two operating modes, it can ensure the smooth and fast mode switching process of the system. When the load changes, it is necessary to further study how to ensure the power balance. As the energy storage device generally uses the battery, the energy density of the battery is high, but the dynamic response, power density and cycle life are far lower than those of the super capacitor [9]. It can be seen that the storage battery is an energy storage device, which is used to supplement the power shortage. The super capacitor has a fast response. In order to reduce the transient phenomenon in the switching process for a short time, the super capacitor uses VF control to quickly support the voltage and frequency of the distributed photovoltaic power grid system. This is very suitable for the switching of distributed photovoltaic power grid from grid connection to island operation mode, that is, during the switching process, the voltage and frequency support are quickly responded by the super capacitor, while the main power supply can be undertaken by the battery when the island operates stably to compensate for the power shortage. This not only avoids the performance defects of using one energy storage alone, but also avoids the additional configuration of power or capacity to achieve smooth switching when using a single energy storage, thus reducing the cost. In the distributed photovoltaic power grid, the energy storage unit is composed of energy storage components, rectifier bridge and inverter bridge, and control system. When the distributed photovoltaic grid is connected to the grid, the energy storage unit can absorb excess energy for storage: when the distributed photovoltaic grid is operating in island mode, the dynamic response speed of the distributed photovoltaic grid needs to be improved by controlling the output energy of the energy storage device, which is used to adjust the active and reactive power balance of the distributed photovoltaic grid, and maintain the constant voltage and frequency of the distributed photovoltaic grid, Ensure the stable operation of distributed photovoltaic power grid. PQ control is adopted for micro power supply to make the energy storage device in charging standby state when connected to the grid, and improved VF droop control is adopted for the battery when off grid to ensure power balance, thus maintaining the stability of voltage and frequency in the distributed photovoltaic grid. When the supercapacitor is connected to the grid and turned into an island, the VF control that tracks the grid voltage is used to temporarily provide voltage and frequency reference for the distributed photovoltaic grid. When the distributed photovoltaic power grid is operating in isolated islands, its voltage will deviate from the voltage of the large power grid due to the droop control effect. If the grid is connected to the grid after direct reclosure, huge impulse current may be caused, causing equipment damage and potential safety hazards. Therefore, before the grid connection of the distributed photovoltaic power grid, certain pre synchronization control measures must be taken to ensure the synchronization of the voltage of the distributed photovoltaic power grid and the voltage of the large power grid. The pre synchronization unit is used to control the output voltage of the inverter to track the external voltage, so as to reduce the impact of the parallel reclosing process and ensure the smooth completion of the parallel connection. 
It includes amplitude tracking and phase (frequency) synchronization, also realized by adjusting voltage and frequency [10].
The above process completes the analysis of the transient phenomenon of the parallel and off grid switching of the distributed photovoltaic power grid, and provides a basis for the subsequent design of the parallel and off grid controller. 2.3 Design of Distributed Photovoltaic Grid Parallel and Off Grid Controller Based on the above transient analysis results of parallel and off grid switching of distributed photovoltaic power grid, the grid connection controller and off grid controller are designed to provide a tool for the implementation of smooth parallel and off grid switching control of distributed photovoltaic power grid. The essence of PQ control principle applied to grid connected controller is to realize independent control of active and reactive power through coordinate transformation and decoupling control. In the distributed photovoltaic power grid, the PQ control of distributed generation and energy storage can ensure that the active and reactive power transmitted by the distributed photovoltaic power grid remains unchanged. The PQ control principle is shown in Fig. 2.
Fig. 2. PQ Control Schematic Diagram
As shown in Fig. 2, assuming that the PQ control is stable at a point, the photovoltaic plus energy storage operation is in the set state, that is, the photovoltaic power output P and Q are Pref and Qref respectively, and remain unchanged. When PQ control is applied to grid connected operation mode, the support voltage and frequency of the system are provided by the large power grid, which will not change arbitrarily, but will only change slightly, and the output P and Q will only change slightly around Pref and Qref (the area is limited in the figure). For photovoltaic power supply, in order to ensure constant output power, when the optical storage output is less than the set power, the energy storage releases electrical energy, and when the optical storage output is greater than the set power, the energy storage absorbs power. For the power module of grid connected distributed photovoltaic power grid, the photovoltaic power supply under PQ control outputs a fixed power.
When the load power in the grid equals the set power, the internal load of the grid is balanced and the power is self-sufficient. When the load power in the grid is greater than the set power, there is a power gap in the grid, and the external distribution network supplies power to the distributed photovoltaic grid. When the load power in the grid is less than the set power, the distributed photovoltaic grid generates surplus power, which can be transmitted to the external distribution network.

PQ control generally adopts double closed-loop control. The three-phase instantaneous voltage $U_{abc}$ and current $I_{abc}$ of the control point are measured, and the three-phase quantities a, b, c are converted into $U_d$, $U_q$ and $I_d$, $I_q$ in the d-q coordinate frame through the Park transformation. The output power $P$ and $Q$ of the control point are then calculated and provided to the outer-loop power control module. The outer-loop power control module compares $P$ and $Q$ with the set reference powers $P_{ref}$ and $Q_{ref}$, and obtains the current reference values $I_{dref}$ and $I_{qref}$ required by the inner-loop current control through PI regulation. The inner loop compares $I_d$, $I_q$ with $I_{dref}$, $I_{qref}$; after PI regulation, coaxial voltage compensation and cross-coupling compensation, it produces the voltage control signals $U_{sd}$, $U_{sq}$. These signals are then inverse-Park-transformed into a three-phase voltage modulation wave signal and transmitted to the IGBT control terminal of the converter, thus realizing the control function of the entire controller. The PI module of the outer-loop controller obtains the current reference values from the power differences, as in formula (4):

$I_{dref} = \left(\delta_{pd} + \frac{\delta_{id}}{S}\right)\left(P_{ref} - P\right), \quad I_{qref} = \left(\delta_{pq} + \frac{\delta_{iq}}{S}\right)\left(Q_{ref} - Q\right) \qquad (4)$

In formula (4), $\delta_{pd}$, $\delta_{id}$, $\delta_{pq}$ and $\delta_{iq}$ represent the auxiliary coefficients in the d-q frame; $S$ is the reference standard factor. The inner-loop control link is regulated by PI, compensated by the coaxial voltage and the cross-coupling compensation, and outputs the voltage control signals $U_{sd}$ and $U_{sq}$, as in formula (5):

$U_{sd} = \left(\delta_{pd} + \frac{\delta_{id}}{S}\right)\left(I_{dref} - I_d\right) + U_d - I_q, \quad U_{sq} = \left(\delta_{pq} + \frac{\delta_{iq}}{S}\right)\left(I_{qref} - I_q\right) + U_q - I_d \qquad (5)$

The off-grid controller applies the VF control principle, that is, a control strategy that regulates the output voltage and frequency of the converter and maintains them at the corresponding reference values. With this control method, the distributed power supply can change its output power to adapt to changes in the load power of the system, and the system voltage and frequency stay stable near the reference values, maintaining stable operation of the system. The VF control principle is shown in Fig. 3.

Fig. 3. Schematic Diagram of VF Control

As shown in Fig. 3, VF control operates stably at point a. At this time, photovoltaic energy storage operates in the set state, that is, the photovoltaic power keeps the bus voltage and frequency stable near the reference values $U_{ref}$ and $F_{ref}$, fluctuating only within the acceptable range. When the internal load power of the power grid fluctuates, the
VF control moves away from point a. Assuming that the load power inside the power grid increases, the voltage and frequency inside the power grid decrease. The VF controller maintains stable operation at point b by increasing the power supply, so that the voltage and frequency inside the power grid remain unchanged. In the off-grid operation mode, the main control power supply is generally under VF control, which is equivalent to the balance node of a traditional distribution network system [11]. VF control adopts the same double closed-loop structure as PQ control, and the inner-loop current control is identical: the $U_{sd}$ and $U_{sq}$ signals are finally obtained and, after signal conditioning, realize the control of the entire controller. The difference is that the outer loop obtains the current reference values $I_{dref}$ and $I_{qref}$ from voltage instead of power, so it is called outer-loop voltage control. The PI module of the outer-loop controller calculates the current reference values from the voltage differences, as in formula (6):

$I_{dref} = \left(\delta_{pd} + \frac{\delta_{id}}{S}\right)\left(U_{dref} - U_d\right), \quad I_{qref} = \left(\delta_{pq} + \frac{\delta_{iq}}{S}\right)\left(U_{qref} - U_q\right) \qquad (6)$

The above process completed the design of the grid-connected controller and the off-grid controller and elaborated their control principles in detail, providing support for the subsequent experiment on smooth parallel and off-grid switching control of the distributed photovoltaic power grid.
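To make the signal chain of Eqs. (4)–(6) concrete, the sketch below implements a Park transformation and one discrete-time step of the outer power loop and inner current loop. Discretizing the 1/S integral term as an accumulated sum, and all gain values, are illustrative assumptions, not the paper's tuning.

# Park transform and the double closed-loop chain of Eqs. (4)-(5).
import math

def park(a, b, c, theta):
    """abc -> d-q transformation (amplitude-invariant form)."""
    d = (2/3) * (a*math.cos(theta) + b*math.cos(theta - 2*math.pi/3)
                 + c*math.cos(theta + 2*math.pi/3))
    q = -(2/3) * (a*math.sin(theta) + b*math.sin(theta - 2*math.pi/3)
                  + c*math.sin(theta + 2*math.pi/3))
    return d, q

class PI:
    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt, self.acc = kp, ki, dt, 0.0
    def step(self, error):
        self.acc += error * self.dt       # integral (1/S) term, discretized
        return self.kp * error + self.ki * self.acc

# Outer power loop (Eq. 4) and inner current loop (Eq. 5); gains illustrative.
outer_d, outer_q = PI(0.5, 10.0, 1e-4), PI(0.5, 10.0, 1e-4)
inner_d, inner_q = PI(2.0, 50.0, 1e-4), PI(2.0, 50.0, 1e-4)

def pq_control_step(p_ref, q_ref, p, q, i_d, i_q, u_d, u_q):
    i_dref = outer_d.step(p_ref - p)                  # Eq. (4)
    i_qref = outer_q.step(q_ref - q)
    u_sd = inner_d.step(i_dref - i_d) + u_d - i_q     # Eq. (5): PI + voltage
    u_sq = inner_q.step(i_qref - i_q) + u_q - i_d     # and cross-coupling terms
    return u_sd, u_sq

The VF controller of Eq. (6) reuses the same inner loop and simply feeds the outer PI blocks with voltage errors (U_dref − U_d, U_qref − U_q) instead of power errors.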
2.4 Smooth Switching Control of Parallel and Off Grid of Distributed Photovoltaic Power Grid

With the grid-connected controller and the off-grid controller designed above as tools, a smooth parallel and off-grid switching control strategy is developed; implementing the strategy achieves smooth switching control of the distributed photovoltaic power grid.

The switching between the two operating states of the distributed photovoltaic power grid has always been the focus and difficulty of research on its key technologies. Aiming at the problems of parallel and off-grid switching, a dual-mode switching strategy is proposed. The strategy adds a state pre-synchronization link; by selecting the switch control, it synchronizes the control signal measurement state, which suppresses the sudden changes of voltage and current during switching and achieves a smooth switching function. The difficulties of parallel and off-grid switching can be considered from two aspects: switching from grid-connected to off-grid operation and switching from off-grid to grid-connected operation. The problems that tend to occur in the two switching modes can be analyzed, along with whether they will degrade the power supply quality or cause the collapse of the distributed photovoltaic power grid and the distribution network [12]. It can be seen that if the output frequency, voltage amplitude and phase of the controller are not completely consistent with the grid, an impulse current will be generated during grid connection. Therefore, before grid connection, the control system needs to take measures to eliminate the influence of these factors and achieve stable grid connection. For the switch from off-grid to grid-connected operation, the state pre-synchronization method can be used: the controller adjusts the voltage amplitude and phase angle of the main control power supply system of the distributed photovoltaic power grid to synchronize with the distribution network. Because the main control power supply system contains energy storage devices, and controlling the energy storage devices is simpler and more effective than controlling the photovoltaic array, the state of the main control power supply system can be pre-synchronized using the energy storage devices and the reference voltage and phase angle of the nodes. According to the above description, a smooth parallel and off-grid switching control strategy for the distributed photovoltaic power grid is developed, as shown in Fig. 4.

Fig. 4. Schematic diagram of smooth switching control strategy for parallel and off grid of distributed photovoltaic power grid

As shown in Fig. 4, the proposed switching control strategy has two characteristics. One is that two sets of inner-loop current control PI are equipped: PQ control and VF control each have their own set of inner-loop PI. Although one more set of inner-loop PI is used, this prevents the actual current output from continuously decreasing during switching, which would reduce the port voltage, and maintains the continuity of the reference current. The second is that a state pre-synchronization link is added between the grid-connected PQ control and the off-grid VF control, so that state pre-synchronization can be carried out before switching and the switching can proceed smoothly. The switching process of the parallel and off-grid switching strategy can be described in three switching modes: planned grid connection, planned off-grid and unplanned off-grid. When the grid is connected and operating stably, the PQ control strategy is adopted: the energy-storage-based distributed photovoltaic grid controller receives PWMa signals and outputs constant power. When the power grid operates stably off-grid, the VF control strategy is adopted.
The distributed photovoltaic power grid controller based on energy storage receives PWMa signals and outputs constant voltage amplitude
and frequency. When the distributed photovoltaic power grid is planned to be connected to the grid, the PQ control link of the controller acts first and carries out state pre-synchronization control to complete the state pre-synchronization. When the distributed photovoltaic grid is planned to go off-grid, the VF control link of the controller is activated first. When the distributed photovoltaic grid goes off-grid unexpectedly (fault off-grid) and a slight voltage drop is detected, the VF control link is preheated; if the voltage remains stable and does not drop continuously to the critical voltage, the temporarily closed switch is disconnected. The above process completes the formulation of the smooth parallel and off-grid switching control strategy; the sketch below summarizes it. Implementing the strategy achieves smooth switching control, providing more effective support for the stable operation of the distributed photovoltaic power grid.
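The three switching modes just described can be summarized as a small state machine. The state names, voltage threshold and helper functions below are assumptions for illustration only, not the paper's controller logic.

# Illustrative state sketch of planned grid connection, planned off-grid
# and unplanned (fault) off-grid switching, as described above.
GRID_CONNECTED, OFF_GRID = "PQ", "VF"

def pre_synchronize():   print("pre-synchronizing amplitude and phase")
def activate_vf_link():  print("VF control link acts first")
def preheat_vf_link():   print("VF control link preheated")
def open_temp_switch():  print("temporarily closed switch disconnected")

def switch(mode, event, voltage_pu=1.0, critical_pu=0.85):
    if mode == OFF_GRID and event == "planned_grid_connection":
        pre_synchronize()                # PQ link acts first, then transfer
        return GRID_CONNECTED
    if mode == GRID_CONNECTED and event == "planned_off_grid":
        activate_vf_link()
        return OFF_GRID
    if mode == GRID_CONNECTED and event == "slight_voltage_drop":
        preheat_vf_link()                # unplanned (fault) off-grid path
        if voltage_pu > critical_pu:     # voltage stabilized: stay on grid
            open_temp_switch()
            return GRID_CONNECTED
        return OFF_GRID                  # dropped to critical: go off-grid
    return mode

print(switch(OFF_GRID, "planned_grid_connection"))  # -> PQ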
3 Experiment and Result Analysis 3.1 Experiment Preparation Stage In order to verify the effectiveness of the proposed parallel and off grid switching control method, a simulation model of parallel and off grid switching control is established based on MATLAB/Simulink simulation platform. The total simulation time is set as 4 s, and the load power in the grid is 50 kW. At 1 s, the distributed photovoltaic power grid
switches from grid-connected operation to off-grid operation, and at 3 s, the distributed photovoltaic power grid begins to reconnect from off-grid operation. At the same time, the proposed method applies the deep reinforcement learning DQN algorithm, and the learning rate parameter ϑ directly affects the control efficiency of parallel and off-grid switching. Therefore, it is necessary to determine the optimal value of the learning rate parameter ϑ before the experiment. The relationship between the learning rate parameter ϑ and the control efficiency of parallel and off-grid switching obtained through the test is shown in Fig. 5.
Fig. 5. Relation between learning rate parameter ϑ and control efficiency (%) of parallel and off-grid switching
As shown in Fig. 5, when the learning rate parameter ϑ is 0.46, the control efficiency of parallel and off-grid switching reaches its maximum value of 82.5%. Therefore, the best value of the learning rate parameter ϑ is determined to be 0.46. The above process completed the experiment preparation, facilitating the smooth implementation of the subsequent experiment.

3.2 Analysis of Experimental Results

In order to clearly display the application performance of the proposed method, the mutation coefficient of voltage and current during smooth parallel and off-grid switching is selected as the evaluation index. The specific experimental results are analyzed as follows. The sudden change coefficient of voltage and current directly reflects the smoothness of parallel and off-grid switching of the distributed photovoltaic power grid, and its value range is 0–1: the closer the mutation coefficient is to 0, the smoother the switching. The voltage-feedback-based method of reference [8] and the grid-connected photovoltaic system design with controllable total harmonic distortion and tri-maximum
power point tracking of reference [9] are selected as comparison methods 1 and 2. The voltage and current mutation coefficients obtained through the experiments are shown in Fig. 6.
Fig. 6. Schematic Diagram of Sudden Change Coefficient of Voltage and Current: (1) voltage mutation coefficient; (2) current mutation coefficient
As shown in the data in Fig. 6, compared with comparison method 1 and comparison method 2, the voltage and current mutation coefficients obtained by the proposed method are smaller, with the minimum values of 0.1 and 0.2 respectively, which fully proves that the proposed method has better control effect of parallel and off grid switching.
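The paper does not give a formula for the mutation (sudden change) coefficient. One plausible illustrative reading, consistent with the stated 0–1 range where smaller means smoother, is the peak relative deviation from the nominal value during the switching window, clipped to [0, 1]; the sketch below uses that assumed definition only for illustration.

# Hypothetical illustration: peak relative deviation as mutation coefficient.
def mutation_coefficient(samples, nominal):
    peak_dev = max(abs(x - nominal) for x in samples)
    return min(peak_dev / abs(nominal), 1.0)

# Voltage samples (p.u.) around a switching instant, nominal 1.0 p.u.
print(round(mutation_coefficient([1.0, 0.95, 0.9, 1.02, 1.0], 1.0), 2))  # 0.1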
4 Conclusion

This study introduces deep reinforcement learning and proposes a new smooth parallel and off-grid switching control method for the distributed photovoltaic power grid. On the basis of constructing the energy management model of the distributed photovoltaic power grid, the transient behavior of parallel and off-grid switching is analyzed, and a parallel and off-grid controller is designed to achieve smooth switching control. The experimental results show that this method effectively reduces the sudden change coefficients of voltage and current during smooth parallel and off-grid switching, provides more effective method support for parallel and off-grid switching control, and also offers a reference for related research on parallel and off-grid switching.
References

1. Wang, Z., Zhang, J., Zhang, Y.: Research on photovoltaic output power prediction method based on machine learning. Comput. Simul. 37(4), 71–75, 163 (2020)
2. Tévar-Bartolomé, G., Gómez-Expósito, A., Arcos-Vargas, A., Rodríguez-Montañés, M.: Network impact of increasing distributed PV hosting: A utility-scale case study. Solar Energy 217, 173–186 (2021)
3. Qu, Y., Zeng Kai, O., Hong, Z., et al.: Island detection and seamless switching control of pv-bes microgrid with phase-locked loop based on improved Gaussian filter. Adv. Power Syst. Hydroelectr. Eng. 38(7), 38–46 (2022)
4. Shiwei, X., Qingquan, J., Pan, L., et al.: Efficiency improvement control strategy for photovoltaic generation through dc dynamic reconfiguration. Trans. China Electrotech. Society 36(9), 1761–1770 (2021)
5. Chang, W., Wu, L., Wu, C., et al.: Coactive design of explainable agent-based task planning and deep reinforcement learning for human-UAVs teamwork. Chinese J. Aeronaut. 33(11), 110–125 (2020)
6. Liu, Q., Liu, Z., Xiong, B., et al.: Deep reinforcement learning-based safe interaction for industrial human-robot collaboration using intrinsic reward function. Adv. Eng. Inform. 49(12), 101360 (2021)
7. Patel, N., Gupta, N., Kumar, A., Babu, B.C., et al.: Pseudo affine projection assisted multitasking approach for power quality improvement in grid-interactive photovoltaic (PV) system. IET Power Electron. 13(13), 2905–2916 (2020)
8. Bakhshi-Jafarabadi, R., Sadeh, J.: New voltage feedback-based islanding detection method for grid-connected photovoltaic systems of microgrid with zero non-detection zone. IET Renew. Power Gener. 14(10), 1710–1719 (2020)
9. Hussein, A.A., Chen, X., Alharbi, M., et al.: Design of a grid-tie photovoltaic system with a controlled total harmonic distortion and tri maximum power point tracking. IEEE Trans. Power Electron. 35(5), 4780–4790 (2020)
10. Ai, C., Gao, W., Chen, L., et al.: Bivariate grid-connection speed control of hydraulic wind turbines. J. Franklin Inst. 358(1), 296–320 (2020)
11. Du, W., Wang, Y., Wang, X., et al.: Magnifying effect of weak grid connection for a PMSG to induce torsional sub-synchronous oscillations under the condition of open-loop modal resonance. IET Renew. Power Gener. 14(4), 580–590 (2020)
12. Babu, P.N., Guerrero, P., Siano, P., et al.: An improved adaptive control strategy in grid-tied PV system with active power filter for power quality enhancement. IEEE Syst. J. 15(2), 2859–2870 (2021)
Classifying Evaluation Method of Innovative Teachers' Teaching Ability Based on Multi-Source Data Fusion

Fanghui Zhu1(B) and Shu Fang2

1 School of Education, Xi'an Siyuan University, Xi'an 710038, China
[email protected] 2 College of Art and Design, Guangdong Industry Polytechnic, Guangzhou 510300, China
Abstract. Massive teaching ability data leads to a large difference between the evaluation index and the actual index. Therefore, a classification evaluation method for innovative teachers’ teaching ability based on multi-source data fusion is proposed. Integrate innovative teachers’ teaching data feature level multi-source data to generate scene data that can accurately describe learners’ learning characteristics. The model of teachers’ practical teaching ability is constructed, the information flow expressing the constraint parameters is obtained, and the convergent solution of teaching ability evaluation is calculated. Use the analytic hierarchy process to calculate the data similarity, and use the quantitative recursive analysis method to describe the form of evaluation data. Integrate the five dimensional characteristic data of learner situation, time situation, location situation, equipment situation, event situation and learning scene, build an evaluation model based on multi-source data fusion, and achieve the classified evaluation of innovative teachers’ teaching ability. The experimental results that the maximum values of the teaching skill index, learning input index and learning harvest index of this method are 9.8, 9.6 and 9.2 respectively, which shows that the classification evaluation results are accurate. Keywords: Multi-Source Data Fusion · Innovative Teachers · Teaching Ability · Classification Evaluation
1 Introduction In the construction of college faculty, the construction of the teaching staff is an indispensable part. In these aspects, the most fundamental indicator for creative teaching ability is the storage capacity of creating words and the ability to apply vocabulary. This data can also serve as an important indicator to evaluate the quality of innovative teaching, which means that the language learning ability is the application ability of vocabulary. However, in the traditional evaluation method for innovative teaching ability, it adopts a subjective scoring method. The advantage of manual evaluation lies in its flexibility, but it also has a great deal of subjectivity. If there are some complex © ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2024 Published by Springer Nature Switzerland AG 2024. All Rights Reserved B. Wang et al. (Eds.): ICMTEL 2023, LNICST 532, pp. 158–173, 2024. https://doi.org/10.1007/978-3-031-50571-3_12
factors added, it will lead to significant evaluation deviations. At the same time, there is no clear evaluation standard in the evaluation process, which results in differences in evaluation criteria for the same person. This greatly affects the accuracy of evaluation. When evaluating innovative teaching ability, it should not be judged from a single perspective, but should be comprehensively judged from multiple dimensions and levels. Moreover, a certain evaluation standard should be established to facilitate unified measurement under different variable conditions. In the current research methods, a manually constructed evaluation method has been proposed. This method uses subject syllabus, textbooks, teaching aids, and other materials as carriers, with subject knowledge as the foundation. It adopts methods such as "ontology analysis", "data collection", "entity extraction", "relationship extraction", and "homogeneous data fusion" to conduct hierarchical evaluation of the teaching ability of creative teachers [2]. Although this method can ensure the professionalism of the knowledge map in the education field, it is time-consuming and laborious. Different experts may have different understandings of the same knowledge point, so for basic mathematics, precision and consistency are difficult to guarantee. Later, a semi-structured data classification evaluation method was proposed, which directly extracts entities from semi-structured texts on the network to evaluate the classification results of innovative teachers' teaching abilities. Although such methods can save a lot of construction cost and quickly build an assessment map, they are not an appropriate choice for mathematical disciplines with strict logic. Aiming at the above problems, a classification evaluation method of innovative teachers' teaching ability based on multi-source data fusion is designed. In order to ensure the rationality and effectiveness of the method design, a comparative experiment is conducted by simulating the innovative teaching environment. The effectiveness and accuracy of the proposed method are verified through the effective proof of experimental data.
2 Innovative Teachers’ Teaching Data Feature Level Multi-source Data Fusion This project plans to use multi-source information fusion technology to mine the information attributes of different types of information, and achieve the fusion of different types of information to enhance the reusability of information resources. The process of extracting features essentially involves extracting learning behaviors from learners from different sources and extracting characteristic features from them. Looking back at the past descriptions of information attributes in learning scenarios, different scholars have categorized the information attributes of scenarios into various types from different perspectives [3]. The first is learner situation, including name, age, class, learning period, school and other information; The second is time situation, including classroom learning time, recess time, entertainment and leisure time, physical exercise time, evening selfstudy time, etc.; The third is location situation, including inside and outside school; The fourth is the event situation, including various courses; The fifth is the device scenario, including hardware and software. The learning context information characteristics of the above five dimensions well represent the data characteristics of various heterogeneous data sources. Through these
five characteristics, it is possible to accurately describe the real learning and living scenarios of learners in each heterogeneous data source, and thus bridge the gap between different heterogeneous data sources [4]. These five shared data attributes form five data dimensions, which together constitute the learner's authentic learning context: learner context, temporal context, spatial context, device context, and event context. On these five data dimensions, feature-level data fusion methods are used to integrate teaching data. The feature-level data fusion mainly goes through several stages: hierarchical extraction of semantic features of each data dimension, feature-level fusion of hierarchical semantics, and feature-level fusion of cross-dimensional, cross-layer associated semantics. The specific content is shown in Fig. 1.
Fig. 1. Innovative teacher teaching data feature level multi-source data fusion process
It can be seen from Fig. 1 that the hierarchical extraction of the semantic features of each data dimension mainly refers to extracting the semantic features layer by layer, determining the semantic attributes, and determining the multi-level logical relationships among same-level, superior, and subordinate semantics within each data dimension. For example, the semantic attributes of the temporal dimension can be divided into two categories: weekdays and holidays. Weekdays can be further divided into semantic information of different granularity, such as classroom learning time and self-study time [5]. The feature-level data fusion of hierarchical semantics mainly classifies the education data of the various heterogeneous data sources according to these hierarchical semantics, adopts the corresponding fine-grained fusion strategy for feature-level data fusion, generates scene data that can accurately describe the learning characteristics of learners, and eliminates inconsistency and redundancy between data structures that carry the same semantics at the same granularity [6]. The feature-level data fusion of cross-hierarchical association semantics serves to describe learners' learning characteristics more objectively and accurately: according to the similar semantics of different dimensions and levels, the data carrying associated semantics are further fused at the feature level to generate scene data with deep semantic knowledge.
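As an illustration of the hierarchical time semantics described above, the following minimal Python sketch labels a timestamp with a coarse and a fine semantic category; the category boundaries are invented assumptions, not values from the paper.

```python
from datetime import datetime

def time_semantics(ts: datetime) -> tuple[str, str]:
    """Return (coarse, fine) semantic labels for one timestamp."""
    coarse = "weekday" if ts.weekday() < 5 else "holiday"
    if 8 <= ts.hour < 12 or 14 <= ts.hour < 17:
        fine = "classroom learning time"
    elif 19 <= ts.hour < 22:
        fine = "evening self-study time"
    else:
        fine = "recess or leisure time"
    return coarse, fine

print(time_semantics(datetime(2023, 4, 28, 9, 30)))  # ('weekday', 'classroom learning time')
print(time_semantics(datetime(2023, 4, 29, 20, 0)))  # ('holiday', 'evening self-study time')
```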
3 Classifying Evaluation of Innovative Teachers' Teaching Ability
3.1 Analysis Model of Innovative Teachers' Teaching Ability
According to the structure of in-service teachers' practical teaching ability, the relationships among the ability to adjust teaching, practical teaching implementation ability, practical teaching evaluation ability, practical teaching research ability, and the ability to cooperate with enterprise teachers are analyzed and sorted out to form the relational connotation of teachers' practical teaching ability [7, 8]. Based on the analysis of whether these categories of teachers' practical teaching ability can cover practical teaching, a model of teachers' practical teaching ability is constructed, as shown in Fig. 2.
Fig. 2. Model of teachers' practical teaching ability
In order to accurately evaluate teacher quality, it is necessary to establish a sample-based model for the constrained parameters of teacher quality. Based on this, a new evaluation indicator system called the Teacher Quality Evaluation Indicator System is proposed. On this basis, this project aims to characterize the parameterized index of teaching ability evaluation by establishing a parameterized distribution model based on the high-dimensional specificity of teaching ability. An information flow model representing the constraint parameters of teaching ability is then constructed as the difference equation in formula (1):
$a_n = f(t_0 + nt) + \omega$ (1)
In formula (1), $f(\cdot)$ is the multi-valued function; $\omega$ is the measurement function; $n$ is the number of teaching ability evaluation indicators; $t$ is the evaluation cycle. This project adopts a correlation fusion method to calculate the solution vector for teacher ability evaluation within the high-dimensional feature distribution space and to obtain the feature training subset for teacher ability evaluation. Based on the statistical
data obtained in the previous stage, a data flow model for teaching ability evaluation is constructed, focusing on the statistical feature distribution sequence of a set of multivariate variables. That is, the teaching ability evaluation has a convergent solution, and the constraint condition is formula (2):
$\varphi_a(\omega) = -\frac{1}{2}\omega^2\sigma^2$ (2)
In formula (2), $\sigma$ represents the evaluation solution vector. Based on this, a data information flow model for teaching ability assessment is constructed, and a set of scalar sampling sequences is generated to build a big data distribution model, providing an accurate data input basis for teaching ability assessment.
3.2 Similarity Calculation of Teaching Ability Data Based on Analytic Hierarchy Process
Because traditional methods establish no connection between the unstructured data of textbooks and semi-structured data such as Baidu Encyclopedia and Interactive Encyclopedia, the data sources are scattered, not authoritative, and not systematic [10]. Based on this, considering the existing tags, textbooks, and other unstructured texts, this paper proposes a method to calculate the similarity of teaching ability data based on AHP.
Step 1: Preprocessing layer. The encyclopedia data obtained by the crawler generally contains a large amount of redundant information, which cannot be directly used for entity alignment, and the subsequent work depends on the description information of each entity. Therefore, this layer mainly preprocesses the data. For encyclopedic data, it specifically includes: using the redirection mechanism and synonym tags in the encyclopedia to remove irrelevant entities, expanding synonymous entities, and obtaining a complete set of encyclopedic entities [11]; then preprocessing the description information in the entity set by word segmentation; and finally aligning entities among encyclopedias according to the processed entities (for text data, preprocessing includes regular operations such as word segmentation).
Step 2: String matching layer. It is mainly used to match entities with exactly the same name and description information in the encyclopedia space. For example, the "unary linear equation" entries in Baidu Encyclopedia and in Interactive Encyclopedia match exactly [12]; in this case, only one of them is retained.
Step 3: Semantic matching layer. Entity alignment can generally be converted into judging the semantic similarity of entities. In most cases, a Chinese semantic knowledge base is used to calculate the semantic similarity between entity pairs. Based on the semantic similarity calculation, the semantic similarity of two entities in HowNet is considered, as shown in formula (3):
$Sim_i(A_i, B_i) = \frac{\lambda}{l + \lambda}$ (3)
In formula (3), $A_i$, $B_i$ represent the names of an entity in Interactive Encyclopedia and Baidu Encyclopedia respectively; $i$ represents the number of adjustments; $\lambda$ represents an adjustable parameter; $l$ represents the distance between two sememes in the sememe hierarchy [13].
Step 4: Morphological matching layer. The morphological matching layer mainly calculates the similarity between attribute tags and abstract text on the basis of semantic correlation. An attribute tag refers to the description of a piece of entity information on Baidu Encyclopedia and Interactive Encyclopedia, and the abstract text refers to the content of a concept introduced in relatively refined words on the two encyclopedias. The similarity of attribute tags is calculated by the edit distance algorithm. Considering the influence of the number of attribute matches on the similarity, the formula is normalized, as in formula (4):
$g(h_i^A, h_i^B) = \frac{\sum_{i \in n} se(h_i^A, h_i^B)}{l}$ (4)
In formula (4), $se(h_i^A, h_i^B)$ represents the edit distance between two attribute values; $h_i^A$, $h_i^B$ represent the entity attribute sets in Baidu Encyclopedia and Interactive Encyclopedia respectively. Text similarity is calculated by the cosine similarity algorithm based on TF-IDF [14, 15]. First, convert the keywords extracted from Interactive Encyclopedia and Baidu Encyclopedia into vector form, and then calculate the cosine similarity between the two vectors, as shown in formula (5):
$Sim_i(A_i, B_i) = \frac{\sum_{i=1}^{l}(a_i \times b_i)}{\sqrt{\sum_{i=1}^{l}(a_i)^2} \times \sqrt{\sum_{i=1}^{l}(b_i)^2}}$ (5)
In formula (5), $A_i$, $B_i$ represent the vector forms converted from Interactive Encyclopedia and Baidu Encyclopedia data respectively; $a_i$, $b_i$ represent the vector data of Interactive Encyclopedia and Baidu Encyclopedia respectively.
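As a concrete illustration of the morphological matching layer, the following Python sketch computes an edit distance for attribute tags (the quantity $se$ in formula (4)) and a TF-IDF cosine similarity for abstract texts (formula (5)) with scikit-learn; the entity strings are invented examples.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance, the se(.,.) term of formula (4)."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ca != cb))
    return dp[-1]

# Hypothetical attribute values of one candidate entity pair.
print(edit_distance("unary linear equation", "linear equation in one unknown"))

# Hypothetical abstract texts for the same pair; formula (5) as cosine similarity.
abstracts = ["a linear equation with a single unknown of degree one",
             "an equation with one unknown whose highest degree is one"]
v = TfidfVectorizer().fit_transform(abstracts)
print(cosine_similarity(v[0], v[1])[0, 0])
```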
3.3 Classification of Evaluation Data Based on Quantitative Analysis
The method of quantitative recursive analysis was applied to analyze the big data information model of teacher quality evaluation, with the control objective function for predicting and estimating teacher quality set as formula (6):
$Q = \max \sum_{j=1}^{5} u_j a_n$ (6)
In formula (6), $u_j \in \{u_1, u_2, u_3, u_4, u_5\}$ indexes the five data dimensions, namely learner situation, time situation, location situation, equipment situation, and event situation.
Using the grey model, a quantitative recursive evaluation of the teacher's educational ability level is conducted, assuming historical data of the distribution of the teacher's educational ability. When the initial value of the disturbance feature is fixed, the probability density functional for predicting and estimating the teacher's educational ability can be obtained as formula (7):
$\rho(t) = k a_n(t)$ (7)
In formula (7), $k$ represents the number of iterations. Based on this, the evaluation results of teacher quality are obtained through the prediction and estimation of teacher quality, as well as its quantification and regression analysis. In order to build an educational data model with strong flexibility and high reusability [16], to reduce the amount of data exchanged between different systems, and to meet the needs of different data sources for different data views, it is also necessary to formalize the data. This link is based on the calculation results of the model data attribute and semantic uniqueness operation link. It formally describes the learner situation data in the model, and further combines these independent data dimensions to build a multi-dimensional education data model. Among the five independent data dimensions that make up the data model, the learner dimension is static information, so its formal description is not marked in the model. Thus the learning situation data of the model can be represented by five tuples, denoted in turn as $P_1, P_2, P_3, P_4, P_5$; each tuple is independent yet interconnected with the others, forming multi-dimensional education data models in different combinations, and each tuple is composed of an ordered pair $\langle P, C\rangle$, where $C$ represents the classification semantics of each attribute. The formal description of the shared education data model is formula (8):
$\langle P, C\rangle = \begin{bmatrix} \langle P_1^1, C_1^1\rangle & \langle P_2^1, C_2^1\rangle & \langle P_3^1, C_3^1\rangle & \langle P_4^1, C_4^1\rangle & \langle P_5^1, C_5^1\rangle \\ \langle P_1^2, C_1^2\rangle & \langle P_2^2, C_2^2\rangle & \langle P_3^2, C_3^2\rangle & \langle P_4^2, C_4^2\rangle & \langle P_5^2, C_5^2\rangle \\ \langle P_1^3, C_1^3\rangle & \langle P_2^3, C_2^3\rangle & \langle P_3^3, C_3^3\rangle & \langle P_4^3, C_4^3\rangle & \langle P_5^3, C_5^3\rangle \\ \langle P_1^4, C_1^4\rangle & \langle P_2^4, C_2^4\rangle & \langle P_3^4, C_3^4\rangle & \langle P_4^4, C_4^4\rangle & \langle P_5^4, C_5^4\rangle \\ \langle P_1^5, C_1^5\rangle & \langle P_2^5, C_2^5\rangle & \langle P_3^5, C_3^5\rangle & \langle P_4^5, C_4^5\rangle & \langle P_5^5, C_5^5\rangle \end{bmatrix}$ (8)
Taking the time context dimension as an example, the $\langle P, C\rangle$ pairs in formula (8) can form a set such as $\langle 18{:}30$, dinner time$\rangle$, $\langle 20{:}30$, evening self-study time$\rangle$, $\langle 22{:}30$, night sleep time$\rangle$. The model can form not only single-dimension data model views, such as the location situation and event situation views, but also multi-dimensional data views, as shown in Fig. 3. Through Fig. 3, it is convenient for each data system to dynamically obtain personalized data model views according to its own data analysis needs. The shared education data model constructed in this way can not only reduce the amount of data exchanged and transmitted between heterogeneous data systems, but also reduce the complexity of data analysis, effectively reduce the loss of data information, and help to deeply mine the real learning needs of learners.
Fig. 3. Data Model View of Different Dimension Combinations
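To make the five-tuple structure of formula (8) concrete, the following Python sketch represents one learning-context record as five ⟨P, C⟩ pairs and projects a single-dimension view from it; all field values are invented examples.

```python
from dataclasses import dataclass

@dataclass
class PC:
    value: str       # attribute value P
    semantics: str   # classification semantics C

record = {
    "learner":   PC("student_2023001", "sophomore, class 2"),
    "time":      PC("20:30", "evening self-study time"),
    "location":  PC("library room 305", "inside school"),
    "equipment": PC("tablet", "hardware"),
    "event":     PC("finished 12 grammar exercises in 40 min", "English course"),
}

# A single-dimension view (here: time), or any multi-dimensional combination,
# can be projected from the same record, as in Fig. 3.
time_view = {k: v for k, v in record.items() if k == "time"}
print(time_view)
```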
3.4 Classification Evaluation Based on Multi-source Data Fusion
Based on the formal description results of the evaluation data, the operation mechanism for standardizing the five-dimensional feature fusion data is established. In order to build a highly shared educational data model, it is also necessary to analyze the data standard format of the five-dimensional feature fusion data and generate a general, standard data exchange format. According to the data structure of the five-dimensional feature fusion data obtained from the previous analysis (learner situation, time situation, location situation, equipment situation, and event situation), combined with the standardized description of the specification statement structure, the general data specification format of the five-dimensional feature fusion data is obtained. This universal data format maps out the learner's status as: {learner} with the learner's personal semantic tag | {a certain time point} with a time classification semantic tag | {a certain place} with a place classification semantic tag | {what device is used} with a device classification semantic tag | {what has been done, what results are achieved, and how long it takes} with a topic event classification semantic tag. Based on this, an evaluation model is constructed, as shown in Fig. 4. The evaluation model constructed by the multi-source data fusion method is shown in formula (9):
$J_{a0} = (T_{a0} - T_0)\eta_0$ (9)
In formula (9), $J_{a0}$ represents the effective value; $T_{a0}$, $T_0$ represent the values of the evaluation criteria without and with participation; $\eta_0$ is the variable weighting coefficient. After the evaluation, certification needs to be conducted, and the evaluation data obtained through certification is regarded as valid values, following the process described in formula (10):
$J = \frac{\eta_A f_A + \eta_B f_B}{\eta_A + \eta_B}$ (10)
In formula (10), $\eta_A$, $\eta_B$ represent the high-order and low-order parameters respectively; $f_A$, $f_B$ represent high-quality data of the continuous variables respectively.
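A small numeric Python sketch of formulas (9) and (10) follows; all values are invented for illustration.

```python
# Formula (9): effective value from criterion values and weighting coefficient.
T_a0, T_0, eta_0 = 8.6, 7.9, 0.8
J_a0 = (T_a0 - T_0) * eta_0
print(f"J_a0 = {J_a0:.2f}")

# Formula (10): certified value as a weighted mean of two data streams.
eta_A, eta_B = 0.7, 0.3          # high-order and low-order parameters
f_A, f_B = 9.1, 6.4              # high-quality continuous-variable data
J = (eta_A * f_A + eta_B * f_B) / (eta_A + eta_B)
print(f"J = {J:.2f}")
```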
Fig. 4. Evaluation model
Through the introduced feedback calculation, the data features in the educational process can be comprehensively represented, and by inputting them into the feedback model, a comprehensive evaluation of students can be achieved.
4 Simulation Experiment
Based on this, a hierarchical evaluation model for the teaching ability of innovative teachers is constructed through the fusion of multiple sources of information to verify the correctness of the model. Different innovative classrooms are selected for simulated experiments, and conventional evaluation methods are used for comparative evaluation to ensure the effectiveness of the experiments.
4.1 Parameter Setting
In order to ensure the effectiveness of the evaluated method, the parameters are set: the reference data used for network feedback is set outside the range [1000, 1250], and the innovative teaching content is set as follows: English grammar, vocabulary explanation, sentence pattern analysis, oral practice, writing practice, and English dialogue. The corresponding teaching durations are 15 min, 30 min, 35 min, 40 min, 45 min, and 50 min respectively.
4.2 Experimental Indicators
Statistical computation in SPSS is performed to ensure the effectiveness of the experiment. The calculated values
obtained from SPSS are greater than or equal to 0.95, so the SPSS values are effectively fitted to obtain formula (11):
$Z(k) = \frac{z[u(g-1)]}{1 + z^2(g-1)} \geq 0.95$ (11)
In formula (11), $z$ represents the reference data; $g$ represents a high-order parameter; $u$ represents the matching data. Because different modes and methods are compared, a significant error may occur. To ensure the accuracy of the experimental results, the measured results were cross-checked against the actual results, and a measuring proportion method was adopted to eliminate random bias. The proportion factor changes with the number of experiments, but its value is chosen in the range 2.45 to 6.55. A method that can exclude non-zero errors was used: the occurrence of non-zero errors is prevented by calculating zero-point values.
4.3 Analysis of Experimental Data
English grammar, vocabulary explanation, and English dialogue are chosen as standard data, which can be directly downloaded from the school homepage. The experimental data set is shown in Fig. 5.
Fig. 5. Experimental data set
It can be seen from Fig. 5 that the data type is characterized by permutation and dispersion. After factor analysis of the selected data, it is found that the variance contribution rate of each factor after rotation is balanced, and the load of each index on each factor is greater than 0.5, which indicates that the correlation between each index and the corresponding factor is relatively close, so the three main factors are extracted. For the convenience of discussion, the extracted main factors are named F1, F2, and F3. The indicators with high load on F1 factor are teaching attitude, classroom mental state, teaching ideas, language explanation, after-school guidance, and interaction between students and teachers, which represent teachers’ personal teaching skills. Therefore, F1
is named as teaching skills, corresponding to the basic teaching ability level of ideological and political teachers. The indicators with higher load on factor F2 are classroom participation, learning interest, after-school input and learning enthusiasm, which to some extent reflect the students' learning input, and also reflect the ideological and political teachers' classroom teaching guidance ability. Therefore, factor F2 is named as learning input, which corresponds to the higher teaching ability level of ideological and political teachers. The indicators with high load on the F3 factor are overall learning gains, classroom evaluation and far-reaching influence, which reflect students' classroom gains. Therefore, the F3 factor is named learning gains, reflecting teachers' high-level teaching ability.
4.4 Experimental Results and Analysis
The evaluation method based on manual construction, the classification evaluation method based on semi-structured data and the proposed classification evaluation method are respectively used to compare and analyze the teaching skill index F1, the learning input index F2 and the learning harvest index F3. The values of the indices range from 1 to 10, and the higher the value, the more accurate the evaluation results.
4.4.1 Teaching Skill Index
The comparative analysis results of the three teaching skill indexes are shown in Fig. 6. It can be seen from Fig. 6 that the teaching skill index of the proposed classification evaluation method can reach 9.8 at most, while the teaching skill indexes of the evaluation method based on manual construction and the classification evaluation method based on semi-structured data can reach only 6.5 and 7 at most, which is quite different from the ideal index. This shows that the assessment result of the teaching skill index using the proposed classification evaluation method is accurate. The main reason is that the proposed method integrates the feature-level multi-source data of innovative teachers' teaching data and generates scene data that can accurately describe learners' learning characteristics, which can better optimize the evaluation effect of the teaching skill index.
4.4.2 Learning Input Index
The comparative analysis results of the learning input indexes of the three methods are shown in Fig. 7. It can be seen from Fig. 7 that the learning input indexes of the evaluation method based on manual construction and the classification evaluation method based on semi-structured data can reach 5 and 7 respectively, while the learning input index of the proposed classification evaluation method can reach 9.6. This shows that the evaluation result of the learning input index using the proposed classification evaluation method is accurate. The main reason is that the proposed method uses the analytic hierarchy process to calculate the similarity of teaching ability data and describes the form of evaluation data through quantitative recursive analysis, which improves the evaluation performance.
[Radar charts over the axes teaching attitude, state of precision in class, teaching ideas, language interpretation, after-class guidance, and teacher-student interaction: (a) evaluation method based on manual construction; (b) classification and evaluation method based on semi-structured data; (c) classification and evaluation method]
Fig. 6. Comparison results of teaching skill indexes of three methods
[Radar charts over the axes class participation, input after class, interest in learning, and enthusiasm for learning: (a) evaluation method based on manual construction; (b) classification and evaluation method based on semi-structured data; (c) classification and evaluation method]
Fig. 7. Comparison Results of Learning Input Index of Three Methods
4.4.3 Learning Harvest Index
The learning harvest indexes of the three methods are compared, and the comparative analysis results are shown in Fig. 8.
[Bar charts on a 2-10 scale: (a) evaluation method based on manual construction; (b) classification and evaluation method based on semi-structured data; (c) classification and evaluation method]
Fig. 8. Comparison Results of Learning Harvest Index of Three Methods
It can be seen from Fig. 8 that the learning harvest indexes of the evaluation method based on manual construction and the classification evaluation method based on semi-structured data can reach 5 and 6.8 respectively, while the learning harvest index of the proposed classification evaluation method can reach 9.2. This shows that the evaluation result of the learning harvest index using the proposed classification evaluation method is accurate. The proposed method integrates the five-dimensional characteristic data of learner situation, time situation, location situation, equipment situation, and event situation with the learning situation, and constructs an evaluation model, which improves the accuracy of the learning harvest index evaluation.
5 Conclusion
In view of the poor evaluation effect of existing approaches, this paper studies a classification evaluation method for innovative teachers' teaching ability. Based on the feature-level multi-source data fusion of innovative teachers' teaching data, the classification evaluation of innovative teachers' teaching ability is implemented from four aspects: the construction of the innovative teachers' teaching ability analysis model, the calculation of teaching ability data similarity based on hierarchical analysis, the classification of evaluation data based on quantitative analysis, and the classification evaluation based on multi-source data fusion. The experimental results show that the proposed method has a good classification effect.
Acknowledgement. 1. This paper is a research result of the 2022 Shaanxi Provincial Teacher Education Reform and Teacher Development Research Project "Research on the Path to Improve the Teaching Innovation Ability of Teachers in Private Colleges under the Background of Education Digital Transformation" (subject number: SJS2022YB067); 2. a research result of the first batch of collaborative education projects of the Ministry of Education in 2022, "Training of young teachers' teaching innovation ability based on the 'Rain Classroom' intelligent teaching platform"; 3. a construction achievement of "History of Chinese and Foreign Educational Thoughts", a high-quality online open course of Xi'an Siyuan University in 2021; 4. a construction result of the school-level teaching team of Xi'an Siyuan University "Teaching team of core pedagogy curriculum group"; 5. a construction achievement of the scientific research innovation team of Xi'an Siyuan University "Shaanxi regional basic education scientific research innovation team".
References
1. Fubing, F., Ling, S., Lei, S., et al.: Exploration of three-stage cultivation of "double-qualified" teachers' practical teaching ability under the background of the "double-high plan". Vocat. Tech. Educ. 43(02), 27–30 (2022)
2. Nguyen, H., Santagata, R.: Impact of computer modeling on learning and teaching systems thinking. J. Res. Sci. Teach. 58(5), 661–688 (2021)
3. Zhang, Y., Xu, W.: Research on teaching ability and evaluation of teachers in local agricultural colleges and universities. Vocat. Tech. Educ. 07, 18–22 (2022)
4. Shiyu, Y., Liyan, L., Shuo, L.: The construction of assessment indicator system of university teachers' teaching competence. High. Educ. Explor. 12, 66–73 (2021)
5. Qisheng, Z., Yani, Z., Bichun, L.: The demand analysis for the teaching ability of teachers in online courses based on the "student satisfaction model". China Poultry 43(03), 118–120 (2021)
6. Hanxiao, Q., Bin, H.: Entity relation retrieval and simulation of low redundancy knowledge atlas. Comput. Simul. 39(6), 469–472 (2022)
7. Patjas, M., Vertanen-Greis, H., Pietarinen, P., et al.: Voice symptoms in teachers during distance teaching: a survey during the COVID-19 pandemic in Finland. Europ. Arch. Oto-Rhino-Laryngol. 278(11), 4383–4390 (2021)
8. Zhang, S., Deng, S.: Construction of the model of core teaching competence of ideological and political course teachers in institutions of higher education. J. Sichuan Normal Univ. (Social Sciences Edition) 48(06), 11–20 (2021)
9. Wu, F., Huang, S.: Research on shared education data model based on multi-source data fusion. E-educ. Res. 41(05), 59–65+103 (2020)
10. Behling, F., Hennersdorf, F., Schittenhelm, J.: Epigenome-wide data collection in a case of gliofibroma. Folia Neuropathol. 59(2), 212–218 (2021)
11. Faraj, G., Micsik, A.: Persons, GLAM institutes and collections: an analysis of entity linking based on the COURAGE registry. Int. J. Metadata Semantics Ontol. 15(1), 39–49 (2021)
12. Rašmane, A., Goldberga, A.: The potential of IFLA LRM and RDA key entities for identification of entities in textual documents of cultural heritage: the runa collection. Catalog. Classification Quart. 58(5/8), 705–727 (2020)
13. Liu, S., Lu, M., Liu, G.: A novel distance metric: generalized relative entropy. Entropy 19(6), 269 (2017)
14. Zhou, Z., Qin, J., Xiang, X., Tan, Y., Liu, Q., Xiong, N.N.: News text topic clustering optimized method based on TF-IDF algorithm on Spark. Comput. Mater. Continua 62(1), 217–231 (2020)
15. Bounabi, M., Elmoutaouakil, K., Satori, K.: A new neutrosophic TF-IDF term weighting for text mining tasks: text classification use case. Int. J. Web Inform. Syst. 17(3), 229–249 (2021)
16. Ganesh Karthikeyan, V., Thangaraj, P., Karthik, S.: Towards developing hybrid educational data mining model (HEDM) for efficient and accurate student performance evaluation. Soft Comput. 24(24), 18477–18487 (2020). https://doi.org/10.1007/s00500-020-05075-4
Web Based Adaptive Integration Method of College Students’ Comprehensive Quality Evaluation Data Wenjing Liu1(B) and Haidi Yuan2 1 Student Affairs Office, Fuyang Normal University, Fuyang 236000, China
[email protected] 2 Anhui Sanlian University, Hefei 230601, China
Abstract. In order to quantitatively analyze the comprehensive quality of college students, an adaptive integration method of college students’ comprehensive quality evaluation data based on Web is proposed. Build a Web network model to simulate the evaluation process of college students’ comprehensive quality and obtain quantitative evaluation data. After the evaluation data is cleaned and transformed, the random forest algorithm is used to determine the evaluation data type, and the adaptive integration result of the comprehensive quality evaluation data of college students is obtained. The experimental results show that the integrity coefficient of the integration results of the proposed method is about 0.047 higher than that of the comparison method, while the redundancy coefficient is reduced and the reliability coefficient is significantly improved. Applying it to the retrieval of evaluation data can effectively speed up the data retrieval. Keywords: Web Network · Comprehensive Quality of College Students · Quality Evaluation Data · Adaptive Integration
1 Introduction
The comprehensive quality evaluation of college students refers to an evaluation task organized by institutions of higher education all over the country to evaluate the overall comprehensive quality and ability of all students at the end of each semester or academic year [1]. Due to the large number of college students and the large number of comprehensive quality indicators to be evaluated, the evaluation results are characterized by a large amount of data and complex types. In order to achieve rapid retrieval and efficient management of the comprehensive quality evaluation data of college students, an adaptive integration method for these data is proposed. Data integration collects, sorts, cleans and converts data from different data sources into a new data source, providing data consumers with a unified data view. At this stage, the more mature data integration methods include: data integration methods based on SOA software architecture, data integration methods based on
sample weight update, and data integration methods based on wavelet transform. However, when the above traditional data integration methods are applied to the integration of college students' comprehensive quality evaluation data, the integration quality is poor, mainly reflected in incomplete integration of the evaluation data and high redundancy of the integrated data, which ultimately affects the retrieval efficiency of the comprehensive quality evaluation data. In order to solve the problems of the above integration methods, the Web network is introduced; the hypertext transfer protocol in the Web network can complete data integration with higher quality. Therefore, this paper proposes a Web-based adaptive integration method for college students' comprehensive quality evaluation data, applying the Web network and related technologies to the adaptive integration of the evaluation data in order to improve the data integration effect and application performance.
2 Design of Adaptive Integration Method for Comprehensive Quality Evaluation Data
Users have different requirements for the timeliness, consistency, query efficiency, and other aspects of acquiring different data, so different data integration methods need to be used to meet users' needs. Figure 1 shows the principle block diagram of data integration for the comprehensive quality evaluation of college students.
Fig. 1. Principle block diagram of comprehensive quality evaluation data integration
Data exchange and service invocation are adopted to realize data integration, and different application requirements adopt different methods. 2.1 Building a Web Network Model The purpose of the Web network model is to provide equipment support for the collection and transmission of college students’ comprehensive quality evaluation data. The
Web network is deployed in the storage and management environment of college students' comprehensive quality evaluation data, and the corresponding model is generated according to the operation mode and composition structure of the network. The adopted Web network is based on the three-tier browser/server structure, which provides a unified interface for most end users. The workload on the browser side is small; most of the system's processing is carried out on the server side, making full use of the server's computing resources [2]. The composition structure of the Web network is shown in Fig. 2.
Fig. 2. Web network composition structure
The generating node of the comprehensive quality evaluation data of college students is taken as the Web network node, and each unit will have energy consumption. The energy consumption generated by a Web network node in the working state can be expressed as formula (1):
$E_i = E_{pro}(i) + E_{tran}(i) + E_{perc}(i)$ (1)
where $E_{pro}(i)$, $E_{tran}(i)$ and $E_{perc}(i)$ respectively represent the energy consumption of the processor unit, the data transmission unit and the data sensing unit in the Web network. In the Web network environment, the TCP/IP protocol is used as the transmission protocol for data transmission. The working principle of this protocol is shown in Fig. 3. It can be seen from Fig. 3 that under the constraint of the TCP/IP protocol, the collection of college students' comprehensive quality evaluation data needs to go through a three-way handshake. The first handshake: the client sets the flag bit SYN to 1, randomly generates a value seq = J, and sends the data packet to the server; the client enters the SYN_SENT state, waiting for server confirmation. The second handshake: after the server receives the data packet, the flag bit SYN = 1 tells it that the client requests to establish a connection; the server sets the flag bits SYN and ACK to 1, sets ack = J + 1, randomly generates a value seq = K, and sends the data packet to the client to confirm the connection request; the server enters the SYN_RCVD state [3]. The third handshake: after receiving the confirmation, the client checks whether ack is J + 1 and ACK is 1; if correct, it sets the flag bit ACK to 1, sets ack = K + 1, and sends the data packet to the server. The server checks whether ack is K + 1 and ACK is 1; if correct, the connection is established successfully. The client and server enter the ESTABLISHED state, completing the three-way handshake, after which the client and server can start to transmit data.
Fig. 3. Working principle of web network transmission protocol
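The handshake itself is performed by the operating system when a TCP connection is opened, so a data producer only needs to connect and send. A minimal Python sketch, with an invented loopback port and payload:

```python
import socket
import threading
import time

ready = threading.Event()

def server():
    """Toy receiver standing in for the evaluation-data storage device."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind(("127.0.0.1", 50007))   # invented loopback endpoint
        srv.listen(1)
        ready.set()                      # now safe for the client to connect
        conn, _ = srv.accept()           # handshake finished: ESTABLISHED
        with conn:
            print("server received:", conn.recv(1024).decode())

threading.Thread(target=server, daemon=True).start()
ready.wait()

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect(("127.0.0.1", 50007))    # kernel performs SYN, SYN+ACK, ACK
    cli.sendall(b"comprehensive quality evaluation record")

time.sleep(0.2)                          # let the server thread print before exit
```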
The working mechanism of each node of the Web network is substituted into the network structure to complete the construction of the Web network model.
2.2 Simulate the Comprehensive Quality Evaluation Process of College Students
As the comprehensive quality of college students is a qualitative indicator, quantitative test indicators need to be set in order to obtain the comprehensive quality evaluation data of college students, as shown in Table 1. Among the comprehensive quality indicators of college students listed there, except for professional course scores and physical education scores, all indicators are evaluated by scoring. The evaluation result for professional course scores is given by formula (2):
$\tau_{achievement} = \frac{\sum_{i=1}^{n}(F_i \times g)}{\sum_{i=1}^{n} F_i}$ (2)
where $F$ and $g$ represent course credits and grade points respectively. In the same way, we can get the calculation result for sports performance, and use the subjective scoring method to get the evaluation results of the other indicators, which are marked as $\tau_i$; the corresponding weight values of the evaluation indicators are calculated and marked as $\omega_i$. Formula (3) is used to check the consistency of the initially calculated weight values:
$CR = \frac{CI}{RI}, \quad CI = \frac{\lambda_{max} - n_{Evaluation}}{n_{Evaluation} - 1}$ (3)
In the formula, $CI$ is the relative consistency indicator, $RI$ is the integrity indicator, $n_{Evaluation}$ is the set number of quality evaluation indicators, and $\lambda_{max}$ is the maximum characteristic value of the evaluation indicators.
Table 1. Setting Table of Comprehensive Quality Evaluation Indicators for College Students

Level I indicators | Level II indicators | Level III indicators
Ideological and moral quality | Political quality | Political attitude; political theory competition; the course of joining the party and its cultivation
Ideological and moral quality | Ideological quality | Diligent and dedicated; style of study; voluntary service activities
Ideological and moral quality | Moral quality | Family morality; professional ethics; compliance with disciplines and laws
Professional quality | Classroom quality | Public basic courses; professional basic courses; elective courses
Professional quality | Second classroom quality | Social practice
Physical and mental health | Physical quality | Status of sports reaching the standard; physical condition
Physical and mental health | Psychological quality | Self control ability; setback attitude
Humanistic quality | Quality of humanistic theory | Culture and art education; literary contests and works publishing
Humanistic quality | Humanistic practice quality | Aesthetic quality
Competence | Scientific and technological innovation ability | Scientific research activities; science and technology competition
Competence | Logical expression ability | Logical thinking ability; oral expression ability
If the calculated $CR$ satisfies the consistency requirement, the calculated weights are objective and acceptable [4]. The final result of the comprehensive quality evaluation of any student in the university is formula (4):
$p = \sum_{i=1}^{n_{Evaluation}} \tau_i \cdot \omega_i$ (4)
Put the relevant data into formula (4) to complete the evaluation of college students' comprehensive quality, and store the evaluation data in the storage database.
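A small Python sketch of formulas (2)-(4) follows: the credit-weighted course score, the consistency check, and the final weighted evaluation. All numbers are invented, and the random index RI = 1.12 is the standard AHP value for five indicators (an assumption, since the paper does not give RI values).

```python
# Formula (2): credit-weighted course score from per-course credits and grade points.
credits = [4, 3, 2, 3]
grade_points = [3.7, 3.0, 4.0, 2.7]
tau_achievement = sum(F * g for F, g in zip(credits, grade_points)) / sum(credits)

# Formula (3): consistency check of the indicator weights.
lambda_max, n_eval = 5.18, 5
CI = (lambda_max - n_eval) / (n_eval - 1)
RI = 1.12                                  # assumed random index for n = 5
CR = CI / RI
assert CR < 0.1, "weights fail the consistency check"

# Formula (4): final weighted evaluation over the indicator scores.
tau = [tau_achievement, 8.5, 7.9, 9.0, 8.2]
omega = [0.30, 0.25, 0.20, 0.15, 0.10]
p = sum(t * w for t, w in zip(tau, omega))
print(f"tau_achievement={tau_achievement:.2f}, CR={CR:.3f}, p={p:.2f}")
```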
2.3 Mining the Comprehensive Quality Evaluation Data of College Students
According to the evaluation process of college students' comprehensive quality, in the Web network environment, the evaluation data is mined according to the flow in Fig. 4.
Fig. 4. Data mining flow chart of student comprehensive quality evaluation
For fixed information, keywords are written into the program when the code is written; when extracting quality evaluation data, it is only necessary to intercept the content near the keywords.
2.4 Use the Web Network to Transmit Quality Evaluation Data
The real-time collected student comprehensive quality evaluation data is transmitted through the Web network to ensure that the collected evaluation data can be stored in the same device [5]. The evaluation data sender always keeps a variable $\beta$, which is used to evaluate the marking of data packets and is updated in each RTT, as shown in formula (5):
$\beta = (1 - \varepsilon) \times \beta + \varepsilon \times \kappa_{proportion}$ (5)
where $\varepsilon$ is the transmission channel weight, and $\kappa_{proportion}$ is the proportion of marked evaluation data in all evaluation data. The receiver of the Web network adjusts the
congestion window according to the data marking. The adjustment result is shown in formula (6):
$\delta = \delta \times \frac{1 - \beta}{2}$ (6)
where $\delta$ is the congestion coefficient of the data transmission channel. The calculation result of formula (5) is substituted into formula (6), and the congestion coefficient of the Web network data transmission channel is adjusted to ensure that the real-time collected college students' comprehensive quality evaluation data can be stably input to the equipment implementing the integration method.
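A minimal Python sketch of the update rules in formulas (5) and (6); the channel weight ε and the per-RTT marked fractions are invented values.

```python
epsilon = 0.0625            # transmission channel weight (assumed)
beta, delta = 0.0, 32.0     # marking estimate and congestion coefficient

for kappa in [0.0, 0.1, 0.5, 0.4, 0.0]:             # marked fraction per RTT
    beta = (1 - epsilon) * beta + epsilon * kappa   # formula (5)
    delta = delta * (1 - beta) / 2                  # formula (6)
    print(f"beta={beta:.4f}  delta={delta:.2f}")
```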
2.5 Cleaning and Conversion of College Students' Comprehensive Quality Evaluation Data
The data cleaning process differs according to task requirements and environmental characteristics. Summarizing general cleaning tools, the general process of data cleaning can be divided into four parts, and its implementation principle is shown in Fig. 5.
Fig. 5. Schematic Diagram of Data Cleaning for Student Comprehensive Quality Evaluation
The principle of data cleaning is to analyze the causes and existing forms of “dirty data”, use existing technical means and methods to clean “dirty data”, and convert “dirty data” into data meeting data quality or application requirements, so as to improve the data quality of the dataset [6]. Data cleaning mainly uses the idea of backtracking to analyze data from the source of “dirty data”, inspect each process of data set, and extract rules and strategies for data cleaning. Finally, apply these rules and policies to the dataset to discover “dirty data” and clean “dirty data”. The strength of these cleaning rules and strategies determines the quality of data after cleaning. There are two main types of data quality problems. One is related to data patterns, that is, pattern level data quality
problems. The other is related to the data itself, that is, instance-level data quality problems. The objects and methods of data cleaning are also different at the schema level and instance level. Schema-level data quality problems are mainly caused by unreasonable structure design and a lack of integrity constraints between attributes. They mainly include naming conflicts and structure conflicts. Naming conflicts mainly refer to homonymy and synonymy: the former means that the same name represents different objects, while the latter means the opposite, that different names represent the same object. Structure conflicts are mainly caused by different object representations in different data sources, for example type conflicts, dependency conflicts, and keyword conflicts. The instance-level dirty data cleaning method is mainly divided into three parts: null value cleaning, error data cleaning, and duplicate data cleaning. Null values are a common problem in data cleaning [7]. The general null value problem can be divided into two types: one is the missing value, the other is the null value. A missing value means that the value actually exists but is not stored in the field to which it belongs. Incorrect data refers to inconsistency between the comprehensive quality evaluation data of college students and the original evaluation data arising in the process of transmission or collection, while duplicate data refers to data repeatedly collected or transmitted. The compensation result for missing data in the comprehensive quality evaluation of college students is shown in formula (7):
$x = \frac{x_{t-1} + x_{t+1}}{2}$ (7)
In the formula, $x_{t-1}$ and $x_{t+1}$ represent the data collected at the moments immediately before and after the missing datum. If the missing evaluation data is not unique, the missing values are supplemented one by one until all evaluation data are complete. The abnormal data in the comprehensive quality evaluation data of college students is mainly caused by noise in the data acquisition and transmission environment, so wavelet denoising is used to process the abnormal data. The wavelet-domain denoising method exploits the different characteristics of the wavelet coefficients of data and noise: the amplitude of the wavelet coefficients varies with scale, and as the scale increases, the wavelet coefficients of noise decay quickly while the wavelet coefficients of the data remain basically unchanged. Equivalently, the noisy data is decomposed on multiple scales, and then, according to the characteristics of the wavelet coefficients of data and noise on different scales, corresponding rules are constructed to process the wavelet coefficients nonlinearly. The purpose is to retain the wavelet coefficients of the data to the maximum extent, eliminate the wavelet coefficients of the noise, and finally reconstruct the processed wavelet coefficients [8] using the inverse wavelet transform to restore the effective data. The filtering process for abnormal data can be quantified as formula (8):
$x' = \begin{cases} x, & |x| \geq \sigma\sqrt{2\ln x_{max}} \\ 0, & |x| < \sigma\sqrt{2\ln x_{max}} \end{cases}$ (8)
where $\sigma$ is the standard deviation of the collected data, and $x_{max}$ is the maximum value in the comprehensive quality evaluation data.
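A sketch of formulas (7) and (8) in Python: neighbour-mean compensation for a missing sample and hard thresholding of abnormal values. The data series is invented.

```python
import numpy as np

x = np.array([7.8, 8.1, np.nan, 8.4, 0.2, 8.0])

# Formula (7): fill a missing value with the mean of its two neighbours
# (assumes the gap is not at either end of the series).
for t in np.flatnonzero(np.isnan(x)):
    x[t] = (x[t - 1] + x[t + 1]) / 2

# Formula (8): hard-threshold abnormal samples with the universal
# threshold sigma * sqrt(2 ln x_max).
sigma = np.std(x)
threshold = sigma * np.sqrt(2 * np.log(x.max()))
x = np.where(np.abs(x) >= threshold, x, 0.0)
print(x)  # the outlier 0.2 is zeroed, the rest is kept
```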
In the process of processing repeated quality evaluation data, formula (9) is first used to measure the similarity between any two evaluation data:
$s = \frac{2 x_i x_j}{x_i^2 + x_j^2}$ (9)
where $x_i$ and $x_j$ are the $i$-th and $j$-th evaluation data in the collection results respectively. If the calculated value of $s$ is higher than the set threshold $s_0$, then $x_i$ and $x_j$ are considered to be duplicate data and either of them is deleted; otherwise, they are considered non-duplicate, and the similarity measurement of the next group of data is performed until all the collected data have been measured and filtered. Finally, the cleaned student comprehensive quality evaluation data is converted and processed, as shown in Fig. 6.
Fig. 6. Comprehensive Quality Evaluation Data Conversion Process
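Assuming the reconstructed form of formula (9) above, a minimal Python sketch of the duplicate filter; the records and the threshold $s_0$ are invented.

```python
def similarity(xi: float, xj: float) -> float:
    """Formula (9); equals 1 exactly when xi == xj."""
    return 2 * xi * xj / (xi ** 2 + xj ** 2)

records, s0 = [8.2, 8.2, 7.1, 9.4], 0.999
kept: list[float] = []
for r in records:
    # Keep a record only if it is not near-identical to one already kept.
    if all(similarity(r, k) < s0 for k in kept):
        kept.append(r)
print(kept)  # one of the two 8.2 duplicates is removed
```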
After the processing in Fig. 6, the collected comprehensive quality evaluation data all share the same format.
2.6 Self Adaptively Divide the Data Type of Students' Comprehensive Quality Evaluation
The random forest algorithm is used to classify the collected and processed comprehensive quality evaluation data of students. Random forest is a method proposed to solve the over-fitting problem of decision trees. This method adopts the
Bagging ensemble learning idea and the advantages and characteristics of the random subspace algorithm. The random forest algorithm [9] includes two steps: training and decision-making. First, the given sample data is preprocessed, and the processed samples are divided into two parts in a certain proportion for model training and result testing. In the training process, Bootstrap sampling is used to extract $k$ equally large subsets from the total sample set, and a decision tree is constructed for each subset. Each decision tree selects the optimal attribute for classification at each node according to certain rules; training at a node is complete when all training samples reaching it are of the same type. The decision-making formula is formula (10):
$f(x) = \arg\max_C \sum_{i=1}^{k} h(r_i(x) = C)$ (10)
In the formula, $x$ refers to the collected and processed evaluation data samples, $C$ refers to the output variable, that is, the classification label, and $h(\cdot)$ and $r_i$ correspond to the indicator function and a single decision tree. In the test process, a data set without type labels is input, and the classification results of the $k$ decision trees are obtained through the trained random forest; voting over the $k$ results completes the classification prediction of the samples. The random forest algorithm optimizes the overall performance of the classification system mainly through the comprehensive optimization of several weak classifiers. Its advantages lie in its fast speed, strong anti-noise ability, and resistance to over-fitting; it generalizes well, balances errors on unbalanced inputs, maintains classification accuracy, and can also be used in unsupervised learning tasks. The type of students' comprehensive quality evaluation data is determined according to the above formula.
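A minimal scikit-learn sketch of this voting classifier; the five-dimensional feature rows and type labels are invented examples.

```python
from sklearn.ensemble import RandomForestClassifier

# Each row: numeric codes for learner, time, location, equipment, event context.
X_train = [[1, 0, 0, 1, 2], [2, 1, 1, 0, 0], [1, 1, 0, 1, 1],
           [3, 0, 1, 1, 2], [2, 0, 0, 0, 1], [3, 1, 1, 0, 0]]
y_train = ["study", "leisure", "study", "study", "leisure", "leisure"]

# bootstrap=True draws a Bootstrap sample per tree, as in the training step.
forest = RandomForestClassifier(n_estimators=50, bootstrap=True, random_state=0)
forest.fit(X_train, y_train)

# Prediction is the majority vote over the k trees,
# i.e. f(x) = argmax_C sum_i h(r_i(x) = C) of formula (10).
print(forest.predict([[1, 0, 1, 1, 2]]))
```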
2.7 Realizing Adaptive Integration of Students' Comprehensive Quality Evaluation Data
According to the classification results of students' comprehensive quality evaluation data, formula (11) is used for adaptive integration of the evaluation data:
$W = \frac{\gamma \times \varsigma_{Statistics} \oplus A_{integration} \oplus g_{integration}}{f(x) \times \omega_{state}}$ (11)
where $\gamma$ refers to the correlation between integrated data, $f(x)$ refers to the data classification processing result, $\varsigma_{Statistics}$ and $\omega_{state}$ correspond to the statistical characteristics and status weights between evaluation data, $A_{integration}$ refers to the data integration range, and $g_{integration}$ refers to the data integration rules [10]. By substituting the relevant data into formula (11), the adaptive integration result of the comprehensive quality evaluation data of college students can be obtained.
3 Experimental Analysis of Integrated Performance Test
In order to test the data integration performance of the Web-based adaptive integration method of college students' comprehensive quality evaluation data and the application performance of the integration method in data retrieval, a comparative test method is adopted: the traditional data integration method based on SOA software architecture and the data integration method based on sample weight update are set as the experimental comparison methods. The analysis of integration integrity and redundancy, together with the statistics and comparison of the evaluation data retrieval speed after applying the different integration methods, reflects the advantages of the optimized design method in integration performance and application performance. Among the comparison methods, the data of the data integration method based on SOA software architecture is accessed through the integration layer, and the service implementation itself is responsible for defining and supporting the verification, storage and retrieval rules for special data sets. The data integration method based on sample weight update indirectly increases the weight of minority samples by applying SMOTE to them according to the prediction results of the base classifier during the boosting process, and uses the updated FocalBoost algorithm to achieve data integration.
3.1 Configuration Integration Method Operation and Test Environment
The optimized Web-based adaptive integration method of college students' comprehensive quality evaluation data is developed under the Windows 7 operating system. The basic functions are implemented in the Java language to ensure the portability and good cross-platform performance of the system. Eclipse 3.6.2, an open-source Java-based extensible development platform, and its visual programming plug-in are used; the Eclipse development environment supports Java development well. The database management system used is Oracle 11g, accessed via JDBC. Java's XML API (DOM4J) is used to complete XML document parsing. In order to ensure the normal operation of the designed integration method, the hardware environment is also configured: the main test computer is equipped with a 2 GHz CPU and 256 MB of memory, and the operation and debugging of the integration method are finally completed on JBuilder 2005.
3.2 Prepare Data Samples for Comprehensive Quality Evaluation of College Students
The experimental data used in this experiment is the test data generated by the data generator of the Febrl system. Febrl provides an open-source data linkage framework that, in the teaching field, is mainly used to standardize and clean data in the field of students' ideological and moral education. The data samples of students' daily behavior tracks generated by it are shown in Fig. 7.
Fig. 7. Sample Generation Results of Students’ Daily Behavior Trajectories
including the number of errors introduced per record and per field. In addition, the size of the generated original data, the number of duplicate records, the maximum number of duplicates generated for each record, and the proportions of single-character errors, nulled field values and inverted field values can also be controlled. The advantage of this generator is that the location and type of the introduced errors are designed according to research on real-world typographical and related errors, so the generated data simulate real-world data well. The test data are produced as an Excel file; in the experiment, SQL Server 2000 is used to import it as a data table.

3.3 Setting the Integration Performance Test Indicators

The data integration integrity coefficient, redundancy coefficient and reliability coefficient are set as quantitative indicators of data integration performance. The integrity coefficient is calculated as shown in Formula 12:

$$\mu_{complete} = \frac{N_{integration}}{N_{sample} - \frac{1}{2} N_{redundancy}} \quad (12)$$
where $N_{integration}$, $N_{sample}$ and $N_{redundancy}$ respectively represent the integrated output volume of comprehensive quality evaluation data, the volume of prepared sample data, and the volume of redundant data in the set samples. In addition, the data redundancy coefficient and reliability coefficient are calculated as shown in Formula 13:

$$\begin{cases} \mu_{redundancy} = \dfrac{N_{repeat}}{N_{integration}^{2}} \\[2mm] \mu_{reliable} = \dfrac{N_{integration} - k_l}{\vartheta_{difference}} \times 100\% \end{cases} \quad (13)$$
where $N_{repeat}$ is the amount of duplicate data in the integration result, and $k_l$ and $\vartheta_{difference}$ correspond to the data integration number and difference, respectively. The higher the integrity and reliability coefficients and the lower the redundancy coefficient, the better the integration performance of the corresponding adaptive integration method for college students' comprehensive quality evaluation data. In addition, the data retrieval delay is set as the test indicator of the integration method's application performance, calculated as Formula 14:

$$t = t_{output} - t_{input} \quad (14)$$

In the formula, $t_{input}$ represents the input time of the students' comprehensive quality evaluation data sample, and $t_{output}$ represents the output time of the integrated results. The lower the value of the index $t$, the better the application performance of the corresponding integration method.
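To make the indicator definitions concrete, the sketch below computes Formulas 12, 13 and 14 directly; it is illustrative only, and all variable names are hypothetical rather than taken from the implementation described above:

```python
# Illustrative computation of the test indicators (Formulas 12-14).
# All arguments are measured data volumes (same unit, e.g., GB) or times (s).

def integrity(n_integration, n_sample, n_redundancy):
    """Formula 12: integrity coefficient of the integration result."""
    return n_integration / (n_sample - 0.5 * n_redundancy)

def redundancy(n_repeat, n_integration):
    """Formula 13, first case: redundancy coefficient."""
    return n_repeat / n_integration ** 2

def reliability(n_integration, k_l, difference):
    """Formula 13, second case: reliability coefficient, in percent."""
    return (n_integration - k_l) / difference * 100.0

def retrieval_delay(t_input, t_output):
    """Formula 14: data retrieval delay in seconds (lower is better)."""
    return t_output - t_input

print(integrity(66.8, 68.4, 0.8))   # ~0.982 for the first row of Table 2
```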
3.4 Experimental Process and Result Analysis

3.4.1 Adaptive Integration Performance Test of the Evaluation Data

Under the configured experimental environment, the development and operation of the optimized design method are completed, and the prepared data samples of college students' comprehensive quality evaluation are substituted into the integration program to obtain the integration results, as shown in Fig. 8.
Fig. 8. Adaptive Integration Results of Comprehensive Quality Evaluation Data
Similarly, the comparison methods are developed and run to obtain their integration results. From the statistics of the relevant data volumes, the test results for the data integration integrity coefficient are obtained, as shown in Table 2.
Table 2. Data table of the data integration integrity coefficient test

Evaluation data samples/GB | Redundant data samples/GB | SOA architecture integrated volume/GB | Sample weight update integrated volume/GB | Web network method integrated volume/GB
68.4 | 0.8 | 62.8 | 63.4 | 66.8
75.2 | 0.4 | 72.6 | 72.8 | 75.0
55.6 | 0.2 | 54.4 | 54.2 | 55.3
38.1 | 0.6 | 35.3 | 35.5 | 37.6
44.5 | 0.4 | 43.0 | 43.2 | 44.1
50.6 | 0.6 | 47.2 | 47.6 | 50.2
37.4 | 0.5 | 33.8 | 35.2 | 37.0
65.7 | 0.5 | 61.4 | 61.7 | 65.3
By substituting the data in Table 2 into Formula 12, the average data integration integrity coefficients of the two comparison methods are 0.945 and 0.954 respectively, while the average integrity coefficient of the optimized design method is 0.996. In addition, the test comparison results for the data redundancy coefficient and reliability coefficient are obtained through Formula 13, as shown in Fig. 9.
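The quoted averages can be reproduced from Table 2 with a short script; the code below is a hypothetical check, not part of the original experiment:

```python
# Each row: (sample volume, redundant volume, SOA, weight-update, Web) in GB.
rows = [
    (68.4, 0.8, 62.8, 63.4, 66.8), (75.2, 0.4, 72.6, 72.8, 75.0),
    (55.6, 0.2, 54.4, 54.2, 55.3), (38.1, 0.6, 35.3, 35.5, 37.6),
    (44.5, 0.4, 43.0, 43.2, 44.1), (50.6, 0.6, 47.2, 47.6, 50.2),
    (37.4, 0.5, 33.8, 35.2, 37.0), (65.7, 0.5, 61.4, 61.7, 65.3),
]

def mean_integrity(column):
    """Average of Formula 12 over all rows for one method's column (2, 3 or 4)."""
    vals = [r[column] / (r[0] - 0.5 * r[1]) for r in rows]
    return sum(vals) / len(vals)

for name, col in [("SOA", 2), ("weight update", 3), ("Web method", 4)]:
    print(name, round(mean_integrity(col), 3))  # 0.945, 0.954, ~0.996
```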
Fig. 9. Test results of data integration redundancy and reliability factor
It can be seen intuitively from Fig. 9 that, compared with the traditional integration methods, the optimized integration method has a lower redundancy coefficient and a higher reliability coefficient, that is, higher integration performance.

3.4.2 Application Performance Test of the Evaluation Data Adaptive Integration Method

The integration method for comprehensive quality evaluation data is applied to data retrieval; the input time of the search terms and the output time of the search results are recorded, and the test results reflecting the application performance of the integration method are obtained through Formula 14, as shown in Fig. 10.
Fig. 10. Application performance test results of the evaluation data integration method
From the average value calculation, the mean data retrieval delays under the two comparison integration methods are 7.4 s and 6.8 s respectively, while the mean retrieval delay of the optimized Web-based adaptive integration method for college students' comprehensive quality evaluation data is 1.7 s; that is, the optimized integration method has higher application performance.
4 Conclusion

The comprehensive quality of college students is the basis on which students establish correct values and a correct world outlook, and it also lays the foundation for their learning. Evaluating the comprehensive quality of college students and analyzing the evaluation data not only reflects students' basic quality but also provides a reference for the ideological and political education of college students, and thus has high research value. The next step is to apply this method in more colleges and universities for verification, and to further analyze the integration effect of the proposed
Web-based adaptive integration method for college students' comprehensive quality evaluation data, identifying the deficiencies revealed by the experimental tests so as to further broaden the method's scope of application.
References

1. Zavalina, O.L., Burke, M.: Assessing skill building in metadata instruction: quality evaluation of Dublin Core metadata records created by graduate students. J. Educ. Library Inform. Sci. 62(4), 423–442 (2021)
2. Vitali, E., Gadioli, D., Palermo, G., et al.: An efficient Monte Carlo-based probabilistic time-dependent routing calculation targeting a server-side car navigation system. 9(2), 1006–1019 (2021)
3. Durgadevi, P., Srinivasan, S.: Resource allocation in cloud computing using SFLA and cuckoo search hybridization. Int. J. Parall. Programm. 48(3), 549–565 (2018). https://doi.org/10.1007/s10766-018-0590-x
4. Li, L., Dong, J., Zuo, D., et al.: SLA-aware and energy-efficient VM consolidation in cloud data centers using host state 3rd-order Markov chain model. Chin. J. Electron. 29(6), 1207–1217 (2020)
5. Guo, T., Shi, S., Yuan, J.: Missile data acquisition storage device design. J. Ordnance Equip. Eng. 41(6), 160–163 (2020)
6. Hao, G., Haibin, L., Jiao, F., et al.: Ship AIS data quality evaluation algorithm based on big data processing. Comput. Simul. 39(2), 298–303 (2022)
7. Rahul, K., Banyal, R.K.: Detection and correction of abnormal data with optimized dirty data: a new data cleaning model. Int. J. Inform. Technol. Decision Making 20(02), 809–841 (2021)
8. Nagaveni, K.: Haar wavelet collocation method for solving the telegraph equation with variable coefficients. Int. J. Appl. Eng. Res. 15(3), 235–243 (2020)
9. Bedolla-Ibarra, M.G., del Carmen, M., Cabrera-Hernandez, M.A., Aceves-Fernández, S.-A.: Classification of attention levels using a random forest algorithm optimized with particle swarm optimization. Evol. Syst. 13(5), 687–702 (2022). https://doi.org/10.1007/s12530-022-09444-2
10. Mekala, M.S., Jolfaei, A., Srivastava, G., et al.: Resource offload consolidation based on deep-reinforcement learning approach in cyber-physical systems. IEEE Trans. Emerg. Topics Comput. Intell. 6(2), 245–254 (2020)
A Method of Mining Abnormal Data of College Students' Physical Fitness Test Based on Deep Learning

Liyi Xie1(B) and Hui Liu2

1 Shandong Sport University, Jinan 250014, China
[email protected]
2 College of Information Engineering, Fuyang Normal University, Fuyang 236041, China
Abstract. In order to provide effective reference data for improving college students' physique, a deep learning algorithm is used to optimize the design of an abnormal data mining method for the college students' physique test. Hardware equipment is used to obtain physique test data samples; according to the designed abnormality detection criteria for the student physique test, the deep learning algorithm extracts features from the test data and determines whether the current data is a mining target. Once the mining targets are obtained from the data samples, association rules for abnormal data mining are generated, and the final abnormal data mining results are obtained through missing-data interpolation and duplicate-data filtering. Comparison with traditional mining methods shows that the accuracy and recall of the optimized abnormal data mining method are significantly improved.

Keywords: Deep Learning · Physical Fitness Test of College Students · Test Abnormal Data · Data Mining
1 Introduction

The physique test is based on the concept of constitution, and its content covers four major qualities: body shape, body function, physical quality and psychological quality [1]. At present, besides enterprises and institutions, physical fitness testing has gradually been extended to the college environment. Through the physical fitness test of college students, the basic physical condition of students can be understood in time, providing a basic guarantee for their study in colleges and universities. Although the Ministry of Education has repeatedly emphasized the importance of college students' physical health, data from the research reports on Chinese students' physical fitness and health over the recent three years show that while the height of college students keeps increasing, other indicators show a downward trend. In order to judge whether the physical health of college students is abnormal, it is necessary to mine the abnormal data of the college students' physical fitness test. The abnormal data of college students'
physique test mainly refers to data that fall outside the normal range in the students' physical test results. Mining and analyzing the abnormal data provides a data reference for determining students' health status. Data mining is the process of extracting potentially useful and credible information and knowledge hidden in a large amount of incomplete, noisy, fuzzy and random original data that people do not know in advance. At this stage, the more mature data mining methods include methods based on rough set theory, methods based on genetic algorithms, and algorithms based on association rules. Supported by these different algorithms, they provide reference rules for data mining and obtain mining results that meet the requirements. However, when these traditional data mining methods are applied to mining the abnormal data of college students' physical fitness tests, they have difficulty judging abnormal data within massive data, so the obtained mining results suffer from missing abnormal data, mining errors and other problems. Therefore, a deep learning algorithm is introduced. Deep learning plays a major role in the field of data processing thanks to its strong ability to learn data characteristics and its wide data coverage, and it shows a trend of replacing traditional machine learning methods. A deep learning method can not only learn data features by automatically optimizing a loss function, but also effectively extract the key features in the data. With the development of college students' physique testing technology, the test data are growing exponentially, and deep learning's powerful computing ability meets the needs of processing huge data sets. Deep learning includes branches such as artificial neural network algorithms, classical autoencoder network algorithms and convolutional neural network algorithms; in practice, a specific algorithm can be selected according to actual needs. In this paper, the deep learning algorithm is applied to mining abnormal data in the college students' physique test. The task is realized through the following aspects: setting the abnormality detection criteria of the college students' physique test, obtaining the physique test data samples, using the deep learning algorithm to extract features from the test data, detecting the abnormal physique test data, and generating the association rules for abnormal data mining. The experimental results show that this method can improve the mining performance for abnormal data of the college students' physique test.
2 Design of the Abnormal Data Mining Method for the College Students' Physique Test

The process of data mining can be roughly divided into four steps. The first and most critical step is data preprocessing. The second step is to mine the data with algorithms, mainly machine learning algorithms. The third step is to evaluate the mining algorithm model and its results. Because different data sets and different mining algorithms produce different results, the fourth step is to present the results in an appropriate form. Figure 1 shows the basic operation process of the data mining method.
Fig. 1. Data mining flow chart
In the design of the abnormal data mining method for the college students' physique test, the deep learning algorithm is used to judge abnormal conditions in the sample data and then determine the mining targets, so as to obtain the abnormal data mining results.

2.1 Setting the Detection Criteria for the Abnormal Physical Fitness Test of College Students

According to the actual situation of college students and the validity, reliability and objectivity of the test indicators, height, weight, body composition, vital capacity, cardiopulmonary function, step test, bone density, 10 m × 4 shuttle run, grip strength, standing on one foot with eyes closed, reaction time, sit-ups, push-ups, vertical jump and sit-and-reach are finally selected as the physical fitness test indicators [2]. The normal ranges of some fitness test indicators are shown in Table 1.

Table 1. Normal ranges of college students' physical fitness test indicators

Physical indicators | Normal range for boys | Normal range for girls
BMI | [17.9, 23.9] | [17.2, 23.9]
Vital capacity | [3100 mL, 5040 mL] | [2000 mL, 3400 mL]
Sitting forward flexion | [3.7 cm, 24.9 cm] | [6 cm, 25 cm]
Standing long jump | [208 cm, 273 cm] | [151 cm, 207 cm]
Pull-ups | [10, 19] | -
50 m dash | [6.7 s, 9.1 s] | [7.5 s, 10.3 s]
800/1000 m run | [3.17 min, 4.32 min] | [3.18 min, 4.34 min]
Abdominal curls | - | [26, 56]
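To make the screening rule concrete, the hypothetical sketch below flags indicator values that fall outside the Table 1 ranges (male columns transcribed from the table) as mining targets:

```python
# Normal ranges from Table 1 (male students); values outside a range
# are treated as abnormal-data mining targets.
NORMAL_RANGES_MALE = {
    "bmi": (17.9, 23.9),
    "vital_capacity_ml": (3100, 5040),
    "sit_and_reach_cm": (3.7, 24.9),
    "standing_long_jump_cm": (208, 273),
    "pull_ups": (10, 19),
    "dash_50m_s": (6.7, 9.1),
}

def is_mining_target(indicator, value, ranges=NORMAL_RANGES_MALE):
    """True if the value lies outside the normal range for this indicator."""
    lo, hi = ranges[indicator]
    return not (lo <= value <= hi)

print(is_mining_target("bmi", 26.1))  # True: outside [17.9, 23.9]
```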
The calculation of the BMI index can be expressed as Formula 1:

$$\beta_{BMI} = \frac{m}{h^2} \quad (1)$$
In Formula 1, m and h represent weight and height respectively. The normal ranges of the other body measurement indicators can be obtained in the same way. In the actual data mining process, first judge whether the current data fall within the normal ranges set in Table 1: data outside the normal range are the targets of abnormal data mining, while data within the range are skipped.

2.2 Obtaining Physical Fitness Test Data Samples of College Students

In the actual process of the college students' physique test, intelligent equipment can be used to obtain the initial data samples. Taking the vital capacity test as an example, an XGZP6847 gas pressure sensor is used to collect the exhaled gas pressure of the tested person, and some pins of the sensor are connected to the data transmission components. After each module is powered on, the transmission components perform ad hoc network communication according to the configured components, and the data processing components establish communication with the PC through the wireless communication module. When the subject exhales into the sensor input through the trachea, the voltage output by the sensor changes [3]. As the sensor output is an analog signal, analog-to-digital conversion is required; the data transmission element itself provides this function. A register is selected to control the reference voltage and decimation rate of a single conversion channel and to store the conversion result. The result is in two's complement form with its high two bits zero, so it needs to be shifted. The transmission chip of the wireless sensor node is connected directly to the PC through the serial port. The data displayed on the PC show that the sensor has captured the output voltage change caused by the test, which constitutes the vital capacity test data sample. In addition, for the data stored in the college students' physique test management platform, a heuristic crawler tool can be used for data acquisition. The implementation process of the heuristic crawler tool is shown in Fig. 2.

Fig. 2. Flow chart for obtaining physical test data samples of college students

According to Fig. 2, data acquisition is the first step of the process. It is completed by the web crawler, which finally yields a minable data set. Two rounds of crawling are required, which together cover all the data in the college students' physique test management platform.

2.3 Extracting the Features of Body Measurement Data with the Deep Learning Algorithm

Taking the acquired physical fitness test data samples as the processing target, the deep learning algorithm is used to extract features from the data, laying the foundation for determining abnormal data states. The optimized data mining algorithm uses artificial neural networks, that is, mathematical models simulating biological neural networks. Biological neural networks are networks composed of biological brain neurons, cells, synapses, etc., which generate biological consciousness and support biological thinking and action; their essential component is the neuron [4]. An artificial neural network uses computers and other equipment to simulate the basic functions of biological neurons and to learn the human capacity for independent thinking. To simulate the nonlinear operation of neurons, artificial neurons apply nonlinear activation functions to the weighted sum, giving the deep neural network nonlinear learning ability. Figure 3 shows the structure of the mathematical model of a single artificial neuron.
Fig. 3. Structure of neuron model
In Fig. 3, $\gamma_i$ is the action threshold of the neuron, $f_{activation}(\cdot)$ is the activation function, and $y$ is the actual output value of the neuron node. Abstracting this with mathematical expressions, a neuron has two states, active or inhibitory, represented by ±1 [5]. The output of any neuron $i$ in the artificial neural network can be expressed as Formula 2:

$$y_i = \text{sgn}\left(\sum_{j=1}^{n_{net}} \omega_{ji}\, x_j - \gamma_i\right) \quad (2)$$
In Formula 2, $\omega_{ji}$ is the weight between the input-layer and hidden-layer nodes; the magnitude and sign of the weight represent the synaptic strength and the two states of excitation and inhibition. $n_{net}$ is the number of
neurons in the constructed artificial neural network. $\text{sgn}(\cdot)$ is a step (sign) function, expressed as Formula 3:

$$\text{sgn}(x) = \begin{cases} -1, & x < 0 \\ 1, & x \ge 0 \end{cases} \quad (3)$$

In Formula 3, $x$ is the input value of the artificial neural network. The hyperbolic tangent is used as the activation function of the neurons, expressed as Formula 4:

$$f_{activation}(x) = \tanh(x) = \frac{e^x - e^{-x}}{e^x + e^{-x}} \quad (4)$$
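A minimal sketch of the single-neuron model of Formulas 2 to 4, with hypothetical weights, inputs and threshold:

```python
import math

def sgn(x):
    """Formula 3: sign (step) function."""
    return 1 if x >= 0 else -1

def f_activation(x):
    """Formula 4: hyperbolic tangent activation."""
    return math.tanh(x)

def neuron_output(inputs, weights, gamma):
    """Formula 2: thresholded weighted sum of one artificial neuron."""
    s = sum(w * x for w, x in zip(weights, inputs)) - gamma
    return sgn(s)

print(neuron_output([0.5, -0.2, 0.9], [0.4, 0.1, 0.7], gamma=0.3))  # 1
print(f_activation(0.81))
```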
In the process of feature extraction from the body measurement data, the forward and backward propagation of the deep learning algorithm are used. The propagation principle is shown in Fig. 4.
Fig. 4. Schematic diagram of forward and backward propagation of deep learning algorithm
The weights on the connections between the network layers form two weight matrices $W_1$ and $W_2$. $I$, $G$ and $O$ are defined as the input, hidden-layer and output vectors respectively. The forward propagation process can then be expressed as Formula 5:

$$\begin{cases} G = f_{activation}(W_1 \cdot I) \\ O = f_{activation}(W_2 \cdot G) \end{cases} \quad (5)$$

Backpropagation updates the connection weights by propagating the network error backwards. Assuming each training sample is $(\alpha, \chi)$, the forward propagation output
is $u$ and each node error is $\varepsilon$, the node error of the output layer is expressed as Formula 6:

$$\varepsilon_{oi} = u_i(1 - u_i)(\chi_i - u_i) \quad (6)$$
The node error of the hidden layer is expressed as Formula 7:

$$\varepsilon_{gi} = g_i(1 - g_i) \sum_{k=1}^{n_{net}} \omega_{ki}\, \varepsilon_{ok} \quad (7)$$
No matter how many hidden layers there are, forward and backward propagation follow the above formulas. Each weight is updated as in Formula 8:

$$\omega_{ij} \leftarrow \omega_{ij} + \gamma\, \varepsilon_j\, x_{ji} \quad (8)$$
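The propagation rules of Formulas 5 to 8 can be summarized in a short NumPy sketch; the network shapes, inputs and targets below are illustrative assumptions, and the error terms follow the paper's formulas as written:

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))   # input (3) -> hidden (4)
W2 = rng.normal(size=(2, 4))   # hidden (4) -> output (2)
gamma = 0.0001                 # learning rate used in the experiments below

def forward(i):
    """Formula 5: G = f(W1 . I), O = f(W2 . G), with tanh activation."""
    g = np.tanh(W1 @ i)
    o = np.tanh(W2 @ g)
    return g, o

def backward(i, target):
    """Formulas 6-8: node errors and weight updates."""
    global W1, W2
    g, u = forward(i)
    eps_o = u * (1 - u) * (target - u)      # Formula 6
    eps_g = g * (1 - g) * (W2.T @ eps_o)    # Formula 7
    W2 += gamma * np.outer(eps_o, g)        # Formula 8, output layer
    W1 += gamma * np.outer(eps_g, i)        # Formula 8, hidden layer

backward(np.array([0.2, -0.1, 0.5]), np.array([1.0, -1.0]))
```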
The emergence of the back propagation algorithm solves the problem that the error of the original multi-layer perceptron could not be propagated back, making deep neural networks possible [6]. In the actual data feature extraction process, the average and maximum values of the data are set as feature vectors. The acquired initial college students' physique test data are input into the constructed artificial neural network, and the feature vector extraction result is given by Formula 9:

$$\begin{cases} \delta_{avg} = \dfrac{1}{N_c} \sum\limits_{i=1}^{n_{net}} x_i \\[2mm] \delta_{max} = \max(x) \end{cases} \quad (9)$$

In Formula 9, $N_c$ is the number of sampled physique test data, $\max(\cdot)$ is the maximum solution function, and $\delta_{avg}$ and $\delta_{max}$ are the extracted average and maximum eigenvectors. Finally, Formula 10 is used to fuse the extracted feature vectors:

$$\delta_{com} = \sum_{i=1}^{n_t} \delta_i \quad (10)$$
In Formula 10, $\delta_i$ is an extracted feature vector and $n_t$ is the number of feature vectors extracted from the abnormal physique test data. This completes the feature extraction for the abnormal data of the college students' physical fitness test.

2.4 Detecting Abnormal Data in the College Students' Physique Test

According to the feature extraction results output by the artificial neural network algorithm, we can judge whether the current data is abnormal through a similarity measurement.
The similarity measurement can be expressed as Formula 11:

$$\varphi = \frac{|\delta_{com} \cap \delta_{set}|}{\sqrt{|\delta_{com}||\delta_{set}|}} \quad (11)$$

In Formula 11, $\delta_{com}$ and $\delta_{set}$ are the extracted comprehensive data features and the set normal-range data features, respectively. If the result of Formula 11 is higher than the threshold $\varphi_0$, the current physique test data is within the normal range, i.e., normal; otherwise, the current data is considered abnormal and becomes the target of abnormal data mining.
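Treating the extracted features as sets, Formula 11 acts as a cosine-style overlap measure; the sketch below illustrates the detection rule under an assumed threshold of 0.8 and hypothetical feature labels:

```python
import math

def similarity(com, set_normal):
    """Formula 11: overlap of extracted features with normal-range features."""
    inter = len(com & set_normal)
    return inter / math.sqrt(len(com) * len(set_normal))

def is_abnormal(com, set_normal, phi0=0.8):
    """Data are abnormal (a mining target) when similarity falls below phi0."""
    return similarity(com, set_normal) < phi0

features = {"avg:72", "max:95"}
normal = {"avg:72", "max:88"}
print(similarity(features, normal), is_abnormal(features, normal))  # 0.5 True
```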
2.5 Generating Association Rules for Abnormal Data Mining

The main function of association analysis is to discover association rules, through which hidden relationships between different things can be found. An association rule can be expressed as an implication of the form X → Y, reflecting the interaction between itemset X and itemset Y. Through association analysis, interesting associations or correlations between items in large transactional or relational data sets can be found, revealing which transactions frequently occur together and thus supporting better decisions. An association rule indicates that there may be a strong relationship between two items [7]. Association rule mining is a rule-based machine learning technique that discovers interesting relationships in large databases; its purpose is to use certain indicators to distinguish the strong rules in a database. The commonly used measures are the confidence and the support of a rule. A final strong association rule is determined by the minimum support and minimum confidence set in the experiment: when a rule's support exceeds the current minimum support and its confidence exceeds the current minimum confidence, it is a strong association rule. Support is the probability of occurrence, i.e., the proportion of transactions containing the current itemset among all transactions, calculated according to Formula 12:

$$\mu_{Support}(X \to Y) = P(X \cup Y) \quad (12)$$
In Formula 12, $P(X \cup Y)$ represents the probability that itemset X and itemset Y occur simultaneously. Confidence is a conditional probability, i.e., the probability of Y occurring given X, calculated as shown in Formula 13:

$$\mu_{Confidence}(X \to Y) = P(Y|X) \quad (13)$$

In Formula 13, $P(Y|X)$ represents the conditional probability that itemset Y occurs in transactions that contain itemset X.
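With transactions represented as sets of indicators, support and confidence (Formulas 12 and 13) can be computed directly; the data in the following sketch are invented for illustration:

```python
def support(transactions, itemset):
    """Formula 12: fraction of transactions containing the whole itemset."""
    hits = sum(1 for t in transactions if itemset <= t)
    return hits / len(transactions)

def confidence(transactions, x, y):
    """Formula 13: P(Y|X) = support(X u Y) / support(X)."""
    return support(transactions, x | y) / support(transactions, x)

data = [{"low_vc", "high_bmi"}, {"low_vc"}, {"low_vc", "high_bmi"}, {"normal"}]
print(support(data, {"low_vc", "high_bmi"}))       # 0.5
print(confidence(data, {"low_vc"}, {"high_bmi"}))  # ~0.667
```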
In the process of generating association rules for actual mining, frequent itemsets are generated first, and a support threshold is set to obtain frequent patterns; that is, the number of times a pattern appears in the database divided by the number of records of this type
of data in the database must not be less than the set support; otherwise the pattern is considered non-frequent, its occurrence regarded as accidental and of no value [8]. On the setting of support: because the physique test data follow the normal distribution closely, setting the support too high leaves few discoverable rules; therefore this paper tries multiple support values and then screens the discovered rules. The confidence level is set to 0.8; that is, for a rule generated from a frequent pattern A, the rule is considered a reliable strong rule only if the probability of its consequent occurring among all occurrences of A is greater than 0.8.

2.6 Realization of Abnormal Data Mining in the College Students' Physique Test

In order to mine the abnormal data of the college students' physical fitness test accurately, a data mining framework is constructed. The abnormal data mining model is divided into four layers. The first layer is the site layer, where each site corresponds to a site agent and each site agent corresponds to a site information database. The second layer is the management layer, in which the management agent manages all agent information and coordinates the work between agents in each layer. The third layer is the cooperative mining layer, composed of a data preprocessing agent, a data mining agent and an evaluation agent. The fourth layer is the algorithm layer: the preprocessing agent can call m preprocessing algorithms and the mining agent can call n mining algorithms. According to the characteristics of the application, communication between the site layer and the management layer adopts point-to-point message transmission through channels established between agents, over which two-way, peer-to-peer messages are exchanged. Communication between the management layer and the cooperative mining layer adopts a blackboard mode: the communicating agents share a common area and exchange information by writing to and reading from it. Agents understand each other and process messages through a message protocol; in the implementation, the protocol uses a data-mining-oriented communication language defined on the basis of KQML.

2.6.1 Mining Abnormal Data of the College Students' Physical Measurements

With the support of the generated association rules, the Apriori algorithm is used to mine the abnormal data of the college students' physical fitness test. Apriori is an important algorithm that counts the co-occurrence frequency of pairs of indicators to determine whether a correlation exists. In Apriori, support is obtained from the occurrence frequency of indicators in the big data. To address the important problem of data redundancy, a sort-based shuffle is often used to disrupt the data set, and the corresponding logic obtains a training set and an optimal set. Support and confidence are then calculated from the data to explain the relationships between indicators in the data set.
In the actual mining process, each indicator in the data set is traversed to establish the 1-itemsets: all item IDs are output to a list of indicators, and for each record and each indicator within it, the algorithm checks whether the indicator is already in the list. If the indicator is not in
the list, it is added. After this process all indicators are sorted, the list elements are mapped to frozenset(), and the list is returned. The candidate sets are then obtained from the data set D, the candidate set Ck and the minimum support. The main steps are: generate the candidate set Ck by joining the frequent itemsets Lk-1 of the previous level, and filter Ck using the minimum support minSupport. After that, output this level's frequent itemset Lk and the support of each item; build a dictionary counting how often each item of Ck appears across all indicator records, comparing each candidate item with the original records. Each indicator record is traversed, and within it each item of Ck is compared; if an item of Ck appears in the record, the item is a subset of that record [9]. When an item of Ck is counted for the first time, its count is set to 1; otherwise it is incremented. The support is then computed from the total number of records in the data set, and the frequent itemsets filtered by the minimum support are recorded. Candidate k-itemsets are generated from the frequent (k-1)-itemsets: the number of elements k of the new candidate set is obtained, the candidate sets are output and saved, and the number of frequent itemset records controls the loop. Items in the frequent itemset are compared pairwise using two for-loops: for each item, the other items in the candidate set are traversed; the compared items share k-1 elements and differ in only one, and the differing element is merged in. Tuples are converted into lists for sorting, and matching pairs are merged to generate the (k+1)-itemsets; the final (k+1)-itemset is returned. Through a while-loop, L2 and L3 are found from L1, building a large list of ever larger itemsets until the next larger itemset is empty. The indicator combination length of a candidate set cannot exceed the maximum indicator record length of the original data set; if the maximum record length is 4, the candidate set is at most a 4-itemset. k-item candidate sets are generated from the frequent (k-1)-itemsets, frequent k-itemsets are generated from the candidates and filtered by the minimum support, the support dictionary is updated with the new supports, the new frequent k-itemsets are appended to the list of frequent itemsets, and k is incremented to generate the next level. The entire data set is first scanned to obtain all frequently occurring 1-item candidates, with k = 1 and the frequent 0-itemset the empty set. The apriori function then generates the joint frequent list L together with the support list and the minimum confidence; its output contains the confidence values used to generate the association rules. For each rule whose consequent meets the minimum confidence requirement, all items in H are traversed; the consequent's confidence is computed using set subtraction.
If the confidence is greater than the minimum confidence minConf, the rule antecedent freqSet - conseq and the consequent conseq are output. The qualifying association rules are saved along with their antecedents; the consequents and the rule items whose confidence meets the minimum requirement are returned, together with the last qualifying item, thereby realizing abnormal data mining over all the college students' physical test data samples.
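The procedure described above follows the classic Apriori pattern. The compact sketch below is a much-simplified illustration of candidate generation, minimum-support filtering and strong-rule extraction, not the authors' implementation:

```python
from itertools import combinations

def apriori(transactions, min_support=0.5):
    """Frequent itemsets with their supports (simplified Apriori)."""
    n = len(transactions)
    candidates = list({frozenset([i]) for t in transactions for i in t})
    freq = {}
    while candidates:
        level = {}
        for c in candidates:
            sup = sum(1 for t in transactions if c <= t) / n
            if sup >= min_support:          # minimum-support filtering
                level[c] = sup
        freq.update(level)
        prev = list(level)
        # Join step: merge frequent k-itemsets into (k+1)-item candidates.
        candidates = list({a | b for a, b in combinations(prev, 2)
                           if len(a | b) == len(a) + 1})
    return freq

def strong_rules(freq, min_conf=0.8):
    """Rules X -> Y whose confidence exceeds the minimum confidence."""
    for itemset, sup in freq.items():
        if len(itemset) < 2:
            continue
        for r in range(1, len(itemset)):
            for x in map(frozenset, combinations(itemset, r)):
                if x in freq and sup / freq[x] >= min_conf:
                    yield set(x), set(itemset - x), sup / freq[x]

data = [{"a", "b"}, {"a", "b", "c"}, {"a", "c"}, {"b"}]
for x, y, conf in strong_rules(apriori(data), min_conf=0.6):
    print(sorted(x), "->", sorted(y), round(conf, 2))
```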
2.6.2 Interpolating Missing Data

There may be missing values in the initially mined abnormal physical measurement data of college students, so the missing data need to be interpolated. The interpolation can be expressed as Formula 14:

$$x_{defect} = \frac{x_q + x_h}{2} \quad (14)$$
In Formula 14, $x_q$ and $x_h$ represent the data immediately before and after the missing value, respectively. In this way, the midpoint method yields the interpolation results for the missing data in the abnormal physical measurement mining output.

2.6.3 Duplicate Data Identification and Filtering

Formula 15 is used to identify whether duplicate data exist in the abnormal physical measurement data mining results.
$$s = \sqrt{(x_i - x_j)^2} \quad (15)$$

In Formula 15, $x_i$ and $x_j$ represent any two data items in the abnormal data mining results, and $s$ is their coincidence measure [10]. If $s$ is 0, $x_i$ and $x_j$ are considered duplicates and one of them is deleted; otherwise there is no duplication and no filtering is required. Finally, the processed abnormal physical test data are clustered to obtain mining results that meet the quality requirements, and the results are output in visual form.
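The two post-processing steps can be illustrated as follows (midpoint interpolation per Formula 14 and duplicate filtering per Formula 15; hypothetical code):

```python
def interpolate_missing(seq):
    """Replace None entries with the midpoint of their neighbours (Formula 14)."""
    out = list(seq)
    for i, v in enumerate(out):
        if v is None and 0 < i < len(out) - 1:
            out[i] = (out[i - 1] + out[i + 1]) / 2
    return out

def drop_duplicates(values, eps=1e-9):
    """Keep one of any pair with coincidence s = sqrt((xi - xj)^2) ~ 0."""
    kept = []
    for v in values:
        if all(abs(v - k) > eps for k in kept):
            kept.append(v)
    return kept

print(interpolate_missing([3.1, None, 3.5]))  # [3.1, 3.3, 3.5]
print(drop_duplicates([2.4, 2.4, 3.0]))       # [2.4, 3.0]
```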
3 Performance Test Experiment and Analysis

The data mining performance test is conducted on an Intel(R) Core(TM) i3 2.4 GHz machine with 2 GB of memory running Windows. Microsoft Visual C++ 6.0 is used as the data mining platform, and MySQL as the database management system.

3.1 Preparing the College Students' Physical Fitness Test Data Samples

The experimental data were provided by the physical education institutes of all colleges and universities in a city. The initial data cover the physical fitness test items of the city's college students from 2020 to 2022, specifically height, weight, vital capacity, step test, grip strength, sit-and-reach and standing long jump. Database technology was then used to generate the data mining database of this paper; after removing some useless records, the experimental data volume was 8900. Before the experiment, considering the inconsistency and incompleteness of the data, it is necessary to process the data to make it
meet the requirements of mining. The prepared data samples include both normal and abnormal physical fitness test data, randomly divided into multiple groups. The sample division and settings are shown in Table 2.

Table 2. Sample settings of the college students' physique test data

Experimental group | Normal data volume/MB | Abnormal data volume/MB
1 | 357.8 | 307.2
2 | 465.2 | 355.9
3 | 383.9 | 321.8
4 | 374.6 | 289.7
5 | 452.1 | 316.4
6 | 479.6 | 353.8
7 | 398.2 | 279.8
8 | 407.5 | 323.7
The abnormal data volumes in Table 2 are the mining target values for the abnormal physique test data.

3.2 Input Operation Parameters of the Deep Learning Algorithm

Because the optimized abnormal data mining method uses a deep learning algorithm, the algorithm's operation parameters must be set. The artificial neural network used has six layers, with the prepared physique test data samples as the input. The number of settings of the input, output and hidden layers is 10; the network size of the input and output layers is 27 × 23, and the hidden-layer network size is 54 × 46 with a stride of 1 × 1. The network contains 180 neurons, with 60 nodes in the output layer. The learning rate during propagation learning is 0.0001, and the maximum number of iterations is 2000.

3.3 Setting the Performance Evaluation Indicators for Abnormal Data Mining

In order to evaluate the mining performance quantitatively, the accuracy and recall of abnormal data mining are set as the quantitative test indicators. The accuracy indicator is calculated as shown in Formula 16:
$$\vartheta_{Acc} = \frac{Num_{abnormal}}{Num_{excavate}} \times 100\% \quad (16)$$
In Formula 16, $Num_{excavate}$ is the total volume of data output by the mining method, and $Num_{abnormal}$ is the volume of genuine physical test abnormal data in that output. In addition, the recall indicator can be expressed as Formula 17:

$$\vartheta_{Rec} = \frac{Num_{abnormal}}{Num_{set}} \times 100\% \quad (17)$$

In Formula 17, $Num_{set}$ is the total volume of abnormal data samples in the college students' physical fitness test. The higher the mining accuracy and recall, the better the mining performance of the corresponding method.
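The two evaluation indicators reduce to two ratios; an illustrative computation with hypothetical volumes:

```python
def accuracy(num_abnormal, num_excavate):
    """Formula 16: share of true abnormal records in the mining output."""
    return num_abnormal / num_excavate * 100.0

def recall(num_abnormal, num_set):
    """Formula 17: share of all abnormal samples that were actually mined."""
    return num_abnormal / num_set * 100.0

print(accuracy(280.0, 310.0), recall(280.0, 307.2))  # percentages
```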
3.4 Performance Test Process and Result Analysis

In order to show the advantages of the deep-learning-based abnormal data mining method, the experiment sets the traditional data mining method based on rough set theory and the data mining method based on the genetic algorithm as comparison methods. The prepared physique test data samples are input into each mining method to obtain the corresponding mining outputs, as shown in Fig. 5. Based on statistics of the relevant data and the calculation of Formulas 16 and 17, the comparison results reflecting the mining performance of the abnormal data mining methods are obtained, as shown in Fig. 6. It can be seen intuitively from Fig. 6 that, compared with the traditional data mining methods, the accuracy and recall of the optimized deep-learning-based abnormal data mining method are significantly improved; that is, the optimized mining method has obvious performance advantages.
Fig. 5. Abnormal data mining results of the college students' constitution test: (a) data mining method based on rough set theory; (b) data mining method based on the genetic algorithm; (c) the optimized method for mining students' physical measurement abnormal data
Fig. 6. Comparison results of the physical test abnormal data mining performance test
4 Conclusion

In order to analyze the college students' physique test accurately, this paper proposes a deep-learning-based method for mining abnormal physique test data. Deep learning is used to collect and mine the physique test data so that students with abnormalities can be identified accurately during the test. The experimental results show that the proposed method provides powerful support for research on college students' physical health. Through the application of the deep learning algorithm, the optimized design of the abnormal data mining method is realized, so that change trends can be grasped accurately and effective measures taken in time, which is conducive to the continuous improvement of college students' physique. Future research will break through technical barriers, integrate data mining technology with professional sports data analysis systems, broaden its depth and breadth in sports practice, and truly realize the transformation from data to value.
References

1. Lopes-Silva, J.P., Panissa, V.L.G., Julio, U.F., Franchini, E., et al.: Influence of physical fitness on special judo fitness test performance: a multiple linear regression analysis. J. Strength Condition. Res. 35(6), 1732–1738 (2021)
2. Wang, N., Liu, Y.: A novel most valuable player algorithm for optimization problems. Comput. Simul. 37(6), 273–282, 327 (2020)
3. Lin, S.: Data mining artificial intelligence technology for college English test framework and performance analysis system. J. Intell. Fuzzy Syst. 40(2), 1–11 (2020)
4. Tomasevic, N., Gvozdenovic, N., Vranes, S.: An overview and comparison of supervised data mining techniques for student exam performance prediction. Comput. Educ. 143, 103676.1–103676.18 (2020)
5. Yin, X.H.: Construction of student information management system based on data mining and clustering algorithm. Complexity 2021(2), 1–11 (2021)
6. Liu, S., Li, Y., Fu, W.: Human-centered attention-aware networks for action recognition. Int. J. Intell. Syst. 37(12), 10968–10987 (2022)
7. Wan, H., Tang, S.: Sentiment analysis of students in ideological and political teaching based on artificial intelligence and data mining. J. Intell. Fuzzy Syst. 3, 1–10 (2021)
8. Li, J., Lei, H., Tsai, S.B.: Online data migration model and ID3 algorithm in sports competition action data mining application. Wirel. Commun. Mob. Comput. 2021(7), 1–11 (2021)
9. Yin, Z., Cui, W.: Outlier data mining model for sports data analysis. J. Intell. Fuzzy Syst. 40(2), 1–10 (2020)
10. Liu, S., Xu, X., Zhang, Y., Muhammad, K., Fu, W.: A reliable sample selection strategy for weakly supervised visual tracking. IEEE Trans. Reliab. 72(1), 15–26 (2023). https://doi.org/10.1109/TR.2022.3162346
Low Resolution 3D Image Enhancement Based on Artificial Neural Network

Yingjian Kang1(B), Lei Ma1, Jianxing Yang1, and Shufeng Zhuo2

1 Beijing Polytechnic, Beijing 100016, China
[email protected]
2 The Internet of Things and Artificial Intelligence College, Fujian Polytechnic of Information Technology, Fuzhou 350003, China
Abstract. In order to improve the quality of low-resolution 3D images, this study proposes a low-resolution 3D image enhancement method based on an artificial neural network. First, an adjustable filter is used to divide the image categories, and the multi-angle mesh model of the machine vision system is constructed. The low-resolution image is then decomposed into multiple scales by filtering. A white balance method is used to eliminate the color deviation of low-resolution 3D images and realize their color correction. Finally, the atmospheric scattering model is used to deblur the low-resolution 3D image. Combining the advantages of a color model transformation algorithm and an artificial neural network, the low-resolution 3D image enhancement algorithm is designed. Experimental results show that this method can improve the quality of low-resolution 3D images and strengthen the enhancement effect.

Keywords: Artificial Neural Network · Low Resolution · 3D Image · Image Enhancement · Deblurring
1 Introduction

As a technology in the sub-field of digital signal processing, digital image processing refers to converting traditional image signals into digital signals and using software or hardware to process the image information. With the rapid development of modern science and technology, people gradually began to use image data to spread information, and digital image processing technology came into being [1]. After long-term development, it has been widely used in daily life, production and work, and has driven the progress of many related fields [2]. Early digital image processing aimed to improve image quality, but in practice the acquisition quality of the equipment is degraded by environmental conditions, so image processing operations are needed to produce clear, high-quality images. Nowadays, with the rapid development of artificial intelligence and deep learning, more
and more experts and scholars realize that image processing and computer vision will have great potential and an important position in future social progress. Reference [3] proposed a hyperspectral and multispectral image fusion algorithm based on coupled nonnegative matrix factorization with a minimum-volume constraint for a single object. In unmixing mixed pixels, the algorithm considers the physical meaning of the image and adds a minimum-volume constraint on the endmember simplex. The algorithm effectively overcomes the shortcomings of existing fusion algorithms and achieves accurate matching of the endmembers and abundances of hyperspectral and multispectral images. Reference [4] proposed a low-illumination image enhancement algorithm using a hybrid strategy of deep learning and image fusion. First, the best exposure component of the image is quickly estimated with an exposure component prediction model, and an overall moderately exposed image is obtained under the Retinex model framework. Then the low-illumination image itself and its overexposed version serve as correction and supplement images for the moderately exposed image in the fusion. Finally, local structural fusion and chrominance-weighted fusion mechanisms fuse the three prepared images into the final enhanced image. Reference [5] proposed an image enhancement algorithm based on illumination adjustment and depth-of-field difference. Based on Retinex theory, it uses the dark channel principle to obtain the depth of field and the SVD algorithm to cluster the image depth; after the sub-images are segmented, local haze concentration is estimated from the depth of field, and the sub-images are adaptively enhanced and fused. The gray distribution of low-resolution images is dense, with no obvious boundaries between gray levels. Therefore, to improve the visual effect of low-resolution images, the images must first be converted to make them more suitable for human observation and machine processing, so that information can be observed and extracted more conveniently. It thus remains important to enhance the useful information of low-resolution images obtained under different conditions. Based on this research background, this paper applies the artificial neural network to the design of a low-resolution 3D image enhancement method to improve the quality of low-resolution images.
2 Design of the Low-Resolution 3D Image Enhancement Method

2.1 Acquisition of Low-Resolution 3D Images

The machine vision system is used to collect low-resolution 3D images, and an adjustable filter is then used to automatically separate the collected images according to their behavior characteristics. The machine vision system is built to divide the multi-angle grid model evenly, and the filtering method is used to decompose the processed low-resolution 3D images at multiple scales. The specific operation is as follows:

$$x(n) = \frac{1}{\sqrt{N}} \sum_{k=0}^{N-1} X(k), \quad n = 0, 1, \cdots, N-1 \quad (1)$$
Among them, $X(k)$ is the ordered arrangement of the behavioral feature pixels of the machine vision system, and $N$ is the distance feature between the two queues.
The change gradient value of the low-resolution 3D image represents the quality of the image: the smaller the gradient value, the higher the image quality. It can be calculated by the following formula:

$$\tilde{g} = \frac{1}{M \times N} \sum_{i=0}^{M} \sum_{j=0}^{N} f(i, j) \quad (2)$$
In the above formula, $M$ is the total number of rows and $N$ the total number of columns of the low-resolution 3D image, and $f(i, j)$ is its contrast value. The entropy of the low-resolution 3D image reflects the effect of enhancement processing: the higher the entropy, the more information can be extracted from the image. The entropy is calculated as follows:

$$J = \sum_{m} \rho(m) \quad (3)$$

In the above formula, $\rho(m)$ is a single pixel of the low-resolution 3D image.
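The two quality indicators can be computed with NumPy. Note that the entropy code below assumes the usual Shannon form, since the extracted Formula 3 shows only a bare sum over ρ(m):

```python
import numpy as np

def mean_gradient(contrast):
    """Formula 2: mean of the contrast values f(i, j) over the M x N image."""
    return contrast.mean()

def entropy(img, bins=256):
    """Entropy J of the gray-level distribution (Shannon form assumed here)."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

img = np.random.rand(64, 64)
print(mean_gradient(img), entropy(img))
```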
The filtering method is used to separate the features of the collected low-resolution 3D images [6], and the image data extracted by the machine vision system are collected and analyzed. The acquisition and analysis process is as follows:
$$x(k+1) = f(x) + O_{x,y}\, V(k) \quad (4)$$
Among them, $O_{x,y}$ represents the grid model coordinates of the low-resolution 3D image acquired by the machine vision system, $f(x)$ the grid model of the original image, and $V(k)$ the characteristics of the low-resolution 3D image. Multi-angle template matching is performed according to the contrast difference of the collected image to obtain the intersected low-resolution 3D image feature unit:

$$\begin{cases} e = \dfrac{1}{|\nabla u|}\left(\dfrac{\partial u}{\partial y}\, i - \dfrac{\partial u}{\partial x}\, j\right) \\[2mm] f = \dfrac{1}{|\nabla u|}\left(\dfrac{\partial u}{\partial x}\, i + \dfrac{\partial u}{\partial y}\, j\right) \end{cases} \quad (5)$$

Among them, $\nabla u$ represents the multi-angle matching template. By adjusting the contrast and the feature correction, the contrast deviation equation of the low-resolution 3D image is obtained as:

$$\frac{\partial I(x, y; t)}{\partial t} = \frac{\partial^2 I(x, y; t)}{\partial \xi^2} + c^2\, \frac{\partial^2 I(x, y; t)}{\partial \eta^2} \quad (6)$$
where, ξ represents the adjusted contrast, and η represents the characteristic correction value.
The enhanced low resolution 3D image is processed by multi-scale optimization through the machine vision system to obtain the information space characteristics of the low resolution 3D image as follows:
$$\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} \xi \\ \eta \end{bmatrix} \quad (7)$$

Set the matching value of the enhanced image model to $K$ and the height of the enhanced image to $H$, and read the pixel-decomposition spectral information of the enhanced image to obtain the high-pixel information feature $c$. The calculation formula is as follows:
$$c = \exp\left(-\frac{1}{K}\, |\nabla u(x, y; t)|\right) \quad (8)$$

When $|\nabla u(x, y; t)|$ reaches its maximum, the contrast space feature value of the low-resolution 3D image is mapped through the high-pixel information feature. Setting $|\nabla u(x, y; t)| = 0$, the 2D data of the enhanced image are decomposed by the machine vision system, and the output formula is obtained from the image information feature decomposition along direction $f$:
$$\begin{bmatrix} p \\ q \end{bmatrix} = \begin{bmatrix} \cos\alpha & -\sin\alpha \\ \sin\alpha & \cos\alpha \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} \quad (9)$$

According to the above process, the acquisition of the low-resolution three-dimensional image is realized.

2.2 Correcting the Color of Low-Resolution 3D Images

The absorption of light by the medium degrades the color of low-resolution 3D images. Because red and orange light are absorbed almost completely, the acquired low-resolution 3D images generally show a blue-green tone. In order to eliminate this color deviation, the color of the low-resolution 3D images must be corrected. Color correction of low-resolution 3D images is mature; there are various white balance methods such as the Gray World, max-RGB, Shades of Gray and Gray Edge methods, whose principle is to correct the color deviation according to the color temperature of the image [7]. These methods are generally suited to ordinary color casts and handle severe color casts poorly. The white balance method in this paper is based on the max-RGB and Shades of Gray methods. The mathematical formula is as follows:

$$I_{wb}(x) = \frac{I(x)}{\chi \times \delta} \quad (10)$$
where $I_{wb}(x)$ represents the color-corrected image, $I(x)$ the original low-resolution 3D image, $\chi$ a constant, and $\delta$ the illumination color.
The white balance method in this paper adds a gain factor to the estimation of the illumination color, which can be expressed as:

$$\chi\,\delta = \max_x I(x) \times I(x)^{\frac{1}{p + \varphi}} \quad (11)$$
Among them, $\max_x I(x)$ represents the maximum value of each color channel of the original low-resolution 3D image, and the value of $\varphi$ varies in the range [0, 0.5]: the closer $\varphi$ is to 0, the brighter the processed image; conversely, the larger $\varphi$, the lower the brightness. The white balance method proposed here effectively eliminates the color deviation of low-resolution 3D images and obtains more natural color-corrected images. However, the color-corrected image still suffers from low contrast and blurring, so to obtain a better enhanced image, an image scene depth model is used to deblur it.
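Because the gain-factor formula above is reconstructed from a damaged source, the following NumPy sketch should be read only as an approximation of the described white balance, a max-RGB and Shades-of-Gray hybrid softened by the gain factor φ, with assumed parameter values:

```python
import numpy as np

def white_balance(img, p=6.0, phi=0.2, chi=1.0):
    """Per-channel illumination estimate blending max-RGB and Shades of Gray,
    softened by the gain factor phi (smaller phi -> brighter result)."""
    out = np.empty_like(img, dtype=np.float64)
    for c in range(3):
        ch = img[..., c].astype(np.float64)
        shades = np.mean(ch ** p) ** (1.0 / (p + phi))  # Shades-of-Gray term
        delta = ch.max() * shades                       # combined illumination
        out[..., c] = ch / (chi * delta + 1e-12)        # Formula 10
    return np.clip(out / out.max(), 0.0, 1.0)

img = np.random.rand(4, 4, 3)
print(white_balance(img).shape)  # (4, 4, 3)
```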
(12)
[Flowchart: start → color correction result of the low-resolution 3D image → scene depth artificial neural network model → scene depth map → transmittance map (with atmospheric light) → construct atmospheric scattering model → deblurring → output image → end]
Fig. 1. Deblurring process of low-resolution 3D images
Among them, $Y(x)$ represents the fog-free image and $I(x)$ the foggy image. Inverting the model gives:

$$Y(x) = \frac{I(x) - G}{e(x)} + G \quad (13)$$

In the low-resolution application scenario, the above model can be expressed as:

$$I_{ab}(x) = \frac{I_{wb}(x) - G}{e(x)} + G \quad (14)$$

In the formula, $I_{ab}(x)$ represents the deblurred map. It can be seen from the above formula that once the atmospheric light $G$ and the transmittance map $e(x)$ are estimated, the deblurred map can be obtained. For the atmospheric light $G$, this paper takes the average value of the three channels R, G and B of the color correction map. For the transmittance map $e(x)$, this paper estimates it with the scene depth artificial neural network model. The exponential relationship between the transmittance map $e(x)$ and the scene depth map $g(x)$ is:

$$e(x) = e^{-\beta g(x)} \quad (15)$$
where β is the atmospheric scattering coefficient. Considering the difficulty in estimating the value of the atmospheric scattering coefficient β, the maximum visible scene depth lmax is introduced. When the visible
scene depth reaches this maximum, the transmittance reaches its minimum $t_{min}$:

$$t_{min} = e^{-\beta l_{max}} \quad (16)$$

According to formulas (15) and (16), the formula for calculating the transmittance map $e(x)$ is obtained:

$$e(x) = \left(t_{min}\right)^{\frac{g(x)}{l_{max}}} \quad (17)$$

In the formula, $t_{min}$ represents the minimum transmittance; after comparison, this paper sets $t_{min}$ to 0.2. The above derivation transforms the problem of estimating the transmittance map into the problem of estimating the scene depth map, which therefore becomes a key step in the deblurring of low-resolution 3D images. In this paper, the scene depth artificial neural network model is used to estimate the scene depth map corresponding to the color correction map.
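As an illustration of Eqs. (14)–(17), the NumPy sketch below assumes a scene depth map is already available (in the paper it comes from the scene depth artificial neural network); normalizing the depth by its maximum to obtain $l_{max}$ is an assumption made here for concreteness.

```python
import numpy as np

def deblur(i_wb, depth, t_min=0.2):
    """Deblurring sketch following Eqs. (14)-(17).

    i_wb  : color-corrected image, float array (H, W, 3) in [0, 1].
    depth : scene depth map g(x), float array (H, W); assumed to come
            from the scene-depth network described in the text.
    t_min : minimum transmittance (the paper sets it to 0.2).
    """
    l_max = depth.max() + 1e-8                 # maximum visible scene depth
    e = t_min ** (depth / l_max)               # Eq. (17): e(x) = t_min^(g(x)/l_max)
    # Atmospheric light G: mean of the R, G, B channels of the corrected map.
    G = i_wb.reshape(-1, 3).mean(axis=0)
    # Eq. (14): invert the atmospheric scattering model.
    i_ab = (i_wb - G) / e[..., None] + G
    return np.clip(i_ab, 0.0, 1.0)
```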
2.4 Design of a Low-Resolution 3D Image Enhancement Algorithm Based on Artificial Neural Network

In view of the defects of current low-resolution 3D image enhancement algorithms, such as over-enhancement, unnatural effects, artifacts and excessive saturation, this paper combines the advantages of the color model transformation algorithm and the artificial neural network [10] to enhance the low-resolution 3D image. The processing flow of the enhancement algorithm is shown in Fig. 2.

For low-resolution 3D images, the conversion from RGB color space to HSI color space is expressed as:

$$\theta = \cos^{-1}\left( \frac{[(R-G)+(R-B)]/2}{\sqrt{(R-G)^{2}+(R-B)(G-B)}} \right) \quad (18)$$

$$H = \begin{cases} \theta & B \le G \\ 360^{\circ} - \theta & B > G \end{cases} \quad (19)$$

$$S = 1 - \frac{3 \cdot \min(R, G, B)}{R+G+B} \quad (20)$$

$$I = \frac{R+G+B}{3} \quad (21)$$

For the HSI space of low-resolution 3D images, the value ranges of the color components are $0^{\circ} \le H \le 360^{\circ}$, $0 \le S \le 1$, and $0 \le I \le 1$. Because the conversion from HSI color space back to RGB color space depends on the hue, it changes with the sector of $H$. For the RG sector ($0^{\circ} \le H < 120^{\circ}$), the conversion formula is:

$$R = I\left[ 1 + \frac{S \cos H}{\cos\left(60^{\circ} - H\right)} \right] \quad (22)$$
[Flowchart: start → input original low-resolution 3D image → RGB color space → color conversion → HSI color space (hue, brightness, saturation) → artificial neural network enhancement of the brightness component → color inverse conversion → enhanced low-resolution 3D image → output image → end]
Fig. 2. Flow chart of low-resolution 3D image enhancement algorithm
$$G = 3I - (R + B) \quad (23)$$

$$B = I(1 - S) \quad (24)$$
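A compact NumPy sketch of the RGB→HSI conversion in Eqs. (18)–(21) and the RG-sector inverse in Eqs. (22)–(24); it operates on scalar channel values in [0, 1] with H in degrees, and the small epsilon terms are added here only to avoid division by zero.

```python
import numpy as np

def rgb_to_hsi(r, g, b):
    """RGB -> HSI per Eqs. (18)-(21); r, g, b are floats in [0, 1]."""
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-8
    theta = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))   # Eq. (18)
    h = theta if b <= g else 360.0 - theta                         # Eq. (19)
    s = 1.0 - 3.0 * min(r, g, b) / (r + g + b + 1e-8)              # Eq. (20)
    i = (r + g + b) / 3.0                                          # Eq. (21)
    return h, s, i

def hsi_to_rgb_rg_sector(h, s, i):
    """Inverse conversion for the RG sector (0 <= H < 120), Eqs. (22)-(24)."""
    h_rad = np.radians(h)
    b = i * (1.0 - s)                                                      # Eq. (24)
    r = i * (1.0 + s * np.cos(h_rad) / np.cos(np.radians(60.0) - h_rad))  # Eq. (22)
    g = 3.0 * i - (r + b)                                                  # Eq. (23)
    return r, g, b
```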
The artificial neural network in this section performs a zero-padding operation before the color space conversion, so that the image size remains unchanged as it passes through the network. The specific steps are:

Step 1: Input and output layers. The input layer receives the luminance component of the low-resolution 3D image, and the output is the luminance component enhanced by the artificial neural network.

Step 2: Feature extraction. To learn the relationship between the low-resolution 3D image and its illumination map, overlapping pixel blocks are first extracted from the low-resolution 3D image, and each patch is represented as a high-dimensional vector through $m_1$ filters:

$$F_1(p) = \max(0, \partial_1 \times p + \ell_1) \quad (25)$$

where $p$ is an input image patch of size $m \times m$, and $\partial_1$ and $\ell_1$ are the weights and biases of the kernel functions in the artificial neural network. The size of $\partial_1$ is $m_1 \times f_1 \times f_1$, where $f_1$ is the size of a single kernel function and $m_1$ is the number of kernel functions; $\ell_1$ is an $m_1$-dimensional vector, each element of which is associated with a filter, and "$\times$"
represents the convolution operation. $\max(0, x)$ is a rectified linear unit (ReLU) used to speed up training convergence and improve network performance.

Step 3: Feature enhancement. Inspired by the use of feature enhancement layers in reducing compression artifacts, a feature enhancement layer is used to map "noisy" features to a relatively "clean" feature space, since low-resolution 3D images are usually affected by noise:

$$F_2(p) = \max(0, \partial_2 \times F_1(p) + \ell_2) \quad (26)$$

Step 4: Nonlinear mapping. Nonlinear mapping converts each high-dimensional vector into another high-dimensional vector, i.e., it converts $F_2(p)$ into $F_3(p)$:

$$F_3(p) = \max(0, \partial_3 \times F_2(p) + \ell_3) \quad (27)$$

where $\partial_3$ contains $m_3$ filters of size $m_3 \times f_3 \times f_3$, and $\ell_3$ is an $m_3$-dimensional vector.

Step 5: Image reconstruction. A convolutional layer is designed to aggregate image patches and generate the learned 3D image, transforming $F_3(p)$ into $F_4(p)$:

$$F_4(p) = \partial_4 \times F_3(p) + \ell_4 \quad (28)$$

where $\partial_4$ contains $m_4$ filters of size $m_4 \times f_4 \times f_4$, and $\ell_4$ is an $m_4$-dimensional vector. The unknown network parameters $\Theta = \{\partial_1, \partial_2, \partial_3, \partial_4, \ell_1, \ell_2, \ell_3, \ell_4\}$ are obtained through supervised learning by minimizing the mean squared error loss function:

$$L(\Theta) = \frac{1}{N} \sum_{i=1}^{N} \left\| F(P_i; \Theta) - W_i \right\|^{2} \quad (29)$$

Among them, $N$ is the number of image blocks in a training batch, $P_i$ is a low-resolution 3D image block, $W_i$ is the target image block corresponding to $P_i$, and $F(\cdot)$ is the learned mapping function.

To sum up, a low-resolution 3D image enhancement algorithm is designed using the color model transformation algorithm and the artificial neural network to achieve low-resolution 3D image enhancement.
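The paper's experiments run on TensorFlow 1.5 (Sect. 3.1); purely as an illustration, the sketch below expresses the four-stage network of Eqs. (25)–(28) in PyTorch, operating on the luminance channel with zero padding so the spatial size is preserved. The filter counts and kernel sizes are illustrative assumptions, not values taken from the paper, and the training objects mirror the MSE loss of Eq. (29) together with the momentum and step-decay settings described later in Sect. 3.2.

```python
import torch
import torch.nn as nn

class LuminanceEnhancer(nn.Module):
    """Four-stage network of Eqs. (25)-(28): feature extraction,
    feature enhancement, nonlinear mapping, and reconstruction.
    Zero padding keeps the spatial size unchanged, as in Sect. 2.4."""
    def __init__(self, m1=64, m2=32, m3=16, f1=9, f2=7, f3=1, f4=5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, m1, f1, padding=f1 // 2), nn.ReLU(),   # Eq. (25)
            nn.Conv2d(m1, m2, f2, padding=f2 // 2), nn.ReLU(),  # Eq. (26)
            nn.Conv2d(m2, m3, f3, padding=f3 // 2), nn.ReLU(),  # Eq. (27)
            nn.Conv2d(m3, 1, f4, padding=f4 // 2),              # Eq. (28), linear
        )

    def forward(self, y):   # y: (batch, 1, H, W) luminance patches
        return self.net(y)

# MSE loss (Eq. (29)) with momentum SGD and the step decay of Sect. 3.2
# (learning rate multiplied by 0.1 every 20 epochs).
model = LuminanceEnhancer()
opt = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
sched = torch.optim.lr_scheduler.StepLR(opt, step_size=20, gamma=0.1)
loss_fn = nn.MSELoss()
```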
3 Experimental Analysis

3.1 Experimental Environment

The software environment of this experiment is the Windows 10 operating system. The CPU is an Intel Core i7-3770 (4 cores, 8 threads), the memory is 48 GB, the graphics card is an NVIDIA GeForce GTX 1080, and the video memory is 8 GB. The experiment is based on the artificial neural network framework TensorFlow, version 1.5, with CUDA version 9.0 and cuDNN version 7.0.
3.2 Training Method

The input of the artificial neural network is a low-resolution 3D image, and the output is the corresponding normal 3D image. The training data is fed to the network in batches for training. The width and height of the input and output are consistent: the input size is $m \times n$ and the output size is $m \times n$. The optimizer uses gradient descent with momentum; the momentum coefficient is 0.9, the initial learning rate is 0.001, and the learning rate decays to 10% of its value every 20 epochs. The loss functions of the decomposition network, the low-resolution map enhancement network, and the reflectance map refinement network are minimized using the Adam optimizer.

3.3 Experimental Comparison

The enhancement algorithm based on the minimum volume constraint of simplex and the enhancement algorithm based on the combination of deep learning and image fusion are compared with the method in this paper to complete the performance verification.

3.4 Image Enhancement

Taking the low-resolution 3D image shown in Fig. 3 as the research object, the three methods are used to enhance it, and the results shown in Fig. 4 are obtained.
Fig. 3. Low-resolution 3D image
According to the results in Fig. 4, compared with the two comparison methods, the method in this paper has a better enhancement effect on low resolution 3D images and can improve the quality of low resolution 3D images.
[Figure panels, left to right: enhancement method based on the minimum volume constraint of simplex; enhancement method based on the deep learning and image fusion hybrid; method in this paper]
Fig. 4. Image enhancement experiment results
3.5 Performance Test

3.5.1 Experimental Indicators

In the experimental test, the signal-to-noise ratio, the structural similarity and the enhancement time are used as the indicators for evaluating the low-resolution 3D image enhancement methods. The calculation formulas are:

$$\varphi = 10 \times \lg \frac{255^{2}}{\left\| d - g \right\|_{2}^{2}} \quad (30)$$

$$S = \frac{4\chi_{d}\chi_{g}\chi_{dg}}{\left(\chi_{d}^{2} + \chi_{g}^{2}\right)\left(\delta_{d}^{2} + \delta_{g}^{2}\right)} \quad (31)$$

Among them, $\varphi$ is the signal-to-noise ratio index of the image (in dB), $S$ is the structural similarity index (reported as a value in $[0, 1]$), $d$ is the initial depth of the low-resolution 3D image, $g$ is the enhanced depth of the low-resolution 3D image, $\chi$ represents the corresponding mean, and $\delta^{2}$ the corresponding variance.
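A minimal NumPy sketch of the two indicators follows. Interpreting $\|d - g\|^2$ in Eq. (30) as the per-pixel mean squared error (the conventional PSNR reading) is an assumption, as is taking $\chi$ as the mean, $\delta^2$ as the variance and $\chi_{dg}$ as the covariance in Eq. (31).

```python
import numpy as np

def snr_db(d, g):
    """Eq. (30): phi = 10 * lg(255^2 / ||d - g||^2), read as per-pixel MSE."""
    mse = np.mean((d.astype(np.float64) - g.astype(np.float64)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / (mse + 1e-12))

def structural_similarity(d, g):
    """Eq. (31) with chi = mean, delta^2 = variance, chi_dg = covariance."""
    d = d.astype(np.float64).ravel()
    g = g.astype(np.float64).ravel()
    mu_d, mu_g = d.mean(), g.mean()
    var_d, var_g = d.var(), g.var()
    cov = np.mean((d - mu_d) * (g - mu_g))
    return (4 * mu_d * mu_g * cov) / ((mu_d**2 + mu_g**2) * (var_d + var_g) + 1e-12)
```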
3.5.2 Analysis of Results

In the experiment, 300 low-resolution 3D images were enhanced. Due to space limitations, 12 of them were selected for analysis. The SNR test results of the three low-resolution 3D image enhancement methods are shown in Table 1. From the results in Table 1, it can be seen that the signal-to-noise ratios obtained by the two comparison methods are relatively close to each other but far lower than those obtained by the method in this paper. The average signal-to-noise ratio of the method in this paper is 31.49 dB; by correcting the color of the low-resolution 3D image, the method makes the image converge in space and therefore achieves a better enhancement effect.
Table 1. Test results of signal-to-noise ratio of low-resolution 3D images (SNR/dB)

| Image number | Minimum volume constraint of simplex | Deep learning and image fusion hybrid | Method in this paper |
|---|---|---|---|
| 4.1.01 | 32.38 | 32.56 | 35.79 |
| 4.1.03 | 32.16 | 32.45 | 35.69 |
| 4.1.04 | 31.09 | 31.50 | 37.02 |
| 4.1.05 | 29.91 | 30.59 | 34.73 |
| 4.1.06 | 25.71 | 26.19 | 27.75 |
| 4.1.08 | 32.04 | 32.31 | 36.13 |
| 4.2.03 | 23.76 | 23.88 | 24.56 |
| 4.2.05 | 29.62 | 30.27 | 31.62 |
| 5.1.10 | 23.78 | 24.09 | 27.08 |
| 5.1.11 | 29.20 | 29.87 | 34.80 |
| 5.1.12 | 27.11 | 27.50 | 30.53 |
| 5.1.13 | 18.66 | 18.80 | 22.22 |
| Average value | 27.95 | 28.33 | 31.49 |
The structural similarity test results of the three low-resolution 3D image enhancement methods are shown in Table 2. From the results in Table 2, it can be seen that in the structural similarity test the results of the three methods are relatively close, but the method in this paper achieves better results on images 4.2.05 and 5.1.10. The enhancement effect is better because the method in this paper deblurs the low-resolution 3D image and thereby improves its resolution, so the images obtained by this algorithm are enhanced more effectively. Based on the above test results, in order to further verify the effectiveness of the method in terms of image enhancement time, an experimental analysis was conducted and the results shown in Fig. 5 were obtained. It can be seen from Fig. 5 that as the low-resolution 3D images become larger, the enhancement time of all three methods gradually increases. However, compared with the two comparison methods, the method in this paper requires the shortest time for image enhancement, which proves that it can effectively improve the enhancement speed of low-resolution 3D images.
Table 2. Structural similarity test results

| Image number | Minimum volume constraint of simplex | Deep learning and image fusion hybrid | Method in this paper |
|---|---|---|---|
| 4.1.01 | 0.97 | 0.96 | 0.95 |
| 4.1.03 | 0.95 | 0.98 | 0.94 |
| 4.1.04 | 0.90 | 0.95 | 0.97 |
| 4.1.05 | 0.97 | 0.96 | 0.94 |
| 4.1.06 | 0.95 | 0.97 | 0.96 |
| 4.1.08 | 0.90 | 0.90 | 0.97 |
| 4.2.03 | 0.86 | 0.88 | 0.85 |
| 4.2.05 | 0.93 | 0.97 | 0.98 |
| 5.1.10 | 0.98 | 0.95 | 0.98 |
| 5.1.11 | 0.98 | 0.90 | 0.96 |
| 5.1.12 | 0.93 | 0.94 | 0.94 |
| 5.1.13 | 0.90 | 0.92 | 0.97 |
| Average value | 0.935 | 0.94 | 0.95 |
[Line chart: image enhancement time (s) versus image size (0–4096), comparing the method in this paper, the hybrid enhancement method based on deep learning and image fusion, and the enhancement method based on the minimum volume constraint of simplex]
Fig. 5. Image enhancement time test results
4 Conclusion

In this paper, an artificial neural network is applied to the design of a low-resolution 3D image enhancement method. In the research process, image characteristics are classified intelligently with adjustable filters, and a multi-angle mesh model of the machine vision system is established. On this basis, the low-resolution image is decomposed at multiple scales, its color deviation is eliminated, and its quality is improved by color correction. Finally, the atmospheric scattering model is used to complete the image deblurring, and the self-learning capability of the artificial neural network and its ability to find optimal solutions at high speed are exploited to effectively enhance the low-resolution 3D image. Experimental results show that this method can improve the quality of low-resolution 3D images and has better performance. However, there are still many deficiencies in this research; in future work, we hope to introduce a clustering algorithm to classify low-resolution 3D images and further improve the image enhancement performance.
References

1. Yang, W., Zhang, Z., Cheng, H.: PNet: multi-level low-illumination image enhancement network based on attention mechanism. Appl. Res. Comput. 39(05), 1579–1585 (2022)
2. Jia, W., He, Q., Lu, R., Liang, H.: Image enhancement based on grayscale adjustment and histogram reconstruction. J. Taiyuan Univ. Sci. Technol. 42(06), 449–455 (2021)
3. Wang, Y., Zhu, R., Liu, B., et al.: Hyperspectral image resolution enhancement algorithm with minimum volume constraint. Electron. Opt. Control 26(1), 38–42 (2019)
4. Xu, S., Lin, Z., Zhang, G., et al.: A low-light image enhancement algorithm using the hybrid strategy of deep learning and image fusion. Acta Electron. Sin. 49(1), 72–76 (2021)
5. Li, D., Bao, J., Yuan, S., et al.: Image enhancement algorithm based on depth difference and illumination adjustment. Sci. Program. 2021(1), 1–10 (2021)
6. Sima, Z., Hu, F.: Low-light image enhancement method based on simulating multi-exposure fusion. J. Comput. Appl. 39(6), 1804–1809 (2019)
7. Cheng, Y., Deng, D., Yan, J., et al.: Weakly illuminated image enhancement algorithm based on convolutional neural network. J. Comput. Appl. 39(4), 1162–1169 (2019)
8. Qin, Z., Yang, J., Wang, H., et al.: Low illumination transmission line image enhancement method and application based on the Retinex theory. Power Syst. Prot. Control 49(03), 150–157 (2021)
9. Guan, Y.: Research on graphics texture rendering algorithm based on image enhancement. Inform. Technol. (11), 66–70 (2021)
10. Wan, F., Lei, G., Xu, L.: Edge enhancement algorithm of low illumination image based on step filter. Comput. Simul. 39(05), 220–224 (2022)
Adaptive Slot Allocation Method for Data Link of UAV Transmission Network

Zhijun Liu(B), Xin Zhang, and Mingfei Qu

College of Aeronautical Engineering, Beijing Polytechnic, Beijing 100176, China
[email protected]
Abstract. In order to prevent unordered data from occupying the data link organization of UAV transmission network for a long time, a more reasonable network slot layout environment should be built. Therefore, an adaptive time slot allocation method for the data link of UAV transmission network is proposed. Within the FH-OFDM network template, the carrier interference conditions and node interference conditions are solved respectively, and UAV transmission network networking model is constructed. The dynamic TDMA protocol text is then set, and the expression of the adaptive allocation requirement is defined according to the value range of the time slot parameters. The comparison results show that after applying this method the maximum transmission duration of unordered data in the data link organization does not exceed 25 ms, which helps keep the slot layout environment of UAV transmission network reasonable.

Keywords: UAV Transmission Network · Data Link · Slot Allocation · FH-OFDM Network Template · Carrier Interference · Node Interference · TDMA Protocol
1 Introduction

The data link network is used to transmit the data collected by information collection devices such as sensors to the cloud, and it mainly includes two parts: a number of nodes and a gateway. The nodes and the sensors are connected in one-to-one correspondence, so that the data collected by the sensors can be aggregated into the gateway through the nodes. In this way, a communication path is formed between each sensor and the gateway through at least one node. To ensure that all nodes can send their data to the gateway, the gateway allocates corresponding transmission and reception time slots to the nodes.

The data link adopts a non-central network structure and performs protocol control in time division multiple access mode; network members send and receive messages according to a time sequence. In order to respect the principle of fairness and allocate a certain amount of time to each network member, this time division multiple access method cuts time evenly along the axis and allocates slot resources evenly, with time slots as the basic unit of measurement and time frames
as the basic cycle. The ultimate goal is to make full, reasonable and effective use of slot resources. Because users introduce a certain delay when they autonomously reserve time slots, which is far greater than the delay of message transmission in the channel, reducing the slot reservation delay can effectively improve the performance of self-organized time division multiple access. Besides the idle-slot selection algorithm, slot reservation conflicts also directly affect the slot reservation delay of self-organized time division multiple access.

Due to their high mobility, flexible deployment and high-quality line-of-sight channels, UAVs can provide high-quality on-demand services for IoT devices at the edge of the network, for example acting as an air base station or a communication relay that provides communication or computing services for intelligent equipment on the ground. In such cases, UAVs can cooperate to form a self-organizing network with good mobility and flexible topology to serve IoT devices efficiently. In UAV networks, UAVs carrying wireless energy transmitters to provide more efficient wireless energy transmission for low-power intelligent devices on the ground have become a research hotspot. Taking advantage of its flexible mobility, a UAV equipped with a wireless energy transmitter can adjust its flight path at any time to shorten the distance to the target ground equipment, thereby improving the efficiency of wireless energy transmission. At the same time, the high-speed mobility of UAV can promptly meet sudden service demands of ground intelligent equipment and avoid the high cost of ground-deployed equipment. However, UAV-assisted wireless energy transmission networks have security and privacy issues:

1. Potential energy attacks, such as ground equipment refusing to receive energy or falsifying energy states, may make UAV wireless energy transmission inefficient or even cause a large amount of energy to be lost;
2. The traditional centralized energy transaction management method has problems such as poor scalability, single points of failure, high transaction costs and leakage of user privacy.

In order to solve the above problems, this study proposes a data link adaptive slot allocation method for UAV transmission network.
2 UAV Transmission Network Networking Model

The construction of UAV transmission network networking model is based on the FH-OFDM network template, and combines carrier interference conditions and node interference conditions to determine the specific layout of the data link organization. This chapter studies these contents.

2.1 FH-OFDM Network Template

The FH-OFDM template for UAV transmission network is developed from multi-carrier modulation. The serial-to-parallel conversion of the data stream makes the symbol period
of each subcarrier longer, which reduces the inter-symbol interference caused by the time dispersion of the wireless channel, lowers the complexity of equalization in the receiver, and eliminates the adverse effects of inter-symbol interference. The physical layer application structure of the FH-OFDM network template supports the dynamic allocation of subchannels. To let the physical layer support asymmetric high-speed data transmission, the OFDM hierarchical structure can allocate different numbers of subcarriers to achieve different transmission rates in the uplink and downlink, so the FH-OFDM network template is feasible. The data transmission rate can be significantly improved by increasing the number of UAV transmission subcarriers and using high bit-rate modulation. However, appropriate carrier numbers and modulation methods should be selected, taking into account the limitations of a practical OFDM system:

(A) Transmission network carrier behavior is susceptible to frequency deviations. For example, the Doppler frequency shift caused by the motion of UAV destroys the orthogonality of the subcarriers, resulting in inter-carrier interference. Since the theoretical model treats the collision interference mechanism of multiple access rather simply, further simulation studies of channel division show that in a frequency hopping system the number of concurrent transmissions is limited, and the transmission capacity does not grow without bound as the channel is divided further. Factors such as the limited number of network nodes, frequency hopping crossover overhead, and sub-channel capacity limitations restrict the number of node pairs that can actually transmit concurrently. The formula for solving the normalized frequency offset of the FH-OFDM template is:

$$\chi = \frac{p}{|T|^{2}} \quad (1)$$
where, p represents the frequency offset vector of UAV transmission subcarrier, and T represents the interval time of the network data link subcarrier. (B) The FH-OFDM template has a high peak to average power ratio. When multiple subcarriers have the same phase, the instantaneous power of the superimposed signal will be far greater than the average power of the signal, which may exceed the dynamic range of the amplifier in UAV transmitter, causing signal distortion. Adaptive frequency hopping technology is to build a local frequency hopping set by sensing the spectrum. In order to avoid conflicts with external networks and improve the success rate of channel access at the time of intersection, select channels with good quality according to a certain strategy through the historical channel state to form a frequency hopping set of future time slots. The construction of UAV transmission network needs to exchange a large amount of multi-dimensional sensor data, and it needs to obtain high data rate on limited spectrum resources. The establishment of a wide-band high-speed transmission network in the data link organization needs to solve: (a) Frequency selective fading: in the unknown operating environment where a large amount of sensing information needs to be obtained, it is often non-empty, so the transmission signal diffraction will produce multipath.
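As a back-of-envelope illustration of point (A), the snippet below uses the standard relations $f_d = v f_c / c$ for the worst-case Doppler shift and $\varepsilon = f_d T$ for the offset normalized to the subcarrier spacing $1/T$. The carrier frequency, UAV speed and symbol interval are assumed example values, not parameters from the paper.

```python
# Back-of-envelope Doppler check for an FH-OFDM link (illustrative values).
C = 3.0e8          # speed of light, m/s
f_c = 2.4e9        # carrier frequency, Hz (assumed)
v = 30.0           # UAV relative speed, m/s (assumed)
T = 66.7e-6        # OFDM symbol interval, s (assumed; spacing 1/T = 15 kHz)

f_d = v * f_c / C  # worst-case Doppler shift, Hz (~240 Hz here)
eps = f_d * T      # frequency offset normalized to the subcarrier spacing
print(f"Doppler shift {f_d:.0f} Hz, normalized offset {eps:.4f}")
# Inter-carrier interference grows with eps, which is why the text treats
# Doppler as a key constraint on the carrier behavior of the network.
```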
(b) Doppler frequency offset: When performing channel equalization on fading signals, the computational complexity of using multi-carrier modulation is smaller than that of single-carrier modulation. In the complex and changeable physical environment, UAV traveling node needs to quickly switch its own formation configuration, and the relative movement speed is large. Due to the Doppler effect, a non-negligible frequency offset is generated on the carrier, which destroys the orthogonality of the subcarriers. (c) Node competition interference: Due to the high dynamics of the topology, the scheduling-type time slot allocation mechanism cannot guarantee low latency, and the non-scheduling-type allocation mechanism needs to deal with the interference of concurrent transmissions of adjacent nodes, such as carrier waves. However, the uncertainty of UAV’s track makes the interference still occur after the backoff. Usually, a safe distance is maintained between the flying nodes to reduce the impact of competing interference, but it also limits the optimization of the topology to higher connectivity. The layout of FH-OFDM template for the complete UAV transmission network is shown in Fig. 1. The design principle of FH-OFDM template is to allocate UAV behavior data resources in frequency and time slots. The channel bandwidth is divided into multiple mutually exclusive subbands, and each subband contains multiple continuous subcarriers. Each subband can transmit independently in the same time slot. To apply the FH-OFDM template to the distributed UAV transmission network, the synchronization problem without a central coordinator must be solved first. Unless the system achieves an ideal clock synchronization, the pre negotiated frequency hopping pattern will not be used due to time synchronization. It is very important to solve the synchronization problem of distributed systems or support asynchronous FH protocols.
[Block diagram: FH-OFDM template for UAV transmission network — UAV motion behavior (Doppler shift) and network carrier behavior (subcarrier phase, orthogonal carrier behavior, instantaneous power) feed the UAV transmitter and signal amplifier, which connect to the data link structure, network data acquisition and the network application structure]
Fig. 1. UAV transmission network FH-OFDM template
2.2 Carrier Interference Conditions

In UAV transmission network, the channel switching overhead is much smaller than the time slot length, and the confluence channel serves as both the control channel and the data transmission channel. Therefore, how the carrier interference condition is defined directly affects the data link organization's carrying capacity for time slot parameters. Frequency hopping transmission is usually divided into two scenarios. In the first, after UAV transceiver completes time synchronization, both parties use the same frequency hopping sequence at all times; this method is mainly used between the ground station and a single UAV and has better anti-interference performance. In the second, the default listening channel of a device during a time slot is designated by its local frequency hopping sequence: the transmitting end randomly switches channels until it matches the listening channel of the potential receiver, i.e., frequency hopping intersection, and a connection is established. The transceiver completes the transmission of a small load within the same time slot of the channel intersection and no longer maintains synchronization in the next time slot; this mode is mainly applied to the secondary users of cognitive radio networks. Only at the time slot of the next intersection can new data be transmitted or retransmission occur. The data intersection expression in UAV transmission network is:

$$O = \frac{\sqrt{\dot{I}}}{\sum_{\alpha=1}^{+\infty} \sqrt{\dot{q}^{2} + \chi}\,\left(i_{\varepsilon} - i_{\delta}\right)} \quad (2)$$

where $\varepsilon$ and $\delta$ represent two unequal data intersection coefficients whose values belong to the range $(0, 1]$; $i_{\varepsilon}$ and $i_{\delta}$ represent UAV behavior data transmission vectors based on the coefficients $\varepsilon$ and $\delta$ respectively; $\alpha$ represents UAV behavior data coding parameter; $\dot{I}$ represents the carrier characteristic value of UAV behavior data in the transmission network; and $\dot{q}$ represents the frequency characteristic value of the behavior data.

Considering the dynamic behavior of the data link of UAV transmission network, the time consumed by both communication parties to complete the channel intersection in the worst case also affects the solution of the carrier interference condition. If the channel hopping sequences of both parties ensure that every channel of the network can become a confluence channel within one sequence period, the diversification of confluence channels reaches its maximum, which is called complete confluence; otherwise it is called partial confluence. The precondition for the carrier interference condition to exist is that UAV transmission network intersection strategy supports complete confluence. Multi-channel access copes better with external interference, preserves the concealment of sub-channel data, and significantly improves spectrum utilization when devices support multi-packet reception. The channel crossing time consumption is solved as:

$$t = \frac{\dot{q}^{\beta}\hat{u}}{\chi^{2}}\,\varphi^{\varepsilon - \delta} \quad (3)$$

where $\beta$ represents the definition coefficient of the channel hopping sequence, $\hat{u}$ represents the index parameter of UAV behavior data transfer, and $\varphi$ represents the frequency hopping weight value.
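The rendezvous behavior described above can be illustrated with a small Monte Carlo sketch: the receiver listens on its locally scheduled hopping channel while the transmitter switches channels at random until they coincide. The channel count and trial count are arbitrary; with uniform random hopping over C channels the expected waiting time is about C slots.

```python
import random

def slots_until_rendezvous(num_channels, rng):
    """Count time slots until the transmitter's random channel matches
    the receiver's locally scheduled listening channel."""
    slots = 0
    while True:
        slots += 1
        rx = rng.randrange(num_channels)   # receiver's local hopping sequence
        tx = rng.randrange(num_channels)   # transmitter switches randomly
        if rx == tx:                       # frequency hopping intersection
            return slots

rng = random.Random(1)
trials = [slots_until_rendezvous(16, rng) for _ in range(10000)]
print(sum(trials) / len(trials))  # close to 16 slots for 16 channels
```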
Combining formulas (2) and (3), the carrier interference condition expression of UAV transmission network is defined as:

$$U = \frac{\gamma^{\,1 - \ln\gamma' - \ln 2\dot{Y}y}}{\sqrt{\varphi O t}} \quad (4)$$
Among them, $\varphi$ represents the carrier transmission coefficient of the data samples in UAV transmission network, $y$ represents the average cumulative amount of the carrier vector in the data link, $\gamma$ represents the carrier behavior marker coefficient, $\gamma'$ represents the supplementary description condition of the coefficient $\gamma$, and $\dot{Y}$ represents the untranscoded eigenvalue of UAV behavior data. The transmitted unit data packet is divided into smaller burst packets, which are allocated to consecutive time slots, switching channels according to the frequency hopping sequence of the receiver. For the carrier interference condition to hold, the transmission and intersection of UAV behavior data must be guaranteed within a limited time, and the lower this upper bound, the better. Many indicators can measure whether UAV transmission network has carrier transmission capacity, but the solution results differ between indicators.

2.3 Node Interference Conditions

The carrier interference condition helps UAV transmission network cope with external interference, ensures the concealment of sub-channel data, and significantly improves spectrum utilization when devices support multi-packet reception. Time slot priority statistics belong to the multi-channel competitive multiple access protocols, which place certain requirements on network equipment: the transmitter supports frequency hopping switching, each channel is equipped with a receiver, and transmission and reception can proceed simultaneously; these are the node interference conditions. In terms of time resource allocation, carrier sense/collision avoidance is enabled on each channel to resolve packet collisions caused by random frequency hopping. The competitive access delay of this access control technology is far smaller than that of single-channel access, multiple packets can be transmitted on different channels without mutual interference, and the throughput is greatly improved. The detailed design of the protocol includes an adaptive channel coding rate, priority scheduling of data packets, etc.

For UAV transmission network, the data link adaptive slot allocation method aims to maximize the amount of data that can be transmitted concurrently at the same time; the network throughput is then maximal for the same topology connectivity. The interference diversity effect of multiple channels brings gains to the concurrent transmission density, but the frequency hopping intersection waiting delay limits the actual number of concurrent transmissions. Let $R$ represent the total gain of UAV traveling data in unit transmission time, $\dot{E}$ represent the eigenvalue of data transmission behavior gain, and $e_1, e_2, \cdots, e_n$ represent the
adaptive coding parameters of $n$ different data samples; let $\eta$ represent the data transfer efficiency of the data link organization for UAV traveling data and $\tilde{w}$ represent the data transfer characteristics. With the support of the above physical quantities, formula (4) can be combined to express the data throughput in UAV transmission network as:

$$Q = U \cdot \left( \frac{R^{2}\dot{E} \times \left(e_{1}^{2} + e_{2}^{2} + \cdots + e_{n}^{2}\right)}{2\eta^{2} \cdot (n!\tilde{w})} \right) \quad (5)$$

On the basis of formula (5), let $y$ represent the slot allocation parameter of the data samples, $\hat{y}$ the slot allocation feature of the data samples, $y'$ the transmission step value vector of information samples in the data allocation process, $\iota$ the slot allocation condition, $\iota'$ the supplementary value result of the coefficient $\iota$, $A$ the node calibration value, $S$ the unit accumulation of UAV traveling data in the transmission network, $a$ the link allocation index related to the coefficient $A$, and $s$ the link ratio index related to the coefficient $S$. With these physical quantities of the link, the expression of the node interference condition of UAV transmission network is derived as:

$$W = 2Q\left( \frac{\iota^{2y}\,\iota'^{2}\,\hat{y}\,y'}{\sqrt{\dfrac{A \times |S|}{a^{2} - s^{2}}}} \right) \quad (6)$$
The node interference condition considers that the transmission capacity analysis theory is an effective tool to analyze the concurrent transmission density of the multichannel network system. In addition, the asynchronous frequency hopping convergence algorithm of cognitive UAV ensures that the transmitting node and the potential receiving node can achieve access within a limited time. Subsequent research will carry out the optimization of transmission capacity from the theoretical transmission capacity and the number of concurrent transmissions of the frequency hopping system, which may provide an important reference for the development of UAV transmission network access technology.
3 Data Link Adaptive Time Slot Allocation Processing

On the basis of UAV transmission network architecture, the design and application of the data link adaptive slot allocation method are realized through the processing flow of dynamic TDMA protocol connection, slot parameter calculation and adaptive allocation demand analysis.

3.1 Dynamic TDMA Protocol

The dynamic TDMA protocol is a dynamic time slot allocation algorithm based on a fixed data link organization. A complete frame consists of three subframes: the Claim subframe and the Response subframe are used to exchange 1-hop and 2-hop neighborhood node information respectively, while Info subframes are used to transmit services. Each
subframe is divided into $N$ sub-slots, numbered from 1 to $N$ and corresponding to nodes 1 to $N$ respectively; node $M$ is the master node of the $M$-th Info time slot, and the $M$-th Info time slot is the main time slot of node $M$. The frame structure is shown in Fig. 2.
[Frame structure: a Claim subframe with sub-slots 1…N, a Response subframe with sub-slots 1…N, and an Info subframe with sub-slots 1…M…N; slot replacement, empty judgment and the time slot path operate on the Info sub-slots]
Fig. 2. Dynamic TDMA protocol frame structure definition form
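A minimal data-structure sketch of the frame in Fig. 2, assuming N nodes: the Claim and Response subframes carry 1-hop and 2-hop neighbor information, and Info sub-slot M belongs to master node M. The class and field names are illustrative, not from the paper.

```python
from typing import List, Optional

class Frame:
    """Dynamic TDMA frame per Fig. 2: Claim, Response and Info subframes,
    each divided into N sub-slots numbered 1..N (sub-slot M -> node M)."""

    def __init__(self, n_nodes: int):
        self.claim: List[Optional[dict]] = [None] * n_nodes     # 1-hop neighbor info
        self.response: List[Optional[dict]] = [None] * n_nodes  # 2-hop neighbor info
        self.info: List[Optional[bytes]] = [None] * n_nodes     # service payloads

    def send_info(self, node_id: int, payload: bytes) -> None:
        # The M-th Info sub-slot is the main time slot of node M.
        self.info[node_id - 1] = payload

frame = Frame(n_nodes=8)
frame.send_info(node_id=3, payload=b"telemetry")
```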
There are many TDMA protocols for UAV transmission network. In terms of frequency resource allocation, there are two access mechanisms, single channel and multi-channel; in terms of time resource allocation, there are also two, scheduling and competition. To address delay sensitivity, the adaptive slot allocation method proposes dynamic processing of the statistical priority multiple access protocol, which is the mainstream research object among the two types of UAV MANET access protocols in parallel with the time division multiple access protocol TDMA; the latter can ensure high throughput in delay-tolerant scenarios. Therefore, dynamic TDMA protocol control technology with low latency and high throughput is still developing. Let $\tilde{X}$ represent the frequency characteristic value of UAV traveling data resources, $\kappa$ the dynamic proportioning coefficient of the frequency parameter, $x_N$ the confidence condition based on node $N$, and $x_M$ the confidence condition based on node $M$. With the support of the above physical quantities, and combining formula (6), the dynamic TDMA protocol application capability expression is derived to satisfy formula (7):

$$Z = W\tilde{X}^{2} \sum_{N=0}^{+\infty} \sum_{M=0}^{+\infty} \frac{\sqrt{(1-\kappa) \cdot 2^{M}}}{N\left(x_{N} - x_{M}\right)} \quad (7)$$
Dynamic TDMA protocol combines the advantages of fixed allocation and dynamic access. It can still guarantee limited delay in a fast mobile environment, adapt to UAV
transmission network environment with large density changes, and realize spatial reuse of time slots in the data transmission phase. It also allows competing nodes to decide independently to occupy time slots through a small amount of information interaction, without deadlock. However, this protocol has two obvious shortcomings. First, if a node has no transmission demand in the time slot corresponding to its subframe on the data link path, that time slot is wasted; if many nodes in the network have no transmission demand, the overhead introduced in the node information interaction phase becomes very large, which severely degrades the performance of the protocol. Therefore, the dynamic TDMA protocol is a time slot allocation algorithm limited by network size and is suitable for small-scale UAV transmission network. Second, the priority allocation table of the network is preset, and newly added nodes have no chance to compete for time slots during network operation, which is fatal in a rapidly changing network environment and limits its application range.

3.2 Time Slot Parameters

UAV transmission network data link adopts a non-central network structure, and protocol control is performed in time division multiple access mode; the network members send and receive messages based on a time sequence. To respect the principle of fairness and allocate a certain amount of time to each network member, this time division multiple access method cuts time evenly along the axis, takes the time slot as the basic measurement unit and the time frame as the basic cycle period, and allocates the time slot resources evenly; the ultimate goal is to utilize the time slot resources fully, reasonably and effectively. Adaptive slot allocation assigns each network member certain time slots for message transmission, and each member can only send and receive messages in its allocated slots. The time slots are arranged in order within each network cycle; when a cycle ends, network members wait for the next cycle and then send and receive messages according to the sequence of allocated slots. The data link allocates time slots in the form of time slot blocks, expressed as "time slot group — start slot number — repetition rate". The repetition rate of data samples in the data link organization is calculated as:

$$c = \sum_{\kappa=1}^{+\infty} \frac{\lambda \cdot |V| / \dot{v}^{2}}{b \cdot Z} \quad (8)$$
Among them, $\kappa$ represents the minimum value result of the labeling coefficient of the time slot signal, $\lambda$ represents the data circulation coefficient in UAV transmission network, $\dot{v}$ represents the characteristic value of the data sequence in the cycle period, $V$ represents the cumulative amount of data samples in a cycle period, and $b$ represents the tolerance coefficient of UAV transmission network for the data sample parameters.
The fixed access method indicated by the time slot indicator is currently the most commonly used access method. It allocates time slots to network members, who receive and send messages according to the allocated slots; members may transmit only in their own slots, avoiding conflicts when sending messages. However, fixed allocation also easily causes unreasonable utilization of slot resources, especially when slot resources are scarce. Let $\hat{K}$ represent the conflict characteristics of UAV traveling data samples in the transmission network, $\mu$ the conflict behavior vector, $g$ the time slot interval parameter, $G$ the cumulative amount of time slot resources, and $f$ the distribution factor of data samples in UAV transmission network. With the support of the above physical quantities, formula (8) can be combined to define the time slot parameter calculation expression as:

$$B = 1 - \frac{c \cdot \hat{K}^{2}}{|\mu \cdot g|^{2}\, fG} \quad (9)$$
The time slot parameter places certain restrictions on the allocation ability of UAV transmission network data link. Although this indicator has a strict theoretical basis, an optimal or near-optimal solution cannot be found theoretically within a given time. In UAV transmission network, time slot resources are distributed periodically, and the number of time slots per cycle is fixed. If the network contains a limited number of nodes and each node generates random services, then as node traffic increases, the number of time slots required by the nodes may exceed the number actually available in the network, so within a cycle messages easily queue for time slot resources.

3.3 Adaptive Allocation Requirements

The data link organization of UAV transmission network resolves conflicts between data samples through slot allocation, so that nodes two hops apart can transmit control information in parallel in the same control information exchange slot. This makes good use of the space division multiplexing characteristics of wireless multi-hop networks, reduces protocol overhead and improves protocol efficiency. One constraint of the dynamic TDMA protocol is that its parameters are fixed and must be estimated in advance; this series of fixed parameters works well only when node density and mobility stay within a certain range. The protocol also confuses service priority with node priority and cannot guarantee QoS well, and under some network conditions information time slots are wasted. In the control information exchange subframe stage, each node sends the one-hop neighbor information it has collected to the cluster head through adaptive packets in its successful control slot. Since the network's users are not the only occupants of the spectrum, other devices in external networks may occupy channel resources, making a channel unavailable during frequency hopping convergence and causing access failure; in extreme cases there is no publicly available subchannel between nodes and no crossover occurs.
Let $l_{min}$ represent the minimum value of the subframe parameter in the cluster head timeslot organization, $l_{max}$ the maximum value of the subframe parameter, $\theta_{min}$ the adaptive frequency hopping coefficient matched with $l_{min}$, and $\theta_{max}$ the adaptive frequency hopping coefficient matched with $l_{max}$. Using the above physical quantities, the cluster head organization's carrying capacity for UAV travel data is:

$$H = \frac{\left\| l_{max} - l_{min} \right\|^{2}}{B\left(\theta_{max} - \theta_{min}\right)^{2}} \quad (10)$$

This technology controls how quickly nodes can access wireless channels. The coverage of a node is limited, and nodes outside the coverage cannot perceive ongoing communication when the sending node transmits a message. UAV transmission network therefore adopts multi-hop shared multi-point channels, so that nodes outside the coverage of the sending node, being unaware of and unaffected by the ongoing communication, can send messages at the same time; this is the multi-hop shared broadcast channel unique to this network system. Compared with common networks using shared broadcast channels, point-to-point wireless channels, or base-station-controlled wireless channels in cellular mobile communication systems, space division multiplexing can be achieved. It is assumed that cognitive radio equipment is configured on UAV node, which can sense interference from external networks and improve crossover efficiency through adaptive frequency hopping. On the basis of formula (10), the adaptive allocation demand is solved as:

$$F = \frac{\sqrt{j^{2}\rho \cdot H}}{\sum_{\vartheta=1}^{+\infty} \sqrt{\log\left|\varsigma h_{0}\right|}} \quad (11)$$

where $j$ represents the data sample reduction coefficient, $\rho$ represents the distribution density of UAV traveling data samples in the transmission network data link organization, $\vartheta$ represents the slot node coverage vector, $h_{0}$ represents the initial assignment of the adaptive matching parameters, and $\varsigma$ represents the slot sharing parameter.

The adaptive allocation requirement means that time slot users can access the channel at any time and determine channel access rights through direct competition, possibly without considering whether other users are transmitting, resolving collisions through random retransmission. Typical contention protocols are completely random multiple access protocols and carrier-sense-based multiple access. Except for slotted dynamic TDMA protocols, most contention protocols use an asynchronous communication mode. How to resolve collisions so that all colliding users eventually transmit successfully is a very important issue for contention-type protocols, and collision avoidance is also a key design issue that must be achieved through some form of control signaling.
4 Case Analysis

In order to highlight the application performance of UAV transmission network data link adaptive slot allocation method designed in this study, and to compare the practical differences between other convolutional-network-based link allocation methods and
frequency hopping channel allocation methods, the following comparison experiments are designed.

4.1 Experimental Environment

UAV transmission network shown in Fig. 3 is constructed as the experimental environment. WBSn-2400-O-UN equipment is selected as the base station element, and HTBGE-S1 equipment as the data switch element; the base station element and the data switch are connected by wide-area twisted pair. After the data sample output behavior stabilizes, 10 Phantom 4 Pro V2.0 UAV devices are connected to the transmission network system, and the specific values of the relevant experimental data are recorded without changing other experimental conditions.

[Topology diagram: base station equipment and data switch connected to receiver UAV devices numbered 1–10]
Fig. 3. UAV transmission network layout
The specific experimental process is as follows:

Step 1: The WBSn-2400-O-UN base station element is controlled using the adaptive slot allocation method of UAV transmission network data link so that the 10 receiver UAV devices reciprocate in the experimental area; a certain amount of unordered data is intercepted as the experimental object, and the transmission duration of these data samples in the data link organization is recorded.

Step 2: The traditional link allocation method based on convolutional network and the frequency hopping channel allocation method are selected as comparison methods to control the WBSn-2400-O-UN base station elements respectively; the above experimental steps are repeated and the transmission times of the selected unordered data samples in the data link organization are recorded.

Step 3: The variable data obtained are compared and the experimental rules are summarized.

4.2 Results and Discussion

Table 1 records the transmission times of unordered data in UAV transmission network after applying the different slot allocation methods.
Table 1. Duration of out-of-order data transmission (ms)

| Receiver UAV number | Adaptive time slot allocation method for data link (this paper) | Link allocation method based on convolutional network | Frequency hopping channel allocation method |
|---|---|---|---|
| 1 | 16 | 37 | 40 |
| 2 | 19 | 39 | 43 |
| 3 | 23 | 36 | 38 |
| 4 | 21 | 39 | 37 |
| 5 | 20 | 38 | 39 |
| 6 | 22 | 37 | 40 |
| 7 | 25 | 40 | 40 |
| 8 | 18 | 41 | 37 |
| 9 | 19 | 35 | 39 |
| 10 | 23 | 38 | 38 |
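The summary statistics quoted in the following paragraph can be reproduced directly from the Table 1 data:

```python
# Recomputing the summary statistics of Table 1 (transmission times in ms).
proposed = [16, 19, 23, 21, 20, 22, 25, 18, 19, 23]
conv_net = [37, 39, 36, 39, 38, 37, 40, 41, 35, 38]
freq_hop = [40, 43, 38, 37, 39, 40, 40, 37, 39, 38]

for name, data in [("proposed", proposed), ("convolutional", conv_net),
                   ("frequency hopping", freq_hop)]:
    print(f"{name:18s} mean={sum(data)/len(data):.1f} ms "
          f"min={min(data)} ms max={max(data)} ms")
# proposed: mean 20.6, max 25; convolutional: mean 38.0, max 41;
# frequency hopping: mean 39.1, max 43 -- matching the analysis below.
```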
After applying the method in this paper, the average transmission time of unordered data is 20.6 ms; the seventh UAV device takes the longest, 25 ms, and the first UAV device the shortest, 16 ms. After applying the link allocation method based on convolutional network, the average transmission time of unordered data is 38 ms, significantly higher than the method in this paper; the 8th UAV device takes the longest, 41 ms, and the 9th UAV device the shortest, 35 ms. After applying the frequency hopping channel allocation method, the average transmission time of unordered data is 39.1 ms, also significantly higher than the method in this paper; the second UAV device takes the longest, 43 ms, and the 8th UAV device the shortest, 37 ms.

To sum up, compared with the link allocation method based on convolutional network and the frequency hopping channel allocation method, the adaptive slot allocation method for UAV transmission network data link effectively controls the transmission time of unordered data in the data link organization, makes the network slot layout environment more reasonable, and meets the practical application requirements.
5 Conclusion

UAV transmission network is increasingly used in military, industrial and other fields. In unknown and complex working environments, data transmission interruptions of the network system are caused by many factors. When UAVs maintain a dense behavior mode, the density of slot nodes is large, and UAV multi-hop network faces access
difficulties. Due to the high dynamics of the topology, a non-adaptive network allocation method may not cope with short-term changes in neighborhood relationships; in particular, the coordinated transmission of control information on a single channel easily congests the preset public control channel. The anti-jamming capability of UAV link is very important to the security of flight control, and internal and external interference of the transmission network is the main reason for the limited capacity of flight ad hoc networks. The time slot allocation and optimization of the data link is therefore key to building UAV transmission network. This study proposed an adaptive time slot allocation method for UAV transmission network data link: based on the FH-OFDM network template, carrier interference condition constraints and node interference condition constraints are set respectively to establish UAV transmission network networking model; the dynamic TDMA protocol text is then set, and adaptive allocation is completed according to the slot parameters, achieving good application results.

Dynamic slot allocation and optimization will be a core technology for data links adapting to future UAV control, improving the application flexibility of data link systems and further promoting their supply efficiency. According to the prediction of the dynamic slot demand of the data link, an optimal slot allocation scheme is formulated: the optimal slot allocation process is regarded as a network planning process for slot resources, an optimal slot allocation model is established from this perspective, and the solution method of the model is analyzed with respect to the real-time characteristics of dynamic slot allocation.

Acknowledgement. School-level project of Beijing Polytechnic. Project name: Research on control system of multi-axis unmanned aerial vehicle with constant speed and variable pitch (CJGX2022-KY-010).
Cross Layer Method of Reliable Transmission in UAV Ad Hoc Network Based on Improved Ant Colony Algorithm

Xin Zhang(B), Zhijun Liu, and Mingfei Qu

College of Aeronautical Engineering, Beijing Polytechnic, Beijing 100176, China
[email protected]
Abstract. In order to improve the quality of communication between UAVs, a cross-layer reliable transmission method based on an improved ant colony algorithm is proposed in this study. According to the free space attenuation model, the signal strength threshold for receiving messages from neighbor nodes is defined. Then, the improved ant colony algorithm is used to calculate the path stability between the source node and the destination node, so as to perceive the stability of the reliable transmission path of UAV ad hoc network. The running speed of the buffer area is calculated according to the cache queue length of UAV ad hoc network, and congestion in the transmission process is detected. Finally, an energy consumption and transmission balance mechanism improves the routing process of data transmission between UAV formations, achieving reliable transmission in UAV ad hoc networks. The experimental results show that this method can effectively reduce the transmission outage probability; as the data transmission rate and UAV speed increase, it maintains a lower average delay and a higher delivery success rate.

Keywords: Improved Ant Colony Algorithm · UAV Ad Hoc Network · Reliable Transmission · Communication Protocol · Cross Layer Method
1 Introduction

At present, UAV applications cover many fields, including civil use, industrial production, and the military. UAVs have become an indispensable part of modern warfare, where they are mainly used for reconnaissance and surveillance, intelligence collection, communication relay, and rapid strikes on the battlefield. The modern battlefield is characterized by intense confrontation, wide coverage, and large amounts of information [1]. When multiple UAV formations perform tasks cooperatively, they can exchange mission planning, flight status, intelligence, and other data with each other through a UAV self-organizing network, improving the formation's perception of the real-time situation and achieving the overall effectiveness of multiple UAVs performing tasks independently. In a UAV self-organizing network, each UAV is treated as a node [2].

© ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2024
Published by Springer Nature Switzerland AG 2024. All Rights Reserved
B. Wang et al. (Eds.): ICMTEL 2023, LNICST 532, pp. 235–247, 2024.
https://doi.org/10.1007/978-3-031-50571-3_17
Reference [3] proposed a multiple access control (MAC) protocol supporting mixed-service transmission. Aiming at the QoS requirements of mixed services, the protocol adopts a multi-channel random access strategy for the highest-priority services and a multi-channel idle/busy access strategy for the remaining priorities. It controls the access rights of non-timeout packets in real time according to the threshold and channel occupancy, and provides QoS guarantees for each priority level through a multi-priority backoff mechanism based on channel idle/busy awareness. Reference [4] proposed a multi-priority single-threshold access control (MSAC) protocol. By designing a channel load statistics time-correction mechanism and a single-threshold channel access control mechanism, more accurate channel load statistics and packet-based load control can be achieved, so that the channel carrying capacity matches the actual channel load and channel resources are used to the fullest. Reference [5] proposed a cross-layer optimization technique that uses multiple description coding (MDC) to improve the QoS of video streaming applications in multi-radio wireless mesh networks (WMN), an emerging technology used to connect various types of networks to the Internet.

With the development of science and technology, UAV use has evolved from single-machine missions to formations of multiple UAVs completing tasks collaboratively. Owing to the dynamic characteristics of UAV formations, the rapid changes in network topology and in the wireless channel environment, together with scarce network bandwidth, confront the traditional layered architecture with new difficulties in improving network performance, which leaves broad space for cross-layer optimization. Introducing a cross-layer design to optimize the UAV network architecture allows the protocols of different layers to share network state information while maintaining the separation between layers. This facilitates intra-layer and inter-layer optimization, achieves effective allocation of network resources, and improves the comprehensive performance of the UAV self-organizing network.

Against this background, this paper uses an improved ant colony algorithm to design a cross-layer method for reliable transmission in UAV ad hoc networks, so as to meet the quality requirements of communication between UAVs. The general research ideas are as follows:

(1) According to the free-space attenuation model, the threshold value of signal strength for receiving messages from neighboring nodes is defined.
(2) The improved ant colony algorithm is used to calculate the path stability between the source node and the destination node in the ad hoc network.
(3) The running speed of the cache area is calculated according to the length of the cache queue in the ad hoc network, and congestion in the transmission process is detected.
(4) Through the energy consumption and transmission balance mechanism, the data transmission route selection process is optimized to achieve reliable transmission in the UAV ad hoc network.
2 Design of Cross-Layer Method for Reliable Transmission of UAV Ad Hoc Network

2.1 Perceive the Stability of the Reliable Transmission Path of UAV Ad Hoc Network

In a UAV ad hoc network, because UAVs move rapidly and each UAV carries a different task, the network topology changes quickly and link interruption is very common. Therefore, to avoid the degradation of network performance caused by link interruption, a link stability evaluation method is proposed. It can comprehensively evaluate the stability of a link, and it can also predict and delete in advance the links that may be interrupted, by receiving the signal strength of Hello messages and establishing a long-term monitoring mechanism for them [6]; this overcomes the shortcomings of previous methods and improves network performance. At the same time, to avoid the impact of GPS positioning errors or external interference on the method, this paper does not use node location information.

When a UAV formation reconnoiters the battlefield environment, mission needs often dictate a formation composed of different types of UAVs, so the speed, antenna gain, and maximum transmission power of each platform differ. Therefore, this paper extends the Hello message packet to include the source node address, node congestion, antenna gain, transmission power, and moving speed. According to the free-space attenuation model, the signal strength $p_{ri}$ of the Hello message received by the current node from neighbor node $i$ at distance $d$ is:

$$p_{ri} = p_t g_t g_r \left( \frac{\gamma}{4\pi d} \right)^2 \tag{1}$$
where $\gamma$ is the wavelength of the radio wave, $g_r$ is the gain of the receiving antenna, $g_t$ is the gain of the transmitting antenna, and $p_t$ is the Hello message transmission power of the neighbor node. Assume the coverage of the antenna is a circular area of radius $r$; within such an area, the average distance between two mobile nodes is $0.9054r$. Therefore, this paper defines the critical value of signal strength for receiving Hello messages from neighbor nodes as:

$$p_l = p_t g_t g_r \left( \frac{\gamma}{4\pi \cdot 0.9054 r} \right)^2 \tag{2}$$
The current node can calculate the critical value of the received signal strength from its own known information and the information contained in the neighbor node's Hello message, and compare it with the measured signal strength to make a first evaluation of the stability of the reliable transmission link of the UAV ad hoc network. The process is as follows:

$$K_i = \begin{cases} 0, & p_{ri} \le p_l \\[4pt] 1 - \dfrac{p_l}{p_{ri}}, & p_{ri} > p_l \end{cases} \tag{3}$$
The value of $K_i$ represents the stability of reliable transmission link $i$ of the UAV ad hoc network at the current moment. The larger $K_i$ is, the better the stability of the link, but it does not exceed 1. When $K_i$ is 0, the link is identified as an unstable reliable transmission link and is deleted from the storage area. To keep the $K_i$ value current, it must be updated continuously, and the update time interval $T$ is:

$$T = \frac{r - 0.9054r}{v_1 + v_2} \tag{4}$$
where $v_1$ is the maximum moving speed of the current node and $v_2$ is the maximum moving speed of the neighbor node. Because neighbor node types differ, the corresponding $T$ also differs. Updating the $K_i$ value at this interval ensures that no neighbor node in the storage area has moved out of its communication range within time $T$. This uses the node's storage space effectively, provides a reliable candidate set for the route discovery phase, and overcomes the drawback of a fixed Hello message period.

The above process only evaluates the stability of the reliable transmission link at a single moment; it does not establish a long-term mechanism for comprehensive evaluation. Therefore, this paper estimates the mobility of neighbor nodes according to changes in Hello message signal strength at different times [7] and deletes highly mobile neighbor nodes from the current node's storage area. The Bienaymé–Chebyshev inequality gives:

$$P\{|X - E(X)| < \phi\} \ge 1 - \frac{\mu(X)}{\phi^2} \tag{5}$$
where $X$ is a discrete variable, $E(X)$ is the mathematical expectation of $X$, $\mu(X)$ is the variance of $X$, and $\phi$ is any positive number. When $\mu(X) = 0$, $P\{|X - E(X)| < \phi\} = 1$, which means the variable $X$ equals its expected value; more generally, the smaller the variance of $X$, the closer $X$ is to its expectation and the smaller its variation. From multiple measurements of $X$, its variance $\mu(X)$ can be obtained as:

$$\mu(X) = \frac{\sum_k X_k^2}{N} - \left( \frac{\sum_k X_k}{N} \right)^2 \tag{6}$$

Taking the values of $K_i$ at different times as the repeated measurements of variable $X$ and substituting them into formula (6), we get:

$$\mathrm{var}(K_i) = \frac{\sum_{i=1}^{n} K_i(X)^2}{N} - \left( \frac{\sum_{i=1}^{n} K_i(X)}{N} \right)^2 \tag{7}$$

The mobility of a neighbor node can thus be obtained from formula (7). The smaller the value of $\mathrm{var}(K_i)$, the less pronounced the neighbor node's movement and the more stable link $i$ is.
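As a concrete view of this update rule, the following minimal Python sketch tracks the last five $K_i$ samples of a neighbor and applies the variance test of formula (7), together with the deletion rules described next; the class name, the threshold value, and the exceedance-counting details are illustrative assumptions rather than specifications from the paper.

```python
from collections import deque

VAR_THRESHOLD = 0.02  # preset threshold var_l (value assumed for illustration)

class NeighborRecord:
    """Tracks the last five K_i samples of one neighbor, as the paper stores."""

    def __init__(self):
        self.k_history = deque(maxlen=5)
        self.exceedances = 0  # times var(K_i) exceeded var_l

    @staticmethod
    def variance(samples):
        """Sample variance per formula (7): mean of squares minus squared mean."""
        n = len(samples)
        mean = sum(samples) / n
        return sum(k * k for k in samples) / n - mean * mean

    def update(self, k_i):
        """Returns True to keep the neighbor, False to delete it."""
        if k_i == 0.0:  # formula (3) marked the link unstable
            return False
        self.k_history.append(k_i)
        if len(self.k_history) >= 2 and self.variance(self.k_history) > VAR_THRESHOLD:
            self.exceedances += 1
        return self.exceedances < 3  # three exceedances signal strong mobility
```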
This method only judges the mobility of neighbor nodes in the current node's storage area. To sum up, the specific process of judging the stability of a reliable transmission link of the UAV ad hoc network is as follows. Each node numbers the neighboring nodes that enter its $0.9054r$ range, writes them into the storage area, and calculates the stability $K_i$ of each link at the current moment according to formula (3). When the $K_i$ value of a link is 0, the link is deleted from the storage area. When a link obtains two consecutive non-zero $K_i$ values, the node mobility of the link is judged, and this judgment continues as long as $K_i$ remains non-zero. When $\mathrm{var}(K_i)$ of a link exceeds the preset threshold $var_l$ three times, the neighbor node has strong mobility and the link is unstable, so the link is deleted from the current node's storage area. At the same time, to save storage space, only the $K_i$ values of the last five moments are kept in the node storage area.

In the route discovery stage, the most stable route is usually selected to transmit data, so the stability of a current link must be quantified comprehensively. This paper proposes the following comprehensive quantification:

$$Q_i(t_m) = \frac{K_i(t_m)}{e^{\mathrm{var}_m(K_i)}} \tag{8}$$
where $Q_i(t_m)$ is the comprehensive stability of link $i$ at time $t_m$, $K_i(t_m)$ is the $K_i$ value at time $t_m$, and $\mathrm{var}_m(K_i)$ is the $\mathrm{var}(K_i)$ value at time $t_m$. When a neighbor node enters the storage area of the current node for the first time, the value of $\mathrm{var}(K_i)$ is empty; in that case this method sets $\mathrm{var}_m(K_i) = 1$. The larger $Q_i(t_m)$ is, the higher the comprehensive stability of the reliable transmission link of the UAV ad hoc network.

Each routing control packet contains a value $W$. The source node sends a forward ant and sets the initial value of $W$ to 1. As the forward ant moves through the network, at every intermediate node it multiplies the link stability $Q_i$ between that node and the previous-hop node by the current $W$ value [8] to get a new $W$ value. When the ant reaches the destination node, the stability of the path from the source node to the destination node is obtained:

$$W = \prod_{j=1}^{n} Q_i^{\,j} \tag{9}$$

In the formula, $Q_i^{\,j}$ is the stability of each link passed by the forward ant during route discovery, and $n$ is the number of nodes passed by the forward ant.
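As an illustration, the following Python sketch strings formulas (1)–(3) and (9) together; the transmit power, antenna gains, carrier frequency, and radio range are illustrative assumptions, not values given in the paper.

```python
import math

def received_power(p_t, g_t, g_r, wavelength, d):
    """Received Hello-message power p_ri, formula (1)."""
    return p_t * g_t * g_r * (wavelength / (4 * math.pi * d)) ** 2

def critical_power(p_t, g_t, g_r, wavelength, r):
    """Critical threshold p_l at the mean neighbor distance 0.9054r, formula (2)."""
    return p_t * g_t * g_r * (wavelength / (4 * math.pi * 0.9054 * r)) ** 2

def link_stability(p_ri, p_l):
    """Instantaneous link stability K_i, formula (3)."""
    return 0.0 if p_ri <= p_l else 1.0 - p_l / p_ri

def path_stability(hop_qualities):
    """Forward-ant path stability W, the product of per-hop Q_i, formula (9)."""
    w = 1.0  # initial W carried by the forward ant
    for q in hop_qualities:
        w *= q
    return w

# Illustrative numbers: 1 W transmit power, unity gains, 2.4 GHz carrier,
# 8 km radio range, neighbor currently 5 km away.
wl = 3e8 / 2.4e9
p_l = critical_power(1.0, 1.0, 1.0, wl, 8000.0)
p_ri = received_power(1.0, 1.0, 1.0, wl, 5000.0)
print(link_stability(p_ri, p_l), path_stability([0.9, 0.8, 0.95]))
```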
2.2 Congestion Detection in Reliable Transmission Process of UAV Ad Hoc Network

In the process of information transmission, although a UAV continuously sends data information to its next-hop UAV, it also continuously receives
the transmitted data information from the previous-hop UAV. When the overall rate at which the drone receives data is much higher than the rate at which it sends, the load on the node increases, which causes congestion on the transmission path. By detecting the length of the UAV's cache queue, the speed at which its cache area approaches full load is calculated [9]. When this speed is positive, the faster it is, the more easily the UAV cache overflows and the higher the UAV's congestion degree; the slower it is, the lower the congestion. When the cache approaches full load at a negative speed, the UAV's currently available cache space is growing and no congestion will occur.

Let $L_{all}$ denote the length of the buffer area and $L_0$ the length of the buffer queue at the current moment; necessarily $L_{all} \ge L_0$. When $L_{all} = L_0$, the buffer area is full and congestion occurs, and by the DropTail principle the packets that arrive later are discarded. To avoid this situation, the method detects in advance that the buffer area is about to be full and feeds this status information back to the source drone to control the rate at which it sends data packets. When $L_{all} > L_0$, the buffer area is not full. The input rate $\lambda_{in}$ and output rate $\lambda_{out}$ of UAV data packets can be obtained from the time information provided by the MAC layer:

$$\lambda_{in} = \frac{1}{T_{in}} \tag{10}$$

$$\lambda_{out} = \frac{1}{T_{out}} \tag{11}$$

Among them, $T_{in}$ is the time interval between the arrival of two adjacent data packets, and $T_{out}$ is the time interval from when a data packet is ready to be transmitted until the ACK packet confirming its successful reception is received, covering contention, backoff waiting, data transmission, and collision retransmission. $T_{in}$ and $T_{out}$ are calculated from the time information recorded by the MAC layer; however, data transmission is random, so a single recorded result is not convincing. The weighted iterative average method is therefore used to reduce the error caused by this randomness:

$$T_{in}(n) = \sigma T_{in}(n-1) + (1-\sigma)\,[t_{in}(n) - t_{in}(n-1)] \tag{12}$$

$$T_{out}(m) = \varsigma T_{out}(m-1) + (1-\varsigma)\,[t_{ack}(m) - t_{ready}(m)] \tag{13}$$

where $T_{in}(n)$ and $T_{in}(n-1)$ are the weighted-average inter-arrival times computed when the $n$-th and $(n-1)$-th data packets arrive at the UAV, $T_{out}(m)$ and $T_{out}(m-1)$ are the weighted-average processing times computed when the $m$-th and $(m-1)$-th data packets are handled, $[t_{in}(n) - t_{in}(n-1)]$ is the actually recorded interval between the arrivals of the $n$-th and $(n-1)$-th packets, and $[t_{ack}(m) - t_{ready}(m)]$ is the actually recorded processing time of the $m$-th packet,
from the time it is ready to be transmitted to the time it is successfully received by the next-hop UAV. $\sigma$ and $\varsigma$ are weight coefficients; to reduce the measurement error, this paper takes $\sigma = \varsigma = 0.2$. During data transmission there may be long periods with no traffic, or congestion, which drives $T_{in}(n)$ and $T_{out}(m)$ toward infinity; this is not desired, so the infinite values in $T_{in}(n)$ and $T_{out}(m)$ must be eliminated to reduce the error. The detection values $\vartheta$ are:

$$\vartheta_1 = \frac{t_{in}(n) - t_{in}(n-1)}{Et_{in}} \tag{14}$$

$$\vartheta_2 = \frac{t_{ack}(m) - t_{ready}(m)}{Et_{out}} \tag{15}$$

where $Et_{in}$ is the expected time interval between two adjacent data packets arriving at the UAV, and $Et_{out}$ is the expected processing time from when a packet is ready to be transmitted to when it is successfully received by the next-hop UAV. Dividing the actual value by the expected value therefore determines whether there has been no data transmission for a long time, or a transmission failure due to congestion.
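The congestion check can be sketched in Python as follows; the class name, the outlier cutoff of 100 on the detection values, and the 0.8 queue-occupancy trigger are illustrative assumptions layered on formulas (10)–(15), not parameters fixed by the paper.

```python
SIGMA = 0.2  # weight coefficients sigma = varsigma = 0.2, as in the paper

class CongestionDetector:
    """Minimal sketch of the rate/buffer congestion check of formulas (10)-(15)."""

    def __init__(self, expected_t_in, expected_t_out, buffer_len):
        self.t_in = expected_t_in    # smoothed inter-arrival time T_in
        self.t_out = expected_t_out  # smoothed processing time T_out
        self.et_in = expected_t_in   # expected values Et_in and Et_out
        self.et_out = expected_t_out
        self.l_all = buffer_len      # total buffer length L_all

    def on_arrival(self, gap):
        """Formula (12), screening outliers via detection value (14)."""
        if gap / self.et_in < 100:   # drop near-infinite samples (cutoff assumed)
            self.t_in = SIGMA * self.t_in + (1 - SIGMA) * gap

    def on_delivery(self, t_ready, t_ack):
        """Formula (13), screening outliers via detection value (15)."""
        proc = t_ack - t_ready
        if proc / self.et_out < 100:
            self.t_out = SIGMA * self.t_out + (1 - SIGMA) * proc

    def is_congested(self, queue_len):
        """Positive buffer fill speed plus a nearly full queue L_0 -> congestion."""
        rate_in = 1.0 / self.t_in    # formula (10)
        rate_out = 1.0 / self.t_out  # formula (11)
        fill_speed = rate_in - rate_out
        return fill_speed > 0 and queue_len >= 0.8 * self.l_all  # 0.8 assumed
```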
2.3 Realization of Reliable Transmission Across Layers of UAV Ad Hoc Network

Given the large volume of data transmitted between UAVs, an adaptive power control mechanism alone cannot guarantee reliable data transmission; the data transmission route must also be selected reasonably according to the status of the data link. The following analyzes route selection from the perspectives of energy consumption and transmission balance to realize cross-layer transmission. Let $H_T$ denote the energy consumption of UAV $i$ transmitting $\alpha$-bit data over the data link:

$$H_T = \begin{cases} \alpha E_{elec} + \alpha \partial_{fs} d^2, & d \le R \\ \alpha E_{elec} + \alpha \partial_{amp} d^4, & d > R \end{cases} \tag{16}$$
α × Eelec + α × ∂fs × d 2 , d R (16) HT = α × Eelec + α × ∂amp × d 4 , d > R HR = αEelec
(17)
The energy consumption of the source drone is $H_s = H_T$, the energy consumption of the destination drone is $H_d = H_R$, and the energy consumption $H_p$ of a relay drone $p$ is:

$$H_p = H_R + H_T \tag{18}$$
where $E_{elec}$ is the energy loss for transmitting or receiving one bit of information, $\alpha$ is the number of bits, and $d$ is the distance between two UAVs. Since electromagnetic wave power decreases as the distance between UAVs increases, $\partial_{fs}$ and $\partial_{amp}$ are the power consumption coefficients of the power amplification circuit for the free-space model and the multipath fading model, respectively [10].
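A minimal Python sketch of this energy model follows; the coefficient values are illustrative (the paper gives no numbers), and the per-route combination anticipates the energy factor $H_{std}$ of formula (19) introduced next.

```python
E_ELEC = 50e-9      # J/bit electronics loss E_elec (illustrative value)
K_FS = 10e-12       # free-space amplifier coefficient (illustrative)
K_AMP = 0.0013e-12  # multipath amplifier coefficient (illustrative)
R = 1000.0          # model switch-over distance in meters (illustrative)

def tx_energy(alpha, d):
    """Transmission energy H_T, formula (16)."""
    if d <= R:
        return alpha * E_ELEC + alpha * K_FS * d ** 2
    return alpha * E_ELEC + alpha * K_AMP * d ** 4

def rx_energy(alpha):
    """Reception energy H_R, formula (17)."""
    return alpha * E_ELEC

def relay_energy(alpha, d):
    """Relay energy H_p = H_R + H_T, formula (18)."""
    return rx_energy(alpha) + tx_energy(alpha, d)

def route_energy_factor(alpha, hop_distances):
    """Per-route energy factor in the spirit of formula (19) below:
    source and destination terms combined with the relay terms."""
    factor = tx_energy(alpha, hop_distances[0]) * rx_energy(alpha)
    for d in hop_distances[1:]:
        factor *= relay_energy(alpha, d)
    return factor

# The route with the lowest energy factor is preferred.
routes = {"route_a": [400.0, 600.0, 500.0], "route_b": [1200.0, 900.0]}
best = min(routes, key=lambda r: route_energy_factor(1024, routes[r]))
```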
Every time a drone is passed, an $H$ value is calculated and recorded in that drone's routing table. Over the $N$ hops from the source UAV to the destination UAV, the energy factor of the link is obtained as:

$$H_{std} = H_s \times H_d \times \prod_{n \in N-2} H_n \tag{19}$$

$H_{std}$ above is used as the energy-based link selection standard: among multiple links to the same destination UAV, the link with the lowest $H_{std}$ value is selected as the data transmission link.
Let $U_{ik}$ denote the transmission factor of data $k\ (k \in \{A, B, C\})$ at UAV $i$, which describes the load situation at UAV $i$; $ML_i$ denotes the maximum load space at UAV $i$, and $PL_{ik}$ the load of UAV $i$ processing data $k$. The load $PL_i$ is jointly determined by UAV $i$ and its neighboring UAVs:

$$PL_i = SL_i(x) + NL_i \tag{20}$$

$$SL_i(x) = SL_i(x-1) \times \xi + SQ_i \times (1-\xi) \tag{21}$$

$$NL_i = SL_{i+1} \tag{22}$$
where $SL_i(x)$ is the current load of UAV $i$, $SL_i(x-1)$ is its load at the previous moment, $SQ_i$ is the MAC-layer queue length of the current UAV $i$, and $NL_i$ is the load of UAV $i$'s neighbor, i.e., the load of the next-hop UAV itself; $\xi\ (\xi \in [0,1])$ is the smoothing factor, taken as 0.2 in this paper. If data A is generated at UAV $i$, the current communication channel is completely preempted and no other data is selected for transmission through this UAV at that time; its transfer factor is set to $U_{iA} = 1$. The transfer factor of data B is:

$$U_{iB} = \frac{PL_{iB}}{ML_i} \times (1 - U_{iA}) \tag{23}$$
The factor $1 - U_{iA}$ multiplies $PL_{iB}/ML_i$ to avoid continuing to select this UAV to transmit data B when data A is generated: $U_{iA} = 1$ means data A almost completely occupies the link at UAV $i$, while $U_{iA} = 0$ means no data A is generated at UAV $i$ and data B is transmitted normally. The total transmission factor $U_{stdB}$ on the link from the source UAV to the destination UAV is:

$$U_{stdB} = \sum_{i \in N} U_{iB} \tag{24}$$
The transfer factor of data C is:

$$U_{iC} = \frac{PL_{iC}}{ML_i} \times (1 - U_{iB}) \tag{25}$$
The reasoning behind this form is similar to the above: if no data A or B is transmitted at UAV $i$, then $U_{iA}$ and $U_{iB}$ are both 0. The total transmission factor $U_{stdC}$ on the link from the source UAV to the destination UAV is:

$$U_{stdC} = \sum_{i \in N} U_{iC} \tag{26}$$
$U_{stdk}$ captures the link conditions for data transmission: the smaller $U_{stdk}$ is, the smaller the transmission load of the link, and selecting such a link avoids congestion and reduces large delays and high packet loss rates. Under the adjustment of the priority function, the priority of data C is gradually increased; when it exceeds that of data B, only the subscripts in formulas (23), (24), (25) and (26) need to be exchanged, and when the elevated priority of data C is cancelled, the subscripts are swapped back. When no data is sent from the source drone, the values of $U$ and $H$ are both 0, which ensures that the source drone has enough energy and cache to process the message. In summary, when data $k\ (k \in \{A, B, C\})$ is transmitted on a given link, the joint weight based on energy and transmission balance is:

$$W_{stdk} = \begin{cases} \alpha \times e^{-H_{std}} + \beta \times e^{-U_{stdk}}, & k \in \{B, C\} \\ \alpha \times e^{-H_s \times H_d} + \beta \times e^{-1}, & k \in \{A\} \end{cases} \tag{27}$$

In the formula, $\alpha$ and $\beta$ are the weights of the two determinants with $\alpha + \beta = 1$; this paper takes $\alpha = 0.1$ and $\beta = 0.9$. An additional field is added to the Hello message to store the $U$ and $H$ values. A drone loads its $U$ and $H$ values into the Hello message before sending it to the next-hop drone. When a drone receives a Hello message, it extracts the $U$ and $H$ values to update its routing table, calculates the $W$ value, and selects the path with the largest $W$ value to transmit data. Once the distance between two UAVs falls below the safety distance, data A with the highest priority is generated; the data link between them is then occupied and lower-priority data cannot be transmitted on it. The other two types of data are routed over other links according to the updated transmission information of the relevant links, and the priority adjustment function prevents low-priority data from "starving" while making full use of bandwidth resources.
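The route choice by joint weight can be sketched as follows; $\alpha = 0.1$ and $\beta = 0.9$ follow the paper, while the candidate links and their $(H_{std}, U_{stdk})$ values are illustrative assumptions.

```python
import math

ALPHA, BETA = 0.1, 0.9  # weights from the paper, alpha + beta = 1

def joint_weight(k, h_std, u_std, h_s=None, h_d=None):
    """Joint weight W_stdk per formula (27); k is 'A', 'B' or 'C'."""
    if k == "A":
        return ALPHA * math.exp(-h_s * h_d) + BETA * math.exp(-1.0)
    return ALPHA * math.exp(-h_std) + BETA * math.exp(-u_std)

# Candidate links for data B, each with its (H_std, U_stdB); values illustrative.
candidates = {
    "link_1": (0.8, 1.4),
    "link_2": (1.1, 0.6),
    "link_3": (0.5, 2.3),
}
best_link = max(candidates,
                key=lambda name: joint_weight("B", *candidates[name]))
print(best_link)  # the path with the largest W value carries the data
```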
3 Experimental Comparative Analysis

3.1 Set the Experimental Parameters

The information transmission of the UAV self-organizing network is simulated, with multiple UAVs distributed in a 50 km × 50 km square area. The other parameter settings are shown in Table 1.
Table 1. Experimental parameters

Parameter                Size                      | Parameter               Size
MAC protocol             IEEE 802.11               | RTS threshold/byte      550
Agent type               TCP                       | Transmission type       FTP
Data packet              [64, 10000]               | Channel rate (M/b)      1
SIFS/µs                  10                        | DIFS/µs                 50
Antenna type             Omnidirectional antenna   | Interface queue type    DropTail
UAV flight speed (m/s)   20~70                     | Communication range     8
Simulation time/s        1000                      | Number of drones        30
3.2 Analysis of Results

To verify the superiority of the proposed method, it is compared with the cross-layer method based on supporting mixed service transmission described in reference [3] and the cross-layer method based on multi-priority single-threshold access described in reference [4]. After applying the different methods, the performance of the outage probability as a function of the signal-to-noise ratio (SNR) is shown in Fig. 1. As Fig. 1 shows, the outage probability decreases as the SNR increases, and the outage probability of the proposed method is lower than that of the cross-layer method based on supporting mixed service transmission and the cross-layer method based on multi-priority single-threshold access by about 42% and 63%, respectively. This is because the proposed method uses the improved ant colony algorithm to perceive the stability of the reliable transmission path of the UAV ad hoc network, selects the communication mode with the best channel conditions for data transmission, and applies dynamic congestion detection in advance in its congestion control, effectively reducing interruptions.

Fig. 1. Performance of outage probability as a function of signal-to-noise ratio
After applying the different methods, the performance of the packet delivery success rate as a function of the per-connection data transmission rate is shown in Fig. 2. As Fig. 2 shows, under the same conditions, the packet delivery success rate of the proposed method is about 6% and 9% higher than that of the cross-layer method based on supporting mixed service transmission and the cross-layer method based on multi-priority single-threshold access, respectively. In this method, congestion is predicted from the time at which packets enter the cache queue during data transmission, and the sliding window of the source UAV is controlled in advance to relieve congestion, which improves the packet delivery success rate to a certain extent.
Fig. 2. Performance of packet delivery success rate versus connection data transmission rate
The performance of the packet delivery success rate as a function of the maximum UAV speed is shown in Fig. 3. As Fig. 3 shows, as the maximum UAV speed increases, the possibility of link interruption between UAVs also increases. The proposed method outperforms the cross-layer method based on supporting mixed service transmission and the cross-layer method based on multi-priority single-threshold access by about 7% and 11%, respectively. This is because, during data transmission, the proposed method selects a more appropriate transmission mode according to the channel conditions and performs congestion prediction, which slows the decline of the packet delivery success rate to a certain extent.

The performance of the average delay as a function of the maximum UAV speed is shown in Fig. 4. As Fig. 4 shows, as the maximum UAV speed increases, the average delay of the proposed method remains about 12% and 18% lower than that of the cross-layer method based on supporting mixed service transmission and the cross-layer method based on multi-priority single-threshold access, respectively, under the same conditions.
Fig. 3. Performance of packet delivery success rate with the maximum rate of UAV
As the maximum UAV speed increases, it is necessary not only to maintain congestion control during data transmission but also to minimize the number of link interruptions so that data can be transmitted quickly. The proposed method follows this idea, taking both interruption and congestion control into account to ensure the stability of UAV self-organizing network communication.
Fig. 4. Performance of average delay versus maximum UAV rate
4 Conclusion

In this paper, the improved ant colony algorithm is applied to the design of a cross-layer method for reliable transmission in UAV ad hoc networks. The improved ant colony algorithm calculates the path stability between the source node and the destination node of the ad hoc network. According to the calculation results and the running
speed of the ad hoc network buffer, congestion in the transmission process is detected. Finally, through the energy consumption and transmission balance mechanism, the route selection process for data transmission between UAV formations is improved to achieve reliable transmission. The experimental results show that this method improves the performance of the UAV ad hoc network to a certain extent and ensures the reliability of communication. Nevertheless, there remain aspects of this study to optimize: future research will consider throughput during data transmission and optimize cross-layer cooperation, so as to improve network throughput while preserving the achieved delay and delivery success rate.

Acknowledgement. School-level project of Beijing Polytechnic, Project Name: Research on control system of multi-axis unmanned aerial vehicle with constant speed and variable pitch (CJGX2022-KY-010).
References

1. Liu, S., Wang, S., Liu, X., et al.: Fuzzy detection aided real-time and robust visual tracking under complex environments. IEEE Trans. Fuzzy Syst. 29(1), 90–102 (2021)
2. Yang, T., He, Z., Sun, Z., et al.: Research on cross-layer resource allocation algorithm for concurrent multi-service in broadband power line carrier communication. Power Syst. Technol. 45(08), 3257–3267 (2021)
3. Liu, W., Zhang, H., Zheng, B., et al.: A novel media access control protocol for multiple priority services in FANETs. Acta Armamentarii 40(4), 820–828 (2019)
4. Ren, Z., Yang, D., Hu, C., et al.: A single threshold access protocol for UAV ad-hoc network with high channel utilization rate. Comput. Eng. 47(2), 206–211 (2021)
5. Narayan, D.G., Naravani, M., Shinde, S.: Cross-layer optimization for video transmission using MDC in wireless mesh networks. Procedia Comput. Sci. 171, 282–291 (2020)
6. Ma, D., Feng, Z., Qin, Y.: Optimization simulation of communication node area based on energy balance. Comput. Simul. 39(04), 193–196 (2022)
7. Liu, S., Liu, D., Muhammad, K., Ding, W.: Effective template update mechanism in visual tracking with background clutter. Neurocomputing 458, 615–625 (2021)
8. Li, C., Wang, J., Li, M.: Data transmission cross-layer optimization of wireless sensor networks based on compressive sensing. Control Decis. 34(9), 2031–2035 (2019)
9. Liu, S., Wang, S., Liu, X., et al.: Human memory update strategy: a multilayer template update mechanism for remote visual monitoring. IEEE Trans. Multimedia 23, 2188–2198 (2021)
10. Wang, R., Wang, W., Bai, L., et al.: Cooperative access protocol for UAV ad-hoc network based on dynamic relay selection. J. Data Acquisition Process. 37(1), 228–239 (2022)
Leakage and Discharge Fault Detection Technology of Subway Electromechanical Equipment Based on Big Data Analysis

Wuguang Wang1(B) and Xingfei Ma2

1 WuXi City College of Vocational Technology, Wuxi 214153, China
[email protected]
2 Wuxi Vocational Institute of Commerce, Wuxi 214153, China
Abstract. Owing to defects in its own components and the influence of external factors, subway electromechanical equipment is prone to leakage and discharge faults that threaten the operational safety of the subway. Therefore, a leakage and discharge fault detection technology for subway electromechanical equipment based on big data analysis is proposed. Appropriate sensors are selected to integrate the operating signals of the electromechanical equipment, and the causes and specific types of leakage and discharge faults are explored. On this basis, big data analysis technology is applied: a wavelet transform algorithm extracts the operating signal characteristics and leakage discharge fault signal characteristics of the subway electromechanical equipment, and a support vector machine (SVM) algorithm with an appropriately selected kernel function yields the optimal classification hyperplane, thereby realizing accurate detection of leakage and discharge faults of the electromechanical equipment. The experimental data show that the maximum success rate of leakage and discharge fault detection using this technology is 95.10%, which fully demonstrates its superior leakage discharge detection performance.

Keywords: Big Data Analytics · Mechanical And Electrical Equipment · The Subway · Leakage Discharge Fault · Feature Extraction · Fault Detection
1 Introduction

The subway is composed of a variety of electromechanical equipment. Electromechanical equipment is the general term for electrical and mechanical equipment involving the mutual conversion of electricity and other forms of energy. Subway electromechanical equipment mainly includes escalators, AFC (automatic fare collection) systems, screen doors, automatic doors, vehicle air conditioners, central air conditioners, ventilation equipment, water supply and drainage equipment, fire sprinkler systems, power control systems, etc. [1].

Topic: A Toothbrush Front and Back Automatic Recognition Control System Based on Audio Characteristics, number: WXCY-2022-KY-09.
© ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2024
Published by Springer Nature Switzerland AG 2024. All Rights Reserved
B. Wang et al. (Eds.): ICMTEL 2023, LNICST 532, pp. 248–261, 2024.
https://doi.org/10.1007/978-3-031-50571-3_18
Electromechanical equipment is spread across the various systems of the subway, and their cooperation supports the safe operation of the subway. Among all subway installations, electromechanical equipment is one of the key components, and its stable operation is of great significance for the safe and economical operation of the subway. Insulation is one of the important conditions indicating whether electromechanical equipment can operate stably. Since leakage and discharge faults are among the factors that age the insulation structure of electromechanical equipment, accurately monitoring such faults is of great significance and value for improving the economical and reliable operation of subways.

Statistical data show that failures of subway electromechanical equipment are still dominated by insulation failure. Sustained leakage and discharge faults not only burn and damage the insulation area but also cause short circuits between turns and layers. Tests on subway electromechanical equipment show that rapid detection of leakage and discharge faults can reflect the insulation resistance of the equipment in advance and can quickly reveal insulation faults to prevent accidents. Studying leakage and discharge fault detection technology for subway electromechanical equipment is therefore an urgent problem.

At present, researchers in related fields have studied leakage and discharge fault detection for subway electromechanical equipment. Reference [2] proposes a non-contact detection terminal design based on the transient voltage method for partial leakage and discharge faults of subway underground power supply and distribution equipment. The transient ground voltage method has a simple and reliable structure, high operating speed, and strong anti-interference ability; combined with a second-order digital filter to suppress circuit clutter, it realizes accurate detection of partial leakage discharge. Reference [3] proposes to start from measured data at the station site and, combined with the power supply mode of each load and the internal power distribution mode of the equipment, analyze the main causes of leakage current and propose corresponding improvement measures, in order to fundamentally solve the excessive leakage current of the station power distribution system and ensure the safe, green, and efficient operation of the subway electrical system. Leakage and discharge fault detection is thus clearly very important to the safe operation of the subway, which motivates this research on leakage and discharge fault detection technology for subway electromechanical equipment based on big data analysis.

First, the causes and specific types of leakage and discharge faults of subway electromechanical equipment are analyzed, and appropriate sensors are selected to collect the operating signals of the electromechanical equipment.
The wavelet transform algorithm is then used to extract the operating signal characteristics and leakage discharge fault signal characteristics of the subway electromechanical equipment, and the support vector machine (SVM) algorithm is used to obtain the optimal classification hyperplane that accurately classifies the leakage discharge fault characteristics, thereby realizing accurate detection of leakage and discharge faults of the electromechanical equipment.
2 Research on Leakage and Discharge Fault Detection Technology of Subway Electromechanical Equipment

The overall model of the leakage and discharge fault detection technology for subway electromechanical equipment is shown in Fig. 1.
Fig. 1. Overall model diagram of leakage and discharge fault detection technology for metro electromechanical equipment
2.1 Acquisition of Running Signals of Subway Electromechanical Equipment

To accurately detect leakage and discharge faults of subway electromechanical equipment, the first step is to use appropriate sensors to obtain complete and accurate operating signals of the equipment, laying a solid foundation for the subsequent screening of operating signal characteristics. In acquiring these signals, the performance of the sensor's receiving antenna is the key factor affecting the integrity of signal transmission. Therefore, an Archimedes helical antenna is used in this study. The Archimedes helix antenna is a planar ultra-wideband antenna made on a PCB. Signal attenuation along the helix is not obvious within the effective radiation area, and it does not become significantly smaller outside the effective radiation area; as a result, the terminal truncates the antenna structure and its performance deteriorates, so the Archimedes antenna is not a frequency-invariant antenna. By appropriately changing the termination structure or adding resistance at the termination, the termination effect can be reduced to approximate a frequency-invariant antenna [4]. The polar coordinate
formula of the double-armed helix of the Archimedes helix antenna is expressed as:

$$R = R_0 + \alpha(\phi - \phi_0) \tag{1}$$
In formula (1), $R$ is the distance from any point of the spiral to the polar coordinate center $O$; $R_0$ is the distance from the start point $A$ of the spiral coil to the polar origin $O$; $\alpha$ is the growth rate of the spiral; $\phi$ is the rotation azimuth; and $\phi_0$ is the starting rotation azimuth. The Archimedes helix antenna has a flat-printed structure, and the width of the spiral metal wire equals the spacing between the wires, forming a self-complementary structure that enables better impedance matching. In the area outside the effective radiation area, the main electromagnetic radiation has already been radiated into space, so that region has no major impact on the antenna; however, since part of the current still passes through the effective radiation area along the helical coil, reflection occurs according to the terminal-effect principle and affects part of the antenna's performance. The Archimedes helix antenna designed above is installed on the sensor and effectively connected, providing support for the complete acquisition of the operating signals of subway electromechanical equipment. The obtained operating signals are integrated into the set $X = \{x_1, x_2, \cdots, x_n\}$, which provides the basis for the subsequent screening of operating signal characteristics.
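For illustration, the following Python sketch samples the two arms of the spiral of formula (1); the starting radius, growth rate, and number of turns are illustrative assumptions, not the design values of the antenna.

```python
import math

def archimedes_arm(r0, growth, phi0, turns, points_per_turn=64):
    """Sample one arm of the spiral R = R0 + alpha*(phi - phi0), formula (1)."""
    pts = []
    for k in range(turns * points_per_turn + 1):
        phi = phi0 + 2 * math.pi * k / points_per_turn
        r = r0 + growth * (phi - phi0)
        pts.append((r * math.cos(phi), r * math.sin(phi)))
    return pts

# Two arms offset by pi give the double-armed, self-complementary layout.
arm1 = archimedes_arm(r0=1.0e-3, growth=0.5e-3, phi0=0.0, turns=5)
arm2 = archimedes_arm(r0=1.0e-3, growth=0.5e-3, phi0=math.pi, turns=5)
```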
2.2 Research on Leakage and Discharge Faults of Subway Electromechanical Equipment

Based on the causes of leakage and discharge faults, the fault types of electromechanical equipment are divided as follows:

(1) Suspension leakage discharge fault. In electromechanical devices, suspended leakage discharges are usually caused by air gaps in solid insulators or by bubbles suspended in liquid insulators. The partial leakage discharge caused by an air gap in the center of the insulation changes with the gas pressure and the electrodes in the system. The partial leakage discharge pulse occurs before the maximum of the positive and negative half cycles; the amplitude and orientation of such pulses are roughly the same and repeatable, and the upper and lower amplitudes may be asymmetrical, which is normal. When the equipment voltage rises to a fixed value, floating leakage discharge of the electromechanical equipment occurs, and its magnitude then exceeds the minimum detectable range. As the applied voltage increases further, the leakage discharge capacity remains generally constant, but the repetition rate of the discharge pulses gradually increases. The extinction voltage is about the same as the inception voltage, and the duration of the applied voltage has little effect on the leakage discharge capacity. If there are two or more small bubbles and leakage discharge occurs, a higher pulse amplitude appears at a higher voltage.
(2) Corona leakage discharge fault. Corona leakage discharges often occur on high-voltage electrical conductors surrounded by gas. In a non-uniform electric field, partial leakage discharges occur on the surface of the charged conductor in the gas or liquid where the field strength is high. Corona leakage discharge basically comprises the leakage discharge of a metal conductor in a liquid insulating medium and leakage discharge in gas. The voltage of the leakage discharge signal is symmetrical in its positive and negative half cycles, and in either half cycle the discharge duration and interval are the same; within any half cycle, the amplitudes of the discharge signal are the same or regularly distributed. When the leakage discharge position of the equipment is very close to ground, the large signal pulse appears in the negative half cycle; when the discharge position is close to high voltage, the pulse appears in the positive half cycle. The initial leakage discharge capacity is greater than the lowest measurable value. In the initial period a large discharge pulse is generated, which then grows as the voltage increases, with the repetition rate increasing accordingly. At very high voltage, smaller pulses are generated in the other half cycle; these small pulses keep the leakage discharge unchanged as the voltage increases. The extinction voltage coincides with the inception voltage. Because partial leakage discharge corrodes the conductor electrode, the amplitude of the leakage discharge signal changes over time.

(3) Leakage discharge fault along the dielectric surface. Creepage discharge arises from dirt and dust on the surface of the insulating medium. Generally this kind of leakage discharge exists on the surface from the outer metal conductor to the medium, and even if there are conductive impurity holes of different sizes in the medium, the leakage discharge pattern is almost unchanged; for this reason, it cannot easily be judged from the waveform alone whether a discharge is on the dielectric surface. However, if the leakage discharge terminal is a high-voltage electrode and the non-discharge terminal is grounded, the discharge pulses in the positive half cycle are large and sparse while those in the negative half cycle are small and dense; with symmetrical electrodes this does not occur, the discharge is almost the same in the positive and negative half cycles, and the pattern is almost symmetrical [5].

(4) Internal leakage discharge fault. Sharp corners and burrs on the metal and insulating parts inside electromechanical equipment cause internal leakage discharges. In an electric field, because the dielectric constant of water is much higher than that of the equipment oil, impurities are first polarized and attracted to the vicinity of the electrode with the strongest field, gradually aligning along the electric field lines and forming an impurity "small bridge". When the impurities between the two poles are few enough and the distance is large enough, the "small bridge" formed is intermittent.
According to the electric field principle, the electric field
in the oil will be distorted by the presence of "small bridges". This is because the dielectric constant of the fiber is relatively large, which strengthens the electric field of the oil at the fiber end, so that leakage discharge occurs from this part and gas is decomposed, producing spark leakage discharge. When the distance between the two poles is small enough and there are enough impurities, the two electrodes may be connected by a "small bridge"; in this case, because the "small bridge" has a large conductance, the current flowing through it is very large and it heats up violently. The oil and water near the "small bridge" gradually boil and vaporize, finally generating a gas channel, the "bubble bridge", which produces spark leakage discharge. If the fiber is not damp, the "small bridge" has a small conductance and the impact of the spark leakage discharge voltage on the oil is relatively small; otherwise, the impact is relatively large. The heating process of the "small bridge" is therefore closely related to impurity-induced spark leakage discharge in electromechanical equipment.

The above process completes the in-depth exploration of the causes and specific types of leakage and discharge faults of subway electromechanical equipment, preparing for the subsequent feature extraction of such faults.

2.3 Feature Extraction of Operating Signal of Electromechanical Equipment

Based on the obtained operating signal set $X$ of the subway electromechanical equipment, the big data analysis technique of the wavelet transform algorithm is applied to scientifically extract the operating signal characteristics, providing the basis for subsequent leakage and discharge fault detection. Singular spectral entropy is one of many time-domain methods for analyzing one-dimensional time series and is a good tool for studying noisy signals. The basis of singular spectral entropy analysis is the singular value decomposition theorem of matrix analysis. Singular values are commonly used in matrix analysis; because of their good stability, they have important applications in signal processing, and singular value decomposition is an important matrix decomposition method in linear algebra and matrix theory. The steps for performing singular spectrum analysis on the operating signals of subway electromechanical equipment are as follows. Choose an analysis window of length $L$ ($L$ is also known as the embedding dimension in embedding analysis). To make full use of the signal's information, a time delay constant of 1 is selected, so the pattern data in the signal are intercepted and analyzed with a window of order $(L, 1)$. Then construct the pattern data matrix, or trajectory matrix, in the embedding space: the sequence $X = \{x_1, x_2, \cdots, x_n\}$ is divided into $n - L$ segments of pattern data with a pattern window of $(L, 1)$, and these data form a
pattern matrix, which is expressed as:

$$Y = \begin{bmatrix} x_1 & x_2 & \cdots & x_L \\ x_2 & x_3 & \cdots & x_{L+1} \\ \vdots & \vdots & \ddots & \vdots \\ x_{n-L+1} & x_{n-L+2} & \cdots & x_n \end{bmatrix} \tag{2}$$
Next, perform singular value decomposition on the pattern data matrix $Y$. Assuming the calculated singular values are $\beta_1 \ge \beta_2 \ge \cdots \ge \beta_L$, the $\beta_i$ constitute the singular value spectrum of the operating signal of the electromechanical equipment. The number of non-zero singular values reflects the number of different patterns in the columns of the data matrix, and the value of $\beta_i$ represents the proportion of the corresponding pattern in the total [6]. The eigenvector corresponding to the largest eigenvalue is regarded as the first-order mode, the eigenvector corresponding to the second largest eigenvalue as the second-order mode, and so on. Through this correspondence between singular values and modes, the singular value spectrum $\{\beta_i\}$ can be viewed as a partition of the operating signal in the time domain. Therefore, the singular spectral entropy of the operating signal of subway electromechanical equipment in the time domain is defined as:

$$G_s = -\sum_{i=1}^{L} \chi_i \log_2 \chi_i, \qquad \chi_i = \frac{\beta_i}{\sum_{i=1}^{L} \beta_i} \tag{3}$$
In formula (3), $G_s$ is the singular spectral entropy of the operating signal of subway electromechanical equipment in the time domain, and $\chi_i$ is the percentage of the $i$-th singular value in the entire singular spectrum, i.e., of the $i$-th mode in the entire pattern. For ease of comparison, the singular spectral entropy is generally normalized; after this processing, the influence of the selected window length on the result is reduced. The expression is:

$$\hat{G}_s = \frac{-\sum_{i=1}^{L} \chi_i \log_2 \chi_i}{\log_2 L} \tag{4}$$
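A minimal NumPy sketch of this time-domain feature follows, combining the trajectory matrix of formula (2) with the normalized singular spectral entropy of formulas (3)–(4); the test signal and window length are illustrative assumptions.

```python
import numpy as np

def singular_spectral_entropy(x, L):
    """Normalized singular spectral entropy per formulas (2)-(4)."""
    n = len(x)
    # Trajectory matrix Y, formula (2): rows are length-L windows, delay 1.
    Y = np.array([x[i:i + L] for i in range(n - L + 1)])
    beta = np.linalg.svd(Y, compute_uv=False)  # singular spectrum {beta_i}
    chi = beta / beta.sum()                    # mode proportions chi_i
    chi = chi[chi > 0]                         # avoid log2(0)
    entropy = -(chi * np.log2(chi)).sum()      # formula (3)
    return entropy / np.log2(L)                # normalization, formula (4)

# Illustrative use on a synthetic noisy sine "operating signal".
t = np.linspace(0.0, 1.0, 1000)
signal = np.sin(2 * np.pi * 50 * t) + 0.1 * np.random.randn(t.size)
print(singular_spectral_entropy(signal, L=20))
```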
By analyzing the operating signal of the subway electromechanical equipment in the frequency domain, its information entropy characteristics, i.e., the power spectrum of the operating signal, can also be obtained. Suppose the discrete Fourier transform of the running signal $\{x_i\}$ is:

$$x(\delta) = \frac{1}{2\pi n} \sum_{i=1}^{n} x_i e^{-\mathrm{j}\delta i} \tag{5}$$
In formula (5), $x(\delta)$ is the discrete Fourier transform result of the operating signal, and $\delta$ is the discrete Fourier transform factor, ranging from 0 to 1. From the result of formula (5), the power spectrum of the operating signal of the electromechanical equipment is obtained as:

$$H(\delta) = \frac{1}{2\pi n} |x(\delta)|^2 \tag{6}$$
In formula (6), $H(\delta)$ is the power spectrum of the operating signal of the electromechanical equipment. The power spectrum $H = \{H_1, H_2, \cdots, H_n\}$ at each frequency can likewise be regarded as a partition of the equipment's operating signal. The power spectrum entropy is then:

$$F_a = -\sum_{i=1}^{L} \gamma_i \log_2 \gamma_i, \qquad \gamma_i = \frac{H_i}{\sum_{i=1}^{L} H_i} \tag{7}$$
In formula (7), $F_a$ is the power spectrum entropy of the operating signal of the subway electromechanical equipment in the frequency domain, and $\gamma_i$ is the difference measure value of the operating signal. The power spectrum entropy is also maximal when the signal is white noise, so the result of formula (7) can again be normalized:

$$\hat{F}_a = \frac{-\sum_{i=1}^{n} \gamma_i \log_2 \gamma_i}{\log_2 L} \tag{8}$$
The above research obtains the information entropy characteristics of the operating signal of the electromechanical equipment in the time domain and the frequency domain. The two study the operating signal from different perspectives, which enlarges the feature set and increases the computation required for leakage and discharge fault detection. Therefore, this study applies the big data analysis technique of the wavelet transform algorithm to effectively combine the time-domain and frequency-domain analysis results and obtain the information entropy characteristics of the joint time-frequency domain [7]. The singular spectral entropy and power spectral entropy of the operating signal are mapped into the wavelet space, and the joint time-frequency information entropy is obtained with weight coefficients:

$$K_i = \frac{\tau_1 \hat{G}_s(i) + \tau_2 \hat{F}_a(i)}{5 \upsilon_0} \tag{9}$$
In formula (9), $K_i$ is the joint time-frequency information entropy feature of the operating signal of subway electromechanical equipment; $\tau_1$ and $\tau_2$ are the weight coefficients for the singular spectrum entropy and the power spectrum entropy, respectively; and $\upsilon_0$ is the wavelet transform coefficient. The above process completes the extraction of the joint time-frequency information entropy, the characteristic of the operating signal of subway electromechanical equipment, laying a solid foundation for subsequent leakage and discharge fault detection.
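For completeness, a matching sketch of the frequency-domain and joint features follows; the FFT-based power spectrum stands in for formulas (5)–(6), and the default values of $\tau_1$, $\tau_2$, and $\upsilon_0$ are illustrative assumptions, since the paper does not fix them.

```python
import numpy as np

def power_spectral_entropy(x):
    """Normalized power spectrum entropy per formulas (5)-(8)."""
    n = len(x)
    spectrum = np.abs(np.fft.rfft(x)) ** 2 / (2 * np.pi * n)  # power spectrum H
    gamma = spectrum / spectrum.sum()                          # proportions gamma_i
    gamma = gamma[gamma > 0]
    # L is taken here as the number of spectral bins (an assumption).
    return -(gamma * np.log2(gamma)).sum() / np.log2(len(spectrum))

def joint_entropy_feature(g_s_hat, f_a_hat, tau1=0.5, tau2=0.5, upsilon0=1.0):
    """Joint time-frequency feature K_i per formula (9)."""
    return (tau1 * g_s_hat + tau2 * f_a_hat) / (5.0 * upsilon0)
```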
2.4 Leakage and Discharge Fault Detection of Electromechanical Equipment

The leakage and discharge fault signals of the subway electromechanical equipment are obtained, and the process of the previous section is used to extract their joint time-frequency information entropy $K^*$. On this basis, combined with the support vector machine algorithm, accurate detection of leakage and discharge faults of the electromechanical equipment is realized. The support vector machine is a binary classification model whose purpose is to find a hyperplane that accurately and completely segments the samples, with segmentation based on maximizing the interval. Before dividing the samples, the data vectors in the low-dimensional space are mapped into a high-dimensional space, where a maximum-interval hyperplane is established and the original data are completely divided into two independent classes, i.e., binary classification of the data is realized [8]. Support vector machines handle overfitting and local-optimum problems very well and are especially suitable for solving high-dimensional and nonlinear problems. If a linear function can separate the samples, the data samples are said to be linearly separable. In other words, a linear function in two-dimensional space is a straight line, a linear function in three-dimensional space is a plane, and so on; ignoring the dimension of the space, such linear functions are collectively called hyperplanes. Linearly separable SVMs deal with strictly linearly separable data sets, and all points in the data set must strictly satisfy the linear separability constraints in order to apply linearly
Fig. 2. Example diagram of linearly separable samples (normal samples and leakage discharge fault samples separated, with an interval, by the optimal classification hyperplane)
separable SVMs. An example of a linear function in a linearly separable two-dimensional space is shown in Fig. 2. As shown in Fig. 1, interval calculation is very important in the application of SVM algorithm. The essence of the interval is equal to the projection of the difference between two heterogeneous support vectors on the support vector space. The calculation formula is: ζ =
$\zeta = \dfrac{(K_i - K^*) \cdot V \cdot \Omega}{\Lambda}$   (10)
In formula (10), ζ represents the interval; Ω represents the support vector space; V represents the interval calculation parameter; Λ represents the range of the support vector space. In addition, to construct an SVM with good performance, the choice of kernel function is key [9]. Different types of kernel functions correspond to different mapping spaces and thus show different properties, which in turn determine different nonlinear problem-solving capabilities as well as different applicable scopes and environments. Therefore, the appropriate kernel function must be selected according to the specific data. The four commonly used kernel function types are as follows:

(1) Linear kernel function, the most basic kernel function, suitable for the situation in which the number of features is much larger than the number of samples. The expression is:
$l(K_i, K_j) = K_i \cdot K_j$   (11)

In formula (11), Ki and Kj represent the operating signal characteristics of any two electromechanical devices.

(2) Polynomial kernel function, a global kernel function. The higher the data dimension, the easier the classification; its generalization ability is strong, but its learning ability is weak. The expression is:
$d(K_i, K_j) = (K_i \cdot K_j + 1)^d$   (12)
In formula (12), d represents the total number of terms contained in the polynomial.

(3) RBF kernel function, suitable for the situation in which the number of features is much smaller than the number of samples. In the absence of prior knowledge of the samples, the RBF kernel function will generally achieve good results; the RBF-kernel support vector machine therefore has strong learning ability and is the most widely used kernel function at present. The expression is:

$r(K_i, K_j) = \exp\left(\dfrac{-\|K_i - K_j\|^2}{\sigma^2}\right)$   (13)

In formula (13), σ represents the radial basis variance of the operating signal characteristics of the electromechanical equipment.
(4) The sigmoid kernel function, which is also a global kernel function, has strong generalization ability and weak learning ability. It is relatively limited in application and must meet certain conditions to apply. The expression is:
$g(K_i, K_j) = \tanh(\vartheta \langle K_i, K_j \rangle + \mu)$   (14)

In formula (14), ϑ and μ represent the auxiliary factors of the sigmoid kernel function. According to the operating signal characteristics of subway electromechanical equipment, the RBF kernel function is selected as the kernel function for leakage and discharge fault detection of electromechanical equipment [10]. By introducing the RBF kernel function into the support vector machine algorithm to obtain the optimal classification hyperplane, the operating signals of subway electromechanical equipment can be quickly and accurately divided into normal signals and leakage discharge fault signals, thus realizing effective leakage and discharge fault detection for subway electromechanical equipment.
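To make this detection step concrete, the following is a minimal illustrative sketch (not the authors' implementation) of training an RBF-kernel support vector machine to separate normal signals from leakage-discharge fault signals, assuming scikit-learn is available; the feature vectors, the gamma/C values and the data themselves are synthetic placeholders standing in for the joint time-frequency entropy features of Eq. (9).

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Placeholder entropy features K_i: normal signals cluster around one
# entropy level, fault signals around another (purely synthetic data).
normal = rng.normal(loc=0.3, scale=0.05, size=(200, 2))
fault = rng.normal(loc=0.7, scale=0.05, size=(200, 2))
X = np.vstack([normal, fault])
y = np.array([0] * 200 + [1] * 200)  # 0 = normal, 1 = leakage discharge fault

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# RBF kernel as in Eq. (13); sklearn's gamma plays the role of 1/sigma^2.
clf = SVC(kernel="rbf", gamma=1.0, C=10.0)
clf.fit(X_train, y_train)
print("detection success rate: %.2f%%" % (100 * clf.score(X_test, y_test)))
```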
3 Experiment and Result Analysis

3.1 Experiment Preparation Stage

In order to verify the application performance of the proposed technology, a subway electromechanical equipment room is selected as the experimental object. The selected equipment room contains a variety of electromechanical equipment, can exhibit all types of leakage discharge faults, and meets the testing requirements of leakage discharge fault detection. At the same time, the proposed technology applies the big data analysis technique, the wavelet transform algorithm, which involves the wavelet transform coefficient υ0; this coefficient directly determines the accuracy of feature extraction for electromechanical equipment operating signals.
Fig. 3. The relationship between the wavelet transform coefficient υ0 and the accuracy of feature extraction of electromechanical equipment operating signals (feature extraction accuracy in % versus parameter value)
Therefore, before the experiment is carried out, the optimal value of the wavelet transform coefficient υ0 needs to be determined. The relationship between the wavelet transform coefficient υ0 obtained through testing and the accuracy of feature extraction of electromechanical equipment operating signals is shown in Fig. 3. As shown in Fig. 3, when the value of the wavelet transform coefficient υ0 is 0.3, the feature extraction accuracy of the electromechanical equipment operating signal reaches its maximum value of 85.5%. Therefore, the optimal value of the wavelet transform coefficient υ0 is determined to be 0.3.

3.2 Analysis of Experimental Results

Based on the selected experimental objects and the determined experimental parameters, the leakage and discharge fault detection experiment of subway electromechanical equipment is carried out. In order to improve the accuracy of the experimental conclusions, the methods of Reference [2] and Reference [3] are used as experimental comparison methods, and 10 different experimental conditions are set, as shown in Table 1.

Table 1. Setting table of experimental conditions

Experimental condition number | Electromechanical equipment operating time/d | Leakage discharge fault occurrence times
1  | 30  | 1
2  | 60  | 6
3  | 90  | 10
4  | 120 | 12
5  | 150 | 15
6  | 180 | 18
7  | 210 | 20
8  | 240 | 23
9  | 270 | 29
10 | 300 | 30
Taking the experimental conditions in Table 1 as background conditions and the success rate of leakage and discharge fault detection as the evaluation index, the effectiveness and feasibility of the proposed technology are verified. The success rate data obtained through the experiments are shown in Table 2. As the data in Table 2 show, the leakage and discharge fault detection success rates obtained by applying the proposed technology are all greater than the given minimum limit; the maximum value reaches 95.10% and the minimum value still reaches 84.12%, which fully confirms the effectiveness and feasibility of the proposed technology.
Table 2. Leakage discharge fault detection success rate data table
Experimental condition number | The method of this paper | Reference [2] method | Reference [3] method | Minimum limit
1  | 90.12% | 85.02% | 80.10% | 65.00%
2  | 95.10% | 78.23% | 79.80% | 75.41%
3  | 89.25% | 70.06% | 72.77% | 61.02%
4  | 84.12% | 64.02% | 78.05% | 54.12%
5  | 86.59% | 67.95% | 76.19% | 75.30%
6  | 88.01% | 63.50% | 65.92% | 61.48%
7  | 87.52% | 64.58% | 63.51% | 58.49%
8  | 90.15% | 62.85% | 50.95% | 52.10%
9  | 91.40% | 59.14% | 56.80% | 48.62%
10 | 90.06% | 60.12% | 65.34% | 62.78%
4 Conclusion

The subway is composed of a variety of electromechanical equipment. With the upgrading and development of this equipment, its internal structure has become more and more complex, resulting in a sharp increase in the frequency of leakage and discharge faults, which restricts the development and application of subways. Therefore, a leakage and discharge fault detection technology for subway electromechanical equipment based on big data analysis is proposed. The experimental results show that the technology greatly improves the success rate of leakage and discharge fault detection, provides an effective basis for handling leakage and discharge faults of electromechanical equipment, and also provides a guarantee for the safety of subway operation.
References

1. Min, C., Xiaobo, L., Ruiyi, W., et al.: Reliability prediction of metro traction inverter system based on grey optimization. Comput. Simul. 37(7), 168–171 (2020)
2. Fan, G., Xiubin, X., Yujing, C.: Design of on-line detection terminal for leakage and discharge fault of metro distribution equipment. Microcomput. Appl. 37(8), 82–85 (2021)
3. Wenzeng, X.: Analysis and improvement measures of leakage current in metro station power distribution system. Urban Mass Transit 23(12), 169–172 (2020)
4. Xu, J.: Big NB-IoT data: enhancing portability of handheld narrow-band internet of things performance on big data technology. Mob. Inf. Syst. 2021(5), 1–6 (2021)
5. Evstifeev, A.A., Zaeva, M.A.: Method of applying fuzzy situational network to assess the risk of the industrial equipment failure. Procedia Comput. Sci. 190(150), 241–245 (2021)
6. Zhou, H., Zheng, Z., Cen, X., et al.: A data-driven urban metro management approach for crowd density control. J. Adv. Transp. 2021, 1–14 (2021)
7. Ramere, M.D., Laseinde, O.T.: Optimization of condition-based maintenance strategy prediction for aging automotive industrial equipment using FMEA. Procedia Comput. Sci. 180(16), 229–238 (2021)
8. Yu, W., Dillon, T., Mostafa, F., Rahayu, W., Liu, Y.: A global manufacturing big data ecosystem for fault detection in predictive maintenance. IEEE Trans. Industr. Inf. 16(1), 183–192 (2020)
9. Xu, W., Yuan, C., Peng, K., et al.: Big data driven urban railway planning: Shenzhen metro case study. J. Comb. Optim. 42, 593–615 (2021)
10. Liang, S., Zhang, S., Zheng, X., et al.: Online fault detection of fixed-wing UAV based on DKPCA algorithm with multiple operation conditions considered. J. Northwest. Polytech. Univ. 38(3), 619–626 (2020)
Research on Random Intrusion Depth Detection of Internet of Things Based on 3D Convolutional Neural Network

Xingfei Ma1(B) and Wuguang Wang2

1 Wuxi Vocational Institute of Commerce, Wuxi 214153, China
[email protected]
2 WuXi City College of Vocational Technology, Wuxi 214153, China

(Supported by the 2023 Philosophy and Social Science Research Funds for Colleges and Universities in Jiangsu Province: Research on Innovation Talent Cultivation Path of Industry-Education Integration in Higher Vocational Colleges under the Perspective of Talent Chain, Industry Chain, Innovation Chain and Value Chain, Project No. 2023SJYB0979.)
Abstract. The industrial Internet of Things faces many problems, such as a low feature extraction rate, low detection efficiency and poor adaptability. To solve these problems, a random intrusion depth detection method based on a three-dimensional convolutional neural network is proposed. An intrusion detection model of the Internet of Things is built according to NIDS, through which distributed network data packets are collected; the principal component analysis algorithm is used to preprocess them and reduce the data dimension. Combined with deep learning theory and technology, data features are selected to form a feature matrix. With this as the input, random intrusion detection in the Internet of Things is completed by using a 3D convolutional neural network (3D CNN) combined with the long short-term memory (LSTM) method. The experimental results show that the F1 value of the detection method is above 0.9, indicating that the detection accuracy of the method is high.

Keywords: Three-Dimensional Convolutional Neural Network · Internet of Things · Random Intrusion · Depth Detection Method
1 Introduction

With the deepening of IoT applications, the complex network environment and endless attacks pose many challenges, such as hacker intrusion, security vulnerability attacks, worms and so on. The IoT can be divided into three layers: the perceptual layer, the network layer and the application layer. The perceptual layer mainly concerns data security, such as preventing malicious node attacks, sampling attacks and data forgery. Network layer security is represented by preventing DoS attacks and ensuring routing security [1]. Application layer security must satisfy user privacy and access control. At present, most security mechanisms for the Internet of Things tend to be passive, and
Intrusion Detection (ID) can monitor the transmitted data of the network in real time and take measures to monitor, analyze and warn of intrusions, so as to improve the network's ability to cope with external threats. Traditional intrusion detection research is not perfect; the following problems remain: the Internet of Things environment is complex, and the collected network traffic data are high-dimensional. As the operating environment and structure of the Internet of Things change, the model must be constantly updated to detect new, unknown attacks. Against this background, a random intrusion depth detection method for the Internet of Things based on a three-dimensional convolutional neural network is proposed. An intrusion detection model of the Internet of Things is built through NIDS to collect distributed network data packets. To reduce the data dimension, the principal component analysis algorithm is used for preprocessing. Using the 3D CNN and LSTM methods, the accuracy of intrusion detection is enhanced, thereby realizing random intrusion detection of the Internet of Things.
2 Design of Intrusion Detection Model for Internet of Things

Network intrusion detection is the process of discovering unauthorized users using or attempting to use a computer system, as well as legal users abusing their privileges. Network intrusion detection can be divided into misuse intrusion detection and anomaly intrusion detection. Misuse intrusion detection, also known as knowledge-based or feature-based intrusion detection, extracts the pattern features of various attacks and forms a rule base. Misuse intrusion detection can only identify known attacks, so there is a certain under-reporting rate, but its rule base can be continuously expanded by self-learning to detect new attacks and reduce the under-reporting rate [2]. Anomaly intrusion detection, also known as statistical or behavior-based intrusion detection, is used to identify abnormal behaviors that differ greatly from normal activities on the host computer or network. Anomaly detection first collects historical data on normal operational behaviors within a certain period of time in order to establish a normal behavior profile for the user, host computer or network connection; it then collects event data and uses neural networks, genetic algorithms or traditional statistical analysis to determine whether the current behaviors are abnormal by comparing them with normal behavior patterns. According to the distribution of data collection, analysis and response, intrusion detection can also be divided into centralized and distributed network intrusion detection. Centralized intrusion detection uses a single host to analyze audit data or network traffic and look for possible intrusion behavior; because of this centralized processing, the host that realizes the intrusion detection function becomes the bottleneck of the system: on the one hand, system performance suffers because the host undertakes too much work; on the other hand, the host is often the primary target of attack, and once it is broken, the security of the system cannot be guaranteed. Distributed network intrusion detection uses multiple agents distributed across the network to carry out intrusion detection and to deal with possible intrusions cooperatively; this method places the detection agents at interesting or important positions on the network, where they run independently and autonomously, collect intrusion information, and detect network intrusions independently or cooperatively. This intrusion detection method realizes functional and security
decentralization, solves the problem of single-point failure (or restricts it within a certain scope), and does not seriously affect the security performance of the system.

2.1 Network Packet Collection

Generally speaking, intrusion detection systems can be divided into host type and network type. Host intrusion detection systems (HIDS) often use system logs, application logs, etc. as data sources; of course, other means (such as monitoring system calls) can also be used to collect information from the host for analysis. The data source of a Network Intrusion Detection System (NIDS) is the data packets on the network, and the network card of the network intrusion detection engine can be set to promiscuous mode to monitor and judge all data packets within the network segment. When installing a NIDS, the key is to choose the location of the data acquisition point, because it determines the visibility of "events". The data acquisition point has several possibilities: (1) if the network segment is connected by a bus hub, the NIDS can simply be connected to one port of the hub; (2) for a switched Ethernet network, the problem becomes complicated: because the switch does not use a shared medium, the traditional use of a sniffer to listen to the entire subnet is no longer feasible, and the NIDS must be deployed on the switch's monitoring port to obtain all data flowing through the switch. A NIDS consists of the following elements: agents, forwarders, monitors, filters, and user interfaces; its structure is shown in Fig. 1.
(The diagram shows users reaching the protected network through the Internet, a router and a switch, with detection agents, an HIDS analysis engine and server, and an HIDS management terminal attached.)

Fig. 1. Composition of NIDS
An agent is an independent entity responsible for monitoring some aspect of a host; it reports the information it collects to a forwarder. For example, an agent can monitor the many connections to a host and collect suspicious information. The agent reports Telnet findings to the specified forwarder, but it does not have the authority to generate an alert directly; usually the forwarder or monitor generates an alarm based on the information sent by the agent. Reports sent by a forwarder from its different agents provide a complete picture of the host's status, and reports sent by a monitor from its different forwarders provide a complete picture of the monitored network [3]. Filters can be seen as a layer of data selection and abstraction for agents. When an agent collects host information, it encounters two problems: (1) there may be multiple agents in a system using the same data source, such as the same Unix log file, and each agent processing that data source means repeated effort; (2) an agent may be used on different Unix versions, but different Unix versions may use different file locations and file formats, which means writing several agents to accommodate the different versions. The filter is proposed to solve these two problems: the agent passes a condition to the filter, and the filter returns the records that match the condition. Filters also address issues related to system versions, enabling agents that implement uniform functionality to work with different operating systems. The forwarder is responsible for collecting the information provided by each agent on the host and communicating with the outside world. Each host under distributed monitoring has a forwarder that controls the work of the agents, processes the data from the agents, and responds to requests from the monitor. The monitor controls the work of the forwarders and processes the data they send. The biggest difference between a monitor and a forwarder is that the monitor controls forwarders on different hosts while a forwarder controls the agents on a single host. The monitor communicates with the user interface, thus providing a managed entry point to the entire distributed system. The NIDS user interface provides users with visual graphics for management and communicates user instructions to the monitor. The network packet collection unit is the basic component of a NIDS. Generally, it intercepts all the traffic of the whole network, filters out unconcerned data according to source host, destination host and service protocol port information, and then sends the data of interest to higher applications for analysis. On the one hand, the network packet collection module should be able to collect all packets on the network, especially fragmented packets (which may contain attacks); on the other hand, the efficiency with which the data interception module intercepts packets is also very important, as it directly affects the speed of the whole NIDS and its adaptability to modern high-speed networks [4]. In a general network environment, the API interface provided by the system can be used directly to collect data packets; this method is very simple and easy to implement, but its functionality is limited and inefficient. At present, much software uses the libpcap library to collect network packets, which can filter the data to reduce the amount that needs analysis.
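As an illustration of this collection step, the sketch below uses Scapy, a Python packet library built on a libpcap-style capture backend, to grab packets and apply a BPF filter before analysis; the filter expression, packet count and handler are illustrative assumptions, not part of the original system, and capture requires administrator privileges.

```python
from scapy.all import sniff  # Scapy wraps a libpcap-style capture backend

def handle(pkt):
    # Forward only summary information of interest to the analysis engine.
    print(pkt.summary())

# The BPF filter drops unconcerned traffic at capture time, reducing the
# data that the higher detection layers must analyze (illustrative filter).
sniff(filter="tcp or udp", prn=handle, count=10)
```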
2.2 Packet Preprocessing

The structure of the packet preprocessing stage is shown in Fig. 2.
(Distributed network packets pass through a preprocessor equipped with a decoding plug-in and a port scanning plug-in before reaching intrusion detection.)

Fig. 2. Structure Diagram of Data Packet Preprocessing
(1) Data dimension reduction. The basic idea of principal component analysis is to construct a series of linear combinations of the original variables to form several comprehensive indexes that remove the correlation in the data, so as to keep the variance information of the original high-dimensional data to the greatest extent. The Principal Component Analysis (PCA) algorithm is a linear dimensionality reduction method. Combining mutual information screening (MIS) with PCA, this paper puts forward a new algorithm, called the MIS-PCA algorithm. First, the algorithm transforms the data set into a matrix and calculates each attribute's mutual information value (MI), lesser mutual information value (LessMI) and maximum mutual information value (MaxMI); it then calculates the absolute mutual information reliability and relative mutual information reliability according to the correlation of MI, LessMI and MaxMI, and obtains the comprehensive mutual information reliability from the absolute and relative reliabilities. Finally, the matrix is screened using the comprehensive mutual information reliability, and the dimension is reduced by principal component analysis [5]. The principle of MIS-PCA based on comprehensive mutual information credibility is the same as that of PCA; the only difference is that a feature screening is performed before dimension reduction: the threshold k is set to filter out feature attributes that carry almost no useful information from the raw data. The specific steps of the MIS-PCA algorithm are as follows:

Input: data matrix Anm, where m represents the number of samples and n represents the number of attributes; comprehensive mutual information confidence threshold k; contribution rate S.

Step 1: calculate the MI (mutual information value), MIA (absolute mutual information credibility), MIR (relative mutual information credibility) and MIS (comprehensive mutual information credibility) of each attribute, compare MIS with the comprehensive mutual information credibility threshold k, and filter the features to obtain a new matrix B.

Step 2: use the PCA algorithm to reduce the dimension of matrix B. The operations are as follows:
1) Centralize the sample matrix to obtain matrix Cnm;
2) Compute the covariance matrix D based on Cnm;
3) Compute the eigenvalues ei and eigenvectors Ei of the covariance matrix;
4) Select the transform basis: the M largest eigenvalues are selected, and the corresponding M eigenvectors are used as column vectors to form the eigenvector matrix EnM;
5) Calculate the reduced-dimension data matrix $F = EE^T$;
6) Use the PCA algorithm to reduce the dimension of matrix B.

In this algorithm, the contribution rate is the ratio of the sum of the selected eigenvalues to the sum of all eigenvalues, as expressed by formula (1). In addition, in the algorithm design, M is not taken as a parameter input; M is actually selected according to the contribution rate of the eigenvalues.

$S = \dfrac{\sum_{i=1}^{M} e_i}{\sum_{i=1}^{n} e_i}$   (1)
where ei represents the eigenvalue. The principal component analysis (PCA) method is easy to calculate and strongly interpretable. The PCA algorithm measures information by the variance of the data: the larger the variance, the more key information is contained; otherwise, the less information is contained. PCA is therefore a process that projects high-dimensional data onto a new coordinate system so as to represent the data in the directions of maximum variance.

(2) Data standardization. In data analysis, it is very common for data to have units; for example, GDP may be measured in units of 100 million or of millions, and then the magnitudes of the numbers differ purely because of the units. This situation may affect the analysis and therefore needs to be handled, provided that the relative meaning of the numbers is not lost: if a larger number represented higher GDP before processing, the processed data must not lose this property [6]. To calculate a distance, the digits 1 and 2 can be subtracted directly to give the distance value 1; another pair of digits, 10000 and 20000, can be subtracted directly to give the distance value 10000. If a larger difference meant a farther distance, then 10000 would apparently be greater than 1, but this is only due to the data units, not the actual expectation. In such cases, before data analysis, it is sometimes necessary to standardize the data. Data standardization converts the original data into dimensionless indicator evaluation values through a mathematical transformation, so that all indicator values are on the same quantitative level and comprehensive analysis and comparison can be carried out. Standardization is one of the most common dimensional processing methods. The calculation formula is:
$a_i' = \dfrac{a_i - b}{c}$   (2)

In the formula, $a_i'$ represents the processed data, $a_i$ represents the pre-processed data, and b and c represent the average and standard deviation of the data.
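A minimal sketch of the standardization and dimension-reduction steps around Eqs. (1)–(2), assuming scikit-learn is available; the data matrix is a random placeholder, the contribution-rate target 0.95 is an illustrative choice, and the MIS feature screening that precedes PCA in the MIS-PCA algorithm is omitted.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
A = rng.normal(size=(500, 41))  # placeholder: m = 500 samples, n = 41 attributes

# Standardization, Eq. (2): subtract the mean b, divide by the std dev c.
B = StandardScaler().fit_transform(A)

# PCA, Eq. (1): passing a float selects M so that the ratio of retained
# to total eigenvalues (the contribution rate S) reaches at least 0.95.
pca = PCA(n_components=0.95)
F = pca.fit_transform(B)
print(F.shape, pca.explained_variance_ratio_.sum())
```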
This processing gives the data the characteristic that their average value must be 0 and their standard deviation must be 1: the scale of the data is compressed, and the data acquire these special properties. This kind of processing is used in many research algorithms; for example, standardization is necessary before cluster analysis, and it is used by default in factor analysis. In clustering, for instance, the underlying algorithm measures clusters based on distance, so SPSSAU selects normalization by default [7]. In addition, some special research methods, such as sociological mediation or moderation studies, may also standardize the data.

2.3 Intrusion Identification Based on 3D CNN

In a 2D convolutional neural network, the convolution operation is applied to 2D feature maps, and the convolution process can only learn features from the spatial dimension. When used in video analysis problems, the motion information of multiple consecutive frames must be captured. Therefore, 3D convolution and 3D pooling are performed during the convolution and pooling phases of the CNN to compute features from the spatial and temporal dimensions simultaneously. 3D convolution is achieved by convolving a 3D kernel over a cube formed by stacking several consecutive frames. Through this structure, the feature maps in the convolution layer are connected to multiple consecutive frames in the previous layer to capture the target information. One 3D convolution kernel can extract only one type of feature from the frame cube, because the kernel weights are replicated across the entire cube. The general design principle of CNNs is to increase the number of feature maps by generating multiple types of features from the same set of lower-level feature maps. Based on the 3D convolution operation, different 3D CNN networks can be designed for different analysis tasks.

2.3.1 Construction of the Data Characteristic Matrix

Feature extraction is an indispensable step in intrusion detection; it directly affects the efficiency and accuracy of detection and thus the performance of the detection model. Features, also known as attributes, describe the characteristics of a thing in one way or another. In the 3D CNN algorithm, the more information a feature contains, the more advantageous it is for distinguishing different things. However, a high-dimensional feature space leads to the curse of dimensionality: the required sample size increases exponentially with the dimension, and model processing time increases. Moreover, some features in high-dimensional data may contain very little or irrelevant information, which contributes nothing to the classification performed by machine learning algorithms. Intrusion detection is a classification problem used to distinguish abnormal data. In designing an intrusion detection algorithm, two problems are usually considered: one is the recognition performance of the model, and the other is whether the input data contain valid features. Therefore, the
choice of classification algorithm and the data size determine the efficiency of the subsequent model, and feature extraction from the original data set is a crucial step. The Information Gain (IG) algorithm measures, on the basis of information entropy, the degree to which a feature hi reduces the information uncertainty of the classification system. For a classification system, the sample data set is H = {h1, h2, ..., hi, ..., hM} and the sample category set is G = {g1, g2, ..., gM}, where hi = {hi(1), hi(2), ..., hi(j), ..., hi(q)}T, hi(j) is the j-th feature of sample hi, and hi(j) ∈ {d1, d2, ..., dl, ..., dR}, with dl the l-th value of feature j; that is, the data set has M samples, each sample has q features, and each feature has R values. The value set of the sample category is gi ∈ {ĝ1, ĝ2, ..., ĝp}; that is, there are p sample categories. In information theory, information entropy represents the uncertainty of an information system caused by the existence of random variables. Assuming that the probability of occurrence of each category ĝi is f(ĝi), the entropy of the classification system is:

$V(G) = -\sum_{i=1}^{p} f(\hat{g}_i) \ln f(\hat{g}_i)$   (3)
Under the condition that the value of feature hi(j) is dl, the conditional entropy of the system is defined as:

$V(G|d_l) = f(d_l)\sum_{i=1}^{p} f(\hat{g}_i|d_l) \ln f(\hat{g}_i|d_l)$   (4)
When the value of feature hi(j) is dl, the information gain of the feature is defined as the difference between the entropy V(G) of the classification system and the conditional entropy V(G|dl) of feature hi(j) given dl:

$U(d_l) = V(G) - V(G|d_l)$   (5)
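The following is a minimal sketch of the information-gain ranking in Eqs. (3)–(5), assuming discrete feature values and class labels, with frequencies used to estimate the probabilities f(ĝ_i); the conditional entropy is computed in the standard weighted form, and the feature values and labels are placeholders.

```python
import numpy as np
from collections import Counter

def entropy(labels):
    # V(G) in Eq. (3): -sum f(g) ln f(g), with frequencies as probabilities.
    n = len(labels)
    return -sum((c / n) * np.log(c / n) for c in Counter(labels).values())

def info_gain(feature, labels):
    # U in Eq. (5): the entropy reduction achieved by conditioning on the feature.
    n = len(labels)
    cond = 0.0
    for v in set(feature):
        sub = [lab for f, lab in zip(feature, labels) if f == v]
        cond += (len(sub) / n) * entropy(sub)  # weighted V(G|d_l), cf. Eq. (4)
    return entropy(labels) - cond

feature = [0, 0, 1, 1, 1, 0]             # placeholder feature values d_l
labels = ["n", "n", "a", "a", "a", "n"]  # placeholder classes
print(info_gain(feature, labels))
```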
The information gain value U(dl) indicates the degree to which the uncertainty of the data set classification is reduced by the value dl of feature hi(j). The greater the U(dl) value, the greater the reduction in uncertainty, the more important the feature is to the classifier, and the stronger its classification ability. Conversely, the smaller U(dl) is, the less the uncertainty of the data set classification is reduced by dl, the less important the feature is to the classifier, and the weaker its classification ability. In this paper, the features in the training set are ranked in descending order of U(dl), and a small number of the highest-ranked features are selected to achieve feature extraction. Features that rank low and have no or little influence on classification are eliminated, reducing the computational load of the classifier and improving operating efficiency. Based on the features extracted by the information gain calculation, the feature matrix is constructed. First, an MI-based ranking strategy is adopted to expand the samples at each time instant into a dynamic feature matrix. The real-time features in the matrix reflect the
relevant features at the current time, and the delay features reflect the dynamic characteristics of the system operation. In this experiment, the data sets pass the nonparametric test of the SPSS software, and the minimum confidence levels are all greater than 0.05, which conforms to a Gaussian distribution. Therefore, this paper uses the PCA feature extraction algorithm to obtain the principal component space Y, and introduces the principal component features of the first z moments of Y to form the following augmented matrix:

$Y = \begin{bmatrix} y_1^{t-z} & y_1^{t-z+1} & \cdots & y_1^t \\ y_2^{t-z} & y_2^{t-z+1} & \cdots & y_2^t \\ \vdots & \vdots & \ddots & \vdots \\ y_r^{t-z} & y_r^{t-z+1} & \cdots & y_r^t \end{bmatrix}$   (6)

where Y is the augmented matrix, $y_i^t$ represents the value of the i-th feature at time t, and r is the number of selected features. The MI between each feature and all other time features is calculated to measure the serial correlation between features, i.e. J(i, j):

$J(i, j) = y_i^t \cdot Y$   (7)

Then the characteristic matrix J is constructed:

$J = \begin{pmatrix} o_{11} & \cdots & o_{1r} \\ \vdots & \ddots & \vdots \\ o_{t1} & \cdots & o_{tr} \end{pmatrix}$   (8)
The quantization standard for the entries of matrix J is as follows: if MI is greater than 1, the value is 5; if MI is between 0.8 and 1, the value is 4; if MI is between 0.6 and 0.8, the value is 3; if MI is between 0.4 and 0.6, the value is 2; if MI is between 0.2 and 0.4, the value is 1; and if MI is less than 0.2, the value is 0.

2.3.2 Intrusion Identification

The constructed data feature matrix is regarded as a feature map composed of numbers, and the 3D convolutional neural network algorithm (3D CNN), commonly used in deep learning, is applied for intrusion recognition. 3D CNN is one of the representative algorithms in the field of deep learning. It is a feedforward neural network that includes convolution computation and is widely used in image processing, natural language processing and many other fields. In a 3D CNN, the parameter sharing of the convolution kernels and the sparsity of the connections between layers enable the network to extract feature information with less computation; the effect is stable, and no additional feature engineering of the data is required. The error function value is
obtained by calculating the difference between the real value and the predicted value, so as to adjust the network parameters in reverse until the model reaches the optimum. In a typical 3D CNN, the first several layers are convolutions, and the last layers close to the output are fully connected one-dimensional networks (i.e., traditional BP neural networks). In a 3D CNN, nonlinear mappings of different features can be realized through the convolution layers. Feature extraction is also mainly completed by the convolution layers, which output feature vectors after multiple convolution calculations. During convolution, after the current receptive field and the convolution kernel complete a convolution operation, the kernel slides to the next window to continue, and the sliding amplitude is set by the convolution stride. The convolution operation is defined as follows:

$\beta_i^j = \varsigma\left(\sum_{i \in \chi_j} w_{ij}\, \alpha_i * \gamma_{ij} + \delta_j\right)$   (9)

where $\beta_i^j$ represents the convolution output of layer j; $\alpha_i$ represents the characteristic matrix of layer i; $\gamma_{ij}$ and $w_{ij}$ represent the convolution kernel and weight between layer i and layer j; $\delta_j$ represents the offset term of layer j; $\chi_j$ represents the subset of input feature maps used to calculate $\beta_i^j$; and ς stands for the activation function. The complexity of the whole network becomes higher because there are more features after the convolution layer operation. Pooling divides the image into many non-overlapping regions and then aggregates the nodes in each region; therefore, after the pooling layer, the feature dimensions are reduced while the main feature information is retained, and the complexity of the network structure is reduced:

$\lambda_l = \xi\left(w_{jl}\, \beta_i^j + \delta_l\right)$   (10)

where $\lambda_l$ represents the output of pooling layer l; ξ represents the pooling activation function; and $\delta_l$ and $w_{jl}$ represent the offset and weight of pooling layer l. In the fully connected layer, each neuron is connected to all outputs of the previous layer; placed at the end of the CNN, it realizes the final classification function. After the input data undergo convolution, pooling and other operations, the output feature vectors pass through the fully connected layer, are classified by the Softmax function, and the prediction results are output. The output of the fully connected layer is calculated as follows:

$\mu_x = \zeta(w_{xl}\, \lambda_l) + \delta_x$   (11)
where $\mu_x$ represents the output of fully connected layer x; ζ represents the fully connected layer's activation function; and $w_{xl}$ represents the connection weight. In the structural design of the 3D CNN model, the large convolution kernels and the two consecutive convolution-and-pooling stacks of the classic LeNet architectures are prone to overfitting. In this paper, depthwise separable convolution layers are used to replace the conventional convolution layers, and nonlinear modules are added to the network to compress the network parameters and accelerate the convergence
speed of the model. A batch normalization layer is added to normalize the intermediate layers toward a standard normal distribution, so as to reduce the impact of hyperparameter fluctuations, smooth the optimization space during training and accelerate network convergence. The PReLU function is chosen as the activation function, which effectively reduces the risk of overfitting and the fragility of neural units during training. Finally, the LSTM method is integrated: its ability to preserve the order of feature sequences makes the intrusion detection more accurate. After this structural adjustment of the convolutional neural network model, an intrusion detection model based on 3D CNN is established to process the input data. For the processed network data packets, the mainstream method is to build a network model based on one-dimensional convolution to process and classify the network data; in the data processing phase, this method converts the one-dimensional sequence data into a matrix form to form a classifier. The specific process is as follows, with a sketch given after these steps. The first step is data input: the data samples are preprocessed to expand the features and then converted into a 21 × 21 two-dimensional matrix as the input of the network. The second step is feature extraction, which mainly includes depthwise separable convolution, max pooling, batch normalization and the PReLU activation function; batch normalization and activation functions are nested in each convolution layer, and by discarding the original convolution operation, two separable convolutions are used for feature extraction to reduce the model training parameters. The third step is branch classification: the secondary branch and the primary network share the convolution layers, the secondary branch connects a fully connected layer behind the convolution layers, and the data are divided into two categories, normal and abnormal; the branch network makes a second classification judgment to prevent feature loss during down-sampling and to ensure that the network learns features that distinguish the various abnormal samples from the normal samples. Fourth, convolution and pooling effectively extract features, but network intrusion data also have a corresponding sequence relationship; by letting the network learn the sequence relationship between features, it can detect whether there is an intrusion.
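As an illustration of the design just described, the sketch below assembles a small PyTorch model with depthwise separable convolution blocks (a depthwise Conv2d followed by a 1 × 1 pointwise Conv2d), batch normalization, PReLU, max pooling, and a final LSTM over the flattened feature rows; all layer sizes are illustrative assumptions, not the authors' exact architecture, and the branch classifier is omitted for brevity.

```python
import torch
import torch.nn as nn

class SeparableConv(nn.Module):
    """Depthwise separable convolution: depthwise conv, then 1x1 pointwise."""
    def __init__(self, cin, cout):
        super().__init__()
        self.depthwise = nn.Conv2d(cin, cin, 3, padding=1, groups=cin)
        self.pointwise = nn.Conv2d(cin, cout, 1)
        self.norm_act = nn.Sequential(nn.BatchNorm2d(cout), nn.PReLU())

    def forward(self, x):
        return self.norm_act(self.pointwise(self.depthwise(x)))

class IntrusionNet(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            SeparableConv(1, 16), nn.MaxPool2d(2),   # 21x21 -> 10x10
            SeparableConv(16, 32), nn.MaxPool2d(2),  # 10x10 -> 5x5
        )
        # LSTM preserves the sequential relationship between feature rows.
        self.lstm = nn.LSTM(input_size=32 * 5, hidden_size=64, batch_first=True)
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):                       # x: (batch, 1, 21, 21)
        f = self.features(x)                    # (batch, 32, 5, 5)
        seq = f.permute(0, 2, 1, 3).flatten(2)  # (batch, 5, 160) row sequence
        out, _ = self.lstm(seq)
        return self.classifier(out[:, -1])      # logits: normal / abnormal

model = IntrusionNet()
print(model(torch.randn(4, 1, 21, 21)).shape)   # torch.Size([4, 2])
```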
3 Experiment Design and Result Analysis

3.1 Data Description

The KDD99 dataset is the earliest public network intrusion detection dataset. The NSL-KDD dataset is an improved version of KDD99 that removes a great deal of redundant data. In this dataset, each record has 41 attribute features and 1 category label; the category label covers one normal data category and four types of intrusion attack data. The Network Security Research Group of the Australian Centre for Cyber Security (ACCS) introduced the UNSW-NB15 dataset. In this dataset, the training set has 175,343 connection records and the test set consists of 82,337 connection records, covering 9
intrusion attack categories and 1 normal category. Each record contains 42 attribute features and 1 category label. The model diagram of the random intrusion depth detection method based on the three-dimensional convolutional neural network is shown in Fig. 3.
(The diagram shows the pipeline: data input → convolution and pooling layers for data feature extraction → sequence expansion and reduction of the data feature dimension → fully connected layer and softmax classifier trained with softmax loss + center loss → output of forecast results and intrusion detection, evaluated by accuracy, F1 and recall.)

Fig. 3. Model diagram of random intrusion depth detection method based on three-dimensional convolutional neural network
3.2 Feature Extraction Result Test

For the two data sets, the first 15 features are extracted using information gain. The information gain of each feature is shown in Table 1 below.

Table 1. Feature Extraction Results

Feature serial number | Information gain value (NSL-KDD) | Information gain value (UNSW-NB15)
1  | 16.32 | 17.41
2  | 14.21 | 15.23
3  | 14.02 | 14.21
4  | 13.65 | 14.02
5  | 12.10 | 12.52
6  | 11.25 | 11.22
7  | 10.52 | 10.52
8  | 9.52  | 8.12
9  | 8.85  | 8.04
10 | 8.62  | 7.65
The feature matrix is constructed according to the extracted features. Taking one of the data samples as an example, the characteristic matrix takes the following form:

J = (example matrix of quantized MI values, each entry in {0, 1, 2, 3, 4, 5})   (12)
3.3 Test Method Performance Test

The evaluation indicators used for intrusion detection are the accuracy (ϕ), the recall (ψ) and F1. In intrusion detection, attack samples are regarded as positive samples and non-attack samples as negative samples. TP represents the number of samples that are positive and are predicted to be positive; TN represents the number of samples that are negative and are predicted to be negative; FP represents the number of negative samples predicted to be positive; FN represents the number of positive samples predicted to be negative. Accuracy indicates the proportion of truly positive samples among all samples predicted to be positive; recall is the proportion of actual positive samples that are correctly predicted. The higher the value, the higher the reliability of the positive samples predicted by the model.

$\varphi = \dfrac{TP}{TP + FP}, \qquad \psi = \dfrac{TP}{TP + FN}$   (13)

The F1 measure is a balance point that considers the precision and the recall at the same time, so that both can reach relatively high values simultaneously. The F1 measure can be regarded as a weighted average of precision and recall, with a maximum value of 1 and a minimum value of 0.

$F1 = \dfrac{2\varphi\psi}{\varphi + \psi}$   (14)

Under the same test data set, the proposed method, an intrusion detection technology based on a Bayesian network, an intrusion detection method based on improved SMOTE, and
an intrusion detection method based on the cuttlefish optimization algorithm are used to conduct random intrusion detection of the Internet of Things, and the detection results are obtained. According to the results, the F1 value is calculated. The results are shown in Fig. 4 below.
Fig. 4. Comparison Diagram of F1 Values
It can be seen from Fig. 4 that when the research method detects the NSL-KDD data set and the UNSW-NB15 data set, its F1 value is above 0.8. When the intrusion detection technology based on the Bayesian network, the intrusion detection method based on improved SMOTE, and the intrusion detection method based on the cuttlefish optimization algorithm detect the NSL-KDD and UNSW-NB15 datasets, the F1 values of these three methods are all less than 0.8. The comparison shows that the F1 value of the research method is significantly higher than that of the other three comparison methods, indicating that the detection accuracy of this method is higher.
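For reference, the metric computation of Eqs. (13)–(14) can be sketched minimally from raw confusion counts; the counts below are placeholders, not experimental values.

```python
def f1_from_counts(tp, fp, fn):
    # Eq. (13): accuracy (precision) phi and recall psi from confusion counts.
    phi = tp / (tp + fp)
    psi = tp / (tp + fn)
    # Eq. (14): harmonic balance of precision and recall.
    return 2 * phi * psi / (phi + psi)

print(round(f1_from_counts(tp=90, fp=8, fn=12), 3))  # placeholder counts
```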
4 Conclusion

As the network environment becomes more and more complex, network security incidents occur frequently and attack means change constantly, threatening the security of the Internet of Things; the important role of IoT intrusion detection therefore becomes more and more obvious. Aiming at the low accuracy of intrusion detection in the industrial Internet of Things, a random intrusion depth detection method based on a three-dimensional convolutional neural network is proposed. According to NIDS, an Internet of Things intrusion detection model is built, and network data packets are collected and preprocessed. Random intrusion depth detection in the Internet of Things is completed by constructing the feature matrix and taking it as the input, using the 3D CNN and LSTM methods. Experimental results show that the proposed method has high detection accuracy. However, in the actual network environment, all kinds of unpredictable
emergencies are uncontrollable. Therefore, in order to further improve the stability of the network intrusion detection model in the real environment, further research is needed.
References

1. Zhang, H., Li, Y., Lv, Z., et al.: A real-time and ubiquitous network attack detection based on deep belief network and support vector machine. IEEE/CAA J. Autom. Sin. 7(3), 790–799 (2020)
2. Dong, R.H., Yan, H.H., Zhang, Q.Y.: An intrusion detection model for wireless sensor network based on information gain ratio and bagging algorithm. Int. J. Netw. Secur. 22(2), 218–230 (2020)
3. Yang, W.H.: Security detection of network intrusion: application of cluster analysis method. Comput. Opt. 44(4), 660–664 (2020)
4. Zhu, D.H., Cheng, Y.: Leak control of sensitive data in Internet of Things based on local differential privacy. Comput. Simul. 38(02), 472–476 (2021)
5. Jo, W., Kim, S., Lee, C., et al.: Packet preprocessing in CNN-based network intrusion detection system. Electronics 9(7), 1151 (2020)
6. Alhajjar, E., Maxwell, P., Bastian, N.: Adversarial machine learning in network intrusion detection systems. Expert Syst. Appl. 186(2), 115782 (2021)
7. Li, J., Wu, W., Xue, D.: An intrusion detection method based on active transfer learning. Intell. Data Anal. 24(2), 363–383 (2020)
Intelligent Measurement of Power Frequency Induced Electric Field Strength Based on Convolutional Neural Network Feature Recognition

Ying Li(B), Zheng Peng, Mancheng Yi, Jianxin Liu, Sifan Yu, and Jing Liu

Guangzhou Power Supply Bureau of Guangdong Power Grid Co., Ltd., Guangzhou 510000, China
[email protected]
Abstract. Aiming at the problem of large measurement error in existing electric field intensity measurement methods, an intelligent measurement method for power frequency induced electric field intensity based on convolutional neural network feature recognition is proposed. According to the working principle of power devices in the power environment, a mathematical model of the power frequency induced electric field is established. The power frequency induced electric field intensity signal is collected by an intelligent power frequency induced electric field intensity measuring device. A convolutional neural network is used to extract and recognize the features of the power frequency induced electric field intensity signal. Through feature matching, intelligent measurement results for the power frequency induced electric field intensity are obtained. The test results show that the average electric field intensity measurement error of the proposed method is reduced by 1.24 N/C, which solves the problem of large measurement error.

Keywords: Convolutional Neural Network · Feature Recognition · Power Frequency Induction · Electric Field Strength · Intelligent Measurement
1 Introduction

With the continuous development of the power system, there are more and more transmission lines, the system becomes increasingly complex and huge, and power system calculations place ever stricter requirements on the accuracy and reliability of line parameters. Accurate line parameters are the basis of power system calculations such as power flow calculation, fault analysis, network loss calculation and relay protection setting calculation. The measurement of power system line parameters has become an indispensable process before a new transmission line is put into operation. Moreover, the commonly used transmission line parameters are measured at the initial stage of line construction, and these parameters will change more or less due to climate, temperature, environment, geography and other factors after the line is put into operation. Therefore, it is also necessary to measure the parameters of lines already in operation.
Power frequency is the abbreviation of the power grid working frequency: the frequency of the AC power in the grid. All generators, transmission and distribution equipment and users in the same power grid use AC power of this frequency. The electric field intensity in some areas of the power frequency electric field around high-voltage substations, transmission lines and other equipment exceeds the allowable value of 5 kV/m specified in national standards. With the continuous increase in the voltage level of transmission lines, the environmental problems caused by power frequency electric fields have become increasingly prominent, becoming one of the main factors restricting the construction of ultra-high-voltage transmission projects in China. Timely and effective measurement of the electric field is an indispensable means of facing and solving this new threat. Electric field intensity is one of the important indicators of the power frequency induced electric field; it is a physical quantity used to express the strength and direction of the electric field. The direction of the electric field strength at a point in the electric field is determined by the direction of the electric field force applied to a test charge at that point, and the magnitude of the electric field strength is determined by the ratio of the force on the test charge to the amount of charge at the test point. Electric field intensity measurement refers to measuring the electric field intensity at the receiving site to obtain propagation data and parameters for the correct design of wireless circuits; it is also important work in solving electromagnetic compatibility problems. Among the existing methods for measuring power frequency induced electric field strength, the more mature research achievements include the following. Reference [1] proposes an electric field intensity sensor that can be mounted on unmanned aerial vehicles to overcome the technical difficulties of wide-range field strength measurement and wireless data transmission and to meet the test requirements of ultra-high-voltage substations; taking a post porcelain insulator as the test object, laboratory measurements of the field strength distribution are carried out at different voltage levels and positions. Reference [2] studies a measurement method for electric field intensity based on the explicit sensitivity matrix of the three-dimensional electromagnetic response, using a three-dimensional finite volume method with electric-field-coupled potential and direct solution technology: first, the finite volume method is used to discretize the electric field mixed potential equation; the large algebraic equations corresponding to the forward modeling of the moving-source electromagnetic field are established, and the interpolation and projection operators are determined; the projection operator is used to calculate the electromagnetic response of multiple emitters; and, according to the block model and the conductivity distribution characteristics of the anomalous body, the electric field intensity is calculated using the discrete vector of the projection operator and the scattering current element. However, the traditional electric field strength measurement methods mentioned above have an obvious measurement error problem, which is mainly due to inaccurate analysis of the electric field signal characteristics.
Therefore, convolutional neural network feature recognition technology is introduced.
Feature recognition refers to extracting the special information of things or phenomena. Convolutional neural network feature recognition technology applies a convolutional neural network over multiple iterations, on the basis of traditional feature recognition technology, to improve the accuracy of feature recognition. A convolutional neural network is a feedforward neural network whose artificial neurons respond to the surrounding units within a partial coverage area. The basic structure of a convolutional neural network generally includes two layers; one is the feature extraction layer, in which the input of each neuron is connected to the local receptive field of the previous layer and local features are extracted. The convolutional neural network feature recognition technology is applied to the optimized design of the intelligent measurement method for power frequency induced electric field strength, in order to improve the intelligent measurement accuracy of power frequency induced electric field strength.
2 Design of Intelligent Measurement Method for Power Frequency Induced Electric Field Strength

2.1 Establishing the Mathematical Model of the Power Frequency Induced Electric Field

Taking a substation as the research environment, the power frequency induced electric field model is built. Busbars, incoming lines, outgoing lines and other types of conductors are present in the substation. Considering that the power frequency electromagnetic field in the station is evaluated 1.5 m above the ground, the distance between the observation location and the excitation source is far greater than the geometric size of the bundled conductors, so the bundled conductors are often treated as one conductor according to the actual needs of the project. The sub-conductors of a bundle are equally spaced on the same circumference, and the equivalent radius is:

$R_{wireway} = R\sqrt[n]{\dfrac{nr}{R}}$   (1)

In the formula, R and r represent the radius values of the bundle and the sub-conductor, respectively, and n is the number of sub-conductors. Because conductor spans in the substation are small, the influence of conductor sag on the calculation of the power frequency electromagnetic field can be ignored, and the resulting calculation error does not invalidate the assessment and prediction of the power frequency electromagnetic field distribution in the substation. The electric fields of other components in the substation can be obtained by the same method. When calculating the power frequency electric field generated by a point charge, a Cartesian rectangular coordinate system is taken, and the position of the simulated point charge q0 is set as (x0, y0, z0); its coordinates are shown in Fig. 1. In the power frequency induced electric field environment shown in Fig. 1, the potential at any point can be expressed as:

$\psi = \dfrac{q}{4\pi\varepsilon R}$   (2)
Fig. 1. Mathematical model of the power-frequency-induced electric field
In formula (2), ε is the electric field coefficient, based on which the potential coefficient of a single simulated point charge is

$$\gamma = \frac{\psi}{4\pi R} \tag{3}$$
The potentials and their coefficients at all positions in the power frequency induced electric field environment are obtained according to the above method and are input into the substation structure, completing the construction of the mathematical model of the power frequency induced electric field.

2.2 Installation of Intelligent Measuring Equipment for Power Frequency Induced Electric Field Strength

Since the wavelength at power frequency is far larger than the size of the metal conductor surface, the phase difference of the electric field strength between points on the conductor surface can be ignored [3]. The structure of the intelligent measuring equipment for power frequency induced electric field strength is shown in Fig. 2.
Fig. 2. Structure diagram of intelligent measurement equipment of electric field strength
Place the induction electrode plate device in the electric field to be measured. After the motor speed stabilizes, the electrode plate generates and outputs an induced current signal of a certain amplitude. The signal passes through a current-to-voltage conversion circuit and is then filtered and amplified. A data acquisition card then collects the single-frequency signal waveform and monitors its amplitude and frequency. The amplitude of
the output voltage signal is proportional to the DC electric field strength to be measured. After calibration, the magnitude of the electric field strength to be measured can be displayed and read directly through the software interface [4]. The induction electrode plate is machined from a 1 mm thick, 60 mm diameter aluminum plate into an approximately sector shape. A hole with a diameter of 5 mm is opened in the center of the static electrode plate, which is then fixed on the motor bracket with a PVC pipe. The center of the moving electrode plate is fixed on the motor shaft with a 3 mm screw, and the leads from the two electrode plates are led out through the motor bracket and shaft, respectively. The working principle of the intelligent measuring equipment for power frequency induced electric field strength is shown in Fig. 3.
Fig. 3. Schematic diagram of the intelligent measuring equipment of electric field strength
The interface of the electric field strength intelligent measurement equipment is connected to the outer surface, side surface and inner surface of the metal sheet, respectively. The displacement current flows from the outer surface of the sheet to the ground, while the displacement currents from the inner surface and the side surface flow to the shell; the conduction current flows from the sheet to the metal conductor body through the measuring resistance. The surface integral of the conduction current density detected on the closed surface can then be expressed as

$$I_R = \oint_S \rho \, \mathrm{d}S \tag{4}$$

In formula (4), ρ is the conduction current density, S is the detection surface area, and the result IR is the total displacement current. The alternating current signal induced by the induction device is converted into a voltage signal through an I-U conversion circuit and then amplified and filtered. After conditioning, it is differentially input through the AI1 and AI19 input ports of the NI-USB6210 data acquisition card and connected to the terminal monitoring host computer through the USB plug-and-play data bus [5]. One path of the data output by the acquisition device is sent directly to the waveform graph to monitor the signal waveform in real time, and the other path is sent to the single-frequency measurement function to monitor
the signal amplitude and frequency in real time. After calibration, the amplitude of the output signal has a proportional relationship, with factor K, to the electric field strength to be measured within a certain range, and the electric field strength can be displayed directly on the interface.

2.3 Intelligent Acquisition of the Power Frequency Induced Electric Field Strength Signal

Using the intelligent measuring equipment installed in the mathematical model of the power frequency induced electric field, the real-time power frequency induced electric field strength signal is obtained. Figure 4 shows the intelligent acquisition process of the power frequency induced electric field strength signal.
Fig. 4. Flow chart of power frequency-induced electric field intensity signal acquisition
In order for the MCU to better process the measurement signal, a current-to-voltage conversion circuit is added before the filter-amplifier circuit, so that the weak electric field signal [6] output by the sensor is obtained from the induced current. To ensure the acquisition quality of the power frequency induced electric field intensity signal, the initially collected
electric field intensity signal is filtered using formula (5):

$$x_{wave} = f_{wave}(x), \qquad f_{wave} = \mathrm{med}(X) \tag{5}$$
In formula (5), fwave() is the filter function, med() is the median calculation function, and x, xwave and X represent the initially acquired electric field intensity signal, the filtered signal, and the set of initially acquired signals, respectively. This completes the acquisition and processing of the power frequency induced electric field intensity signal.
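As a concrete illustration of the median filtering in formula (5), the following minimal Python sketch applies a sliding-window median to a noisy field-strength signal; the window length, signal frequency and amplitudes are illustrative assumptions rather than parameters taken from this paper.

```python
import numpy as np

def median_filter(x, window=7):
    """Sliding-window median, a sketch of formula (5): each output sample
    is med(X) over the local window X of the raw signal x."""
    half = window // 2
    padded = np.pad(x, half, mode="edge")   # replicate edges to keep the length
    return np.array([np.median(padded[i:i + window]) for i in range(len(x))])

# 50 Hz field-strength signal with injected impulsive sampling noise
t = np.linspace(0.0, 0.1, 1000)
x = 800.0 * np.sin(2 * np.pi * 50 * t)
x[::97] += 200.0                            # outliers from acquisition glitches
x_wave = median_filter(x)                   # filtered signal, cf. x_wave in (5)
```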
2.4 Establishment of the Convolutional Neural Network

The core of a convolutional neural network is the convolution and pooling operations. The convolution layer is composed of multiple small-scale convolution kernels. During network training, the convolution kernels update their own parameters through multiple iterations; after the model converges, they automatically extract the two-dimensional key features of the sample and generate multiple feature maps. The pooling layer uses the principle of local image correlation to subsample the features, reducing the scale of the neural network and the amount of data to be processed. After multiple convolution and pooling operations, the hidden feature information is fully extracted, and the classification results are output through the fully connected layer and the classifier. Figure 5 shows the basic architecture of the convolutional neural network.

Fig. 5. Convolutional neural network architecture diagram
As shown in Fig. 5, the entire convolutional neural network consists of six layers: the first layer is the input layer, the second and fourth layers are convolution layers, the third and fifth layers are subsampling (pooling) layers, and the sixth layer is the fully connected layer. In the convolution layer, the input feature map is convolved with the convolution filter; the convolution result plus a bias term is used as the input of the activation function, and after the activation function is applied, the output feature map of the layer is obtained. The convolution kernel is a matrix of order l, containing l × l trainable parameters [7]. For an input feature map of size m × n convolved with an l × l kernel, the size of the output feature map is (m − l + 1) × (n − l + 1),
and the expression form of the convolution layer is

$$y_j^l = g\Big(\sum_{i \in M_j} x_i^l * k_{ij} + \varphi\Big) \tag{6}$$
In formula (6), g() is the activation function, Mj is the set of input feature maps, xil is the input data, and kij and φ are the convolution kernel and bias term, respectively. In the subsampling layer, the input features are downsampled: the extracted features are mapped into smaller planes to simplify the network structure and reduce the computational scale. Subsampling samples the input features at a certain step size rather than continuously; generally, the step size equals the width of the sampling kernel. If the sampling kernel size is r × r, the input feature map is divided into several r × r sub-regions, each of which is mapped through the sampling function to a single output value, reducing each side of the output feature map to 1/r of the input. The expression form of the subsampling layer is as follows:
$$y_j^l = \omega_j^l \, h\big(x_j^l\big) \tag{7}$$

In formula (7), ωjl is the weight of the sampled data and h() is the sampling function. The working principle of the downsampling layer is shown in Fig. 6.
Fig. 6. Schematic diagram of the downsampling layer
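To make the convolution of formula (6) and the subsampling of formula (7) concrete, here is a minimal numpy sketch; the 28 × 28 input, 5 × 5 kernel, tanh activation and 2 × 2 mean pooling are illustrative assumptions, not the configuration set later in the paper.

```python
import numpy as np

def conv2d_valid(x, kernel, bias=0.0):
    """'Valid' 2-D convolution as in formula (6): an m x n input convolved
    with an l x l kernel gives an (m - l + 1) x (n - l + 1) feature map."""
    m, n = x.shape
    l = kernel.shape[0]
    out = np.empty((m - l + 1, n - l + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + l, j:j + l] * kernel) + bias
    return np.tanh(out)  # g(): a nonlinear activation

def subsample(x, r=2):
    """r x r mean subsampling as in formula (7): each r x r sub-region maps
    to one value, shrinking each side of the feature map to 1/r."""
    m, n = x.shape
    x = x[:m - m % r, :n - n % r]
    return x.reshape(m // r, r, n // r, r).mean(axis=(1, 3))

x = np.random.randn(28, 28)                    # input feature map
fmap = conv2d_valid(x, np.random.randn(5, 5))  # -> 24 x 24
pooled = subsample(fmap)                       # -> 12 x 12
```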
Following the same approach, the working principles of the fully connected layer and the output layer of the convolutional neural network are obtained, completing the construction of the network.

2.5 Extraction and Identification of Electric Field Strength Signal Characteristics

The convolutional neural network is used to extract the characteristics of the intelligently acquired power frequency induced electric field intensity signal. The feature extraction and recognition process of a convolutional neural network can generally be divided into two steps: forward propagation and back propagation. Forward propagation inputs the acquired power frequency induced electric field strength signal into the input
layer of the convolutional neural network; through the convolution operation, the feature representation of the previous layer is re-mapped so that the dependence on exact feature locations is reduced. After the convolution operation, a bias may be added, and the new features are passed through a nonlinear activation function to form the new feature map, which serves as the input of the next subsampling layer [8]. The output of the convolution layer is fed directly into the downsampling layer to complete the statistics and extraction of features at different positions of the signal, that is, the convolution features are pooled to reduce their dimension. Finally, through a Softmax classifier, the initial extraction results of the electric field strength signal features are obtained.

The back propagation algorithm is the main algorithm used in CNN training. First, the corresponding values are obtained from the input training data through forward calculation; then the error is propagated backwards, the gradients with respect to the weights and biases are calculated, and the weights and biases of the network are adjusted according to the gradients. Throughout training, forward and backward calculation alternate until the deviation falls within a threshold range or the required number of iterations is reached, at which point iteration stops. In order to make the network converge effectively, it is necessary to set the optimization objective function. For a dataset Z, the objective function is set to the average loss over all data in the dataset:

$$\lambda(Z) = \frac{1}{|Z|} \sum_{i}^{|Z|} g_Z\big(X^{(i)}\big) + \eta(Z) \tag{8}$$
In formula (8), gZ() is the loss function of the electric field intensity signal data: the loss of each individual sample X is computed, summed, and averaged. η(Z) is a regularization term set to attenuate overfitting [9]. Using stochastic gradient descent, a linear combination of the negative gradient ∇λ(Z) and the last weight increment is computed to update the weights. The iterative formula is as follows:

$$\Delta_{t+1} = w \Delta_t - \alpha \nabla \lambda(Z), \qquad \omega_{t+1} = \omega_t + \Delta_{t+1} \tag{9}$$

In the formula, Δt is the weight increment of the last update, and w and α correspond to the weight of the previous increment and the learning rate of the convolutional neural network, respectively. Through the forward and backward propagation of the convolutional neural network, the extraction results of eigenvectors such as the peak factor and margin factor of the power frequency induced electric field intensity signal are obtained, which can be quantified as

$$\tau_{peak} = \frac{x_p}{x_{rms}}, \qquad \tau_{Margin} = \frac{x_p}{x_r} \tag{10}$$

In the formula, xp, xr and xrms represent the peak, root-amplitude and effective (RMS) values of the electric field intensity signal, respectively. Similarly, through multiple iterations of the convolutional neural network, the extraction results of the other eigenvectors of the power frequency induced electric field intensity signal can be obtained.
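The following short Python sketch illustrates the momentum-style weight update of formula (9) and the peak and margin factors of formula (10). The definitions of x_p, x_r and x_rms follow the usual peak, root-amplitude and RMS conventions, which is an assumption on my part, and the gain and learning-rate values are illustrative.

```python
import numpy as np

def momentum_step(omega, delta_prev, grad, w=0.9, alpha=0.001):
    """One update per formula (9): the new increment is a linear combination
    of the previous increment (weighted by w) and the negative gradient of
    the objective lambda(Z), scaled by the learning rate alpha."""
    delta = w * delta_prev - alpha * grad
    return omega + delta, delta

def peak_and_margin(x):
    """Peak factor and margin factor per formula (10)."""
    x_p = np.max(np.abs(x))                  # peak value
    x_rms = np.sqrt(np.mean(x ** 2))         # effective (RMS) value
    x_r = np.mean(np.sqrt(np.abs(x))) ** 2   # root-amplitude value
    return x_p / x_rms, x_p / x_r

omega, delta = np.zeros(3), np.zeros(3)
omega, delta = momentum_step(omega, delta, grad=np.array([0.1, -0.2, 0.05]))

signal = 800.0 * np.sin(2 * np.pi * 50 * np.linspace(0, 0.1, 1000))
print(peak_and_margin(signal))               # peak factor ~1.414 for a pure sine
```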
The final extracted electric field intensity signal features are matched against the standard signal features to complete the feature identification of the field strength signal. The feature matching process can be quantified as

$$s(\tau_{extract}, \tau_{standard}) = |\tau_{extract} - \tau_{standard}|^2 \tag{11}$$

In formula (11), τextract and τstandard are the extracted field strength signal features and the preset standard field strength signal features, respectively. This completes the extraction and identification of the electric field intensity signal characteristics.

2.6 Realizing the Intelligent Measurement of the Power Frequency Induced Electric Field Intensity

The identified signal features are now used for the intelligent measurement of the power frequency induced electric field. Through the division of the conductors and the search of the simulated charges, the power frequency electric field intensity at a desired point P(xp, yp, zp) in space can be deduced in each direction:

$$\begin{cases} E_{px} = \dfrac{1}{4\pi\varepsilon_0} \displaystyle\int_0^L \dfrac{\beta\,(x_p - x_1 - kt)}{\big(t^2 + \kappa_0 t + \kappa_1\big)^{3/2}}\, \mathrm{d}t \\[2mm] E_{py} = \dfrac{1}{4\pi\varepsilon_0} \displaystyle\int_0^L \dfrac{\beta\,(y_p - y_1 - mt)}{\big(t^2 + \kappa_0 t + \kappa_1\big)^{3/2}}\, \mathrm{d}t \\[2mm] E_{pz} = \dfrac{1}{4\pi\varepsilon_0} \displaystyle\int_0^L \dfrac{\beta\,(z_p - z_1 - nt)}{\big(t^2 + \kappa_0 t + \kappa_1\big)^{3/2}}\, \mathrm{d}t \end{cases} \tag{12}$$

In formula (12), L is the length of the linear charge, (x1, y1, z1) is the coordinate of the line segment endpoint, k, m and n represent the direction components of the charged line segment in x, y and z, respectively, and β is a constant coefficient whose specific value is related to the identified electric field signal characteristics. The parameters κ0 and κ1 are calculated as follows:
$$\begin{cases} \kappa_0 = -2\big[k(x_p - x_1) + m(y_p - y_1) + n(z_p - z_1)\big] \\ \kappa_1 = (x_p - x_1)^2 + (y_p - y_1)^2 + (z_p - z_1)^2 \end{cases} \tag{13}$$

According to the superposition theorem, the electric field strengths generated by the three-phase conductor segments are added in the x, y and z directions [10]. The components of the electric field intensity generated at point P(xp, yp, zp) in each direction are expressed as

$$E_{iN} = \sum_{i=1}^{N} E_{pi} \quad (i = x, y, z) \tag{14}$$
Finally, from the electric field intensities in the x, y and z directions at any point in space, the total electric field intensity E at that position is obtained:

$$E = \sqrt{E_{xN}^2 + E_{yN}^2 + E_{zN}^2} \tag{15}$$

Formulas (12)-(15) together yield the intelligent measurement results of the power frequency induced electric field intensity at all positions in the electric field environment. Finally, the external environmental interference factors are collected to compensate for the measurement error of the electric field strength. The compensation process can be expressed as

$$E_{out} = E \pm \vartheta \tag{16}$$

In the formula, E is the direct electric field intensity measurement result and ϑ is the measurement error. Following the above process, accurate intelligent measurement results of the power frequency induced electric field intensity are obtained.
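A minimal numerical sketch of formulas (12)-(16) is given below: each field component is integrated along a line-charge segment, the components are superposed per formula (14), and the total magnitude follows formula (15). The segment data, observation point and zero error compensation are illustrative assumptions.

```python
import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity

def segment_field(p, p1, d, L, beta, steps=2000):
    """E components at point p from one line-charge segment (formula (12));
    p1 is the segment endpoint, d = (k, m, n) its direction components
    (assumed to be a unit vector), L its length, beta the charge
    coefficient. kappa0 and kappa1 follow formula (13)."""
    p, p1, d = map(np.asarray, (p, p1, d))
    kappa0 = -2.0 * np.dot(d, p - p1)
    kappa1 = np.sum((p - p1) ** 2)
    t = np.linspace(0.0, L, steps)
    denom = (t ** 2 + kappa0 * t + kappa1) ** 1.5
    coef = beta / (4 * np.pi * EPS0)
    comps = []
    for i in range(3):  # trapezoidal integration of each component
        f = (p[i] - p1[i] - d[i] * t) / denom
        comps.append(coef * np.sum((f[:-1] + f[1:]) / 2 * np.diff(t)))
    return np.array(comps)

# one illustrative segment: endpoint, unit direction, length (m), coefficient
segments = [((0.0, 0.0, 10.0), (1.0, 0.0, 0.0), 30.0, 1e-9)]
E_xyz = sum(segment_field((5.0, 1.5, 0.0), *seg) for seg in segments)  # (14)
E_total = np.linalg.norm(E_xyz)   # formula (15)
E_out = E_total - 0.0             # formula (16): subtract the calibrated error
```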
3 Experimental Analysis of the Measurement Accuracy Test

To test the field strength measurement accuracy of the intelligent measurement method of power frequency induced electric field strength based on convolutional neural network feature recognition, a measurement accuracy test is designed as a comparative experiment to verify whether the optimized method achieves the expected effect.

3.1 Configuration of the Power Frequency Induced Electric Field Environment

The power grid of a city is selected as the research environment for the experiment. The voltage levels of the grid comprise three parts: 500 kV, 220 kV and 10 kV. In the grid environment there is one 500 kV substation with a main transformer capacity of 2 × 1500 MVA, ten 220 kV substations with a total main transformer capacity of 5470 MVA, and fifteen 10 kV substations with a main transformer capacity of 3587 MVA. The total line lengths at the three voltage levels are 687.4 km, 763.9 km and 178.9 km, respectively. Through parameter setting, all power elements in the grid are kept at the same working frequency. According to the operation of each power element in the grid, the configuration of the power frequency induced electric field environment is obtained, as shown in Fig. 7. The actual electric field strength and direction at each node of the configured environment are marked as the criteria for judging whether the output of a measurement method is accurate.

3.2 Structural Parameters of the Input Convolutional Neural Network

Because the optimized intelligent measurement method of power frequency induced electric field strength uses the convolutional neural network feature recognition
Fig. 7. Distribution diagram of the power frequency induced electric field: (a) 500 kV electric field environment; (b) 220 kV electric field environment; (c) 10 kV electric field environment
method, it is necessary to set the structural and operating parameters of the convolutional neural network. The structural parameters to be set include the number of convolution kernels, the initial weights, the batch training size, and the learning rate. The number and size of the convolution kernels is the core issue of a convolutional neural network. The kernel size is the area covered by one convolution kernel on the feature map during a convolution operation; after each operation, the kernel moves by the step size and the calculation is repeated. Traversing a feature map with one convolution kernel yields one output feature map, so the number of feature maps equals the number of convolution kernels in the convolution layer. The batch training size is the amount of data contained in each batch, and the learning rate is crucial to the training process of a deep CNN. If the learning rate is set too large, the network recognition rate tends to oscillate, making it difficult for the network to converge to the minimum point; conversely, if it is set too small, training requires too many iterations and a great deal of time. Here, the number of convolution kernels is set to 60, the initial
weight value to 2.0, the batch training size to 20, and the learning rate to 0.001.

3.3 Laying Out the Power Frequency Induced Electric Field Strength Measuring Points

Multiple measuring points are randomly selected in the configured power frequency induced electric field environment, and the designed intelligent measuring equipment is installed in the field environment. The specific layout of the measuring points is shown in Fig. 8.
Fig. 8. Real scene layout of power frequency induction electric field intensity measurement points
In the experimental environment shown in Fig. 8, the installed measuring equipment is adjusted to ensure its normal operation, and the true value of the electric field strength at each measuring point position is obtained.

3.4 Measurement Process of the Power Frequency Induced Electric Field Intensity

After the correction of the power frequency induced electric field intensity measurement equipment and program is completed, measurement is first carried out in the 500 kV electric field environment. Meanwhile, through the operation of the convolutional neural network feature recognition technology on the electrical equipment and measurement equipment in the field environment, the visual intelligent measurement results of the electric field strength are obtained, as shown in Fig. 9. The above procedure yields the intensity measurement results of multiple measuring points in the electric field environment.
Fig. 9. Intelligent measurement results of power-frequency-induced electric field intensity
3.5 Setting the Field Strength Measurement Accuracy Test Index and Comparison Items

The measurement error of the electric field intensity is set as the quantitative test index of the experiment:

$$\Delta E = |E_{out} - E_{set}| \tag{17}$$
In formula (17), Eout and Eset represent the measurement result output by a measurement method and the set electric field intensity value, respectively. The lower the value of ΔE, the smaller the measurement error of the corresponding method, that is, the higher its measurement accuracy. To reflect the advantages of the intelligent measurement method of power frequency induced electric field intensity based on convolutional neural network feature recognition, an electric field intensity measurement method based on the Rydberg atomic quantum effect and the methods of references [1] and [2] are selected as comparison items; the electric field intensity measurement results output during the comparison process are shown in Fig. 10, and the output results of the other conventional measurement methods can be obtained in the same way.

3.6 Analysis of the Measurement Accuracy Test Results

Through the statistics of the relevant data, the test and comparison results reflecting the intelligent measurement accuracy of the power frequency induced electric field intensity are obtained, as shown in Table 1. Substituting the data in Table 1 into formula (17), the average measurement errors of the three comparison methods are 1.66 N/C, 1.41 N/C and 1.53 N/C (Table 1, left to right), while the average measurement error of the intelligent measurement method based on convolutional neural network feature recognition is 0.29 N/C. This proves that, compared with the traditional electric field strength measurement methods, the optimized design measurement method has higher measurement accuracy.
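The average errors quoted above can be checked directly from the data in Table 1; the short sketch below applies formula (17) to each measuring point and averages per method, with the column order as in the table.

```python
import numpy as np

# Table 1 columns: set value, Rydberg-atom method, reference [1] method,
# reference [2] method, proposed CNN-based method (all in N/C)
table = np.array([
    [857.5, 855.4, 856.1, 855.7, 857.3],
    [623.8, 621.6, 622.5, 622.3, 623.5],
    [756.1, 754.3, 754.8, 754.4, 755.7],
    [474.2, 473.1, 472.7, 473.0, 473.7],
    [567.4, 566.3, 565.9, 565.8, 567.1],
    [575.8, 573.2, 574.3, 574.1, 575.6],
    [674.9, 673.7, 673.6, 673.3, 674.7],
    [803.2, 802.0, 801.7, 802.1, 803.0],
])
errors = np.abs(table[:, 1:] - table[:, :1])  # formula (17), point by point
print(errors.mean(axis=0))                    # -> about [1.66, 1.41, 1.53, 0.29]
```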
Fig. 10. Output interface of the conventional electric field strength measurement method
Table 1. Test results of the intelligent measurement accuracy of the power frequency induced field strength

| Measuring point No. | Set actual field intensity (N/C) | Rydberg atomic quantum effect method (N/C) | Reference [1] method (N/C) | Reference [2] method (N/C) | Proposed CNN-based method (N/C) |
|---|---|---|---|---|---|
| 1 | 857.5 | 855.4 | 856.1 | 855.7 | 857.3 |
| 2 | 623.8 | 621.6 | 622.5 | 622.3 | 623.5 |
| 3 | 756.1 | 754.3 | 754.8 | 754.4 | 755.7 |
| 4 | 474.2 | 473.1 | 472.7 | 473.0 | 473.7 |
| 5 | 567.4 | 566.3 | 565.9 | 565.8 | 567.1 |
| 6 | 575.8 | 573.2 | 574.3 | 574.1 | 575.6 |
| 7 | 674.9 | 673.7 | 673.6 | 673.3 | 674.7 |
| 8 | 803.2 | 802.0 | 801.7 | 802.1 | 803.0 |
The reason for this advantage is that the mathematical model of the power frequency induced electric field is established according to the working principle of the power devices in the power environment, which provides a basic model for subsequent accurate measurement; through feature matching, the intelligent measurement result of the power frequency induced electric field intensity is obtained, which further solves the problem of large measurement error.
4 Conclusion

The difficulty of electric field intensity measurement is that the probe placed in the field to be measured easily causes distortion through electrostatic induction; in addition, the subsequent signal processing is complicated by impedance, temperature drift and other factors. Through the application of convolutional neural network feature recognition technology, intelligent and accurate measurement of the electric field strength is realized, providing reference data for the setting and management of power components, electric fields and other parameters.

Acknowledgement. Science and Technology Project of China Southern Power Grid Co., Ltd. (GZHKJXM20200058).
References

1. Goncharenko, I.A., Ryabtsev, V.N.: High-frequency electric field intensity sensor based on double-slot waveguides filled with electro-optical polymer. Quantum Electron. 51(11), 1044–1050 (2021)
2. Zhong, Y., Deng, W., Liu, W., et al.: Research on measurement technology of electric field strength sensor based on UAV. Electr. Eng. (21), 146–148 (2022)
3. Chen, B., Wang, H., Yang, S., et al.: An efficient algorithm of three-dimensional explicit electromagnetic sensitivity matrix in marine controlled source electromagnetic measurements. Acta Physica Sinica (6), 356–371 (2021)
4. Yu, H., Zhao, M., Wu, X., et al.: Influences of crack-face electric boundary conditions on stress intensity factors of ferroelectric single crystals. Appl. Math. Model. 101, 380–405 (2022)
5. Zhang, G., Wu, P., Sun, J., et al.: Effect of local electric field on radial oscillation of electron beam in low-magnetic-field foilless diode. IEEE Trans. Plasma Sci. PP(99), 1–5 (2020)
6. Liu, S., Wang, S., Liu, X., Gandomi, A.H., Daneshmand, M., Muhammad, K., De Albuquerque, V.H.C.: Human memory update strategy: a multi-layer template update mechanism for remote visual monitoring. IEEE Trans. Multimedia 23, 2188–2198 (2021)
7. Liu, S., Wang, S., Liu, X., et al.: Fuzzy detection aided real-time and robust visual tracking under complex environments. IEEE Trans. Fuzzy Syst. 29(1), 90–102 (2021)
8. Gao, P., Li, J., Liu, S.: An introduction to key technology in artificial intelligence and big data driven e-learning and e-education. Mob. Netw. Appl. (2021)
9. Lu, B., Gao, S., Song, L., et al.: The study on a new method for measurement of electric fast transient voltage based on electric induction of metal slice. IEEE Trans. Ind. Electron. PP(99), 1 (2020)
10. Li, H., Li, W., Ma, Q., et al.: Attenuation of plasmaspheric hiss associated with the enhanced magnetospheric electric field. Ann. Geophys. 39(3), 461–470 (2021)
Research on Motion Stability Control Algorithm of Multi-axis Industrial Robot Based on Deep Reinforcement Learning

Rong Zhang1(B) and Xiaogang Wei2

1 Jiangmen Polytechnic, Jiangmen 529000, China
[email protected]
2 Chongqing Vocational Institute of Tourism, Chongqing 409000, China
Abstract. When a multi-axis industrial robot is disturbed externally, problems such as large heading angle deviation, time-consuming motion control and long one-way adjustment distance arise. Therefore, a motion stability control algorithm for multi-axis industrial robots based on deep reinforcement learning is proposed. The structural characteristics of the multi-axis industrial robot are designed, the reachability of the expected foot motion range is considered, and the motion trajectory planning mode of the robot is optimized. According to the expected movement speed and angular velocity of the center of mass specified in advance, the position vector of the nominal foothold of the robot's foot end is calculated. Based on deep reinforcement learning, the dynamic equation is constructed, and the motion stability control algorithm is designed by describing the rate of change of the momentum of the center of mass. Experimental results show that the average heading angle deviation of the proposed algorithm is 5.92°, the average motion control time is 2.57 s, and the average one-way adjustment distance is 31.78 mm, indicating that the proposed method offers good fault tolerance in motion stability control and can effectively improve its accuracy and efficiency.

Keywords: Deep Reinforcement Learning · Multi-Axis Industrial Robot · Motion Stability Control · Structural Features · Motion Trajectory
1 Introduction

In the face of industrial transformation and upgrading, the manufacturing industry in particular is driving the ever wider application of multi-axis systems such as CNC machining centers and industrial robots. For multi-axis industrial robots, stability refers to the ability to maintain a stable posture during motion, such as the anti-overturning ability of a legged robot. The more spindles a multi-axis system has, the higher the complexity of the control and the more difficult it is to coordinate and control each axis. Stable motion of the multi-axis robot is the basis for the robot to complete its tasks. Due to the special structure of the legged mobile robot, maintaining
continuous, stable walking is an important challenge that must be overcome in robot research. As a typical multi-axis system, an industrial robot usually requires multiple axes to move in coordination at the same time, which reduces errors, ensures processing quality and improves processing speed; this is an objective production need. Motion stability control of multi-axis industrial robots effectively ensures the safe and reliable operation of the robots, so studying it is of great significance.

At present, scholars in related fields have carried out research on robot motion stability control and achieved certain results. Reference [1] performs kinematic analysis on a three-dimensional model of an industrial robot, establishes the D-H parameter table, and calculates the inverse kinematic solution; the 3D model plug-in is imported into Simulink, and the motion control of the industrial robot is realized by building a mechanical control model and improving the physical model of the motor drive. This method has a certain validity and feasibility, but its heading angle deviation is relatively large. Reference [2] uses an embedded ARM industrial computer as the host, develops in C++, and realizes motion stability control of the industrial robot through a NURBS real-time interpolation algorithm. This method has a certain practicability and generality, but its motion control takes a long time. Reference [3] proposes a multi-layer neural network trajectory tracking control method for industrial robots based on iterative learning control: a multi-layer neural network is trained on the desired trajectories and the corresponding refined input trajectories to model the dynamical inverse of the nonlinear inner-loop dynamics. Tracking performance still improves when the trained network is applied to a physical robot, but not as much as in the simulator.

Aiming at the above problems, a motion stability control algorithm for multi-axis industrial robots based on deep reinforcement learning is proposed. By designing the structural features of the multi-axis industrial robot, the motion trajectory planning mode is optimized and the position vector of the nominal foothold of the foot end is calculated. The dynamic equation is constructed based on deep reinforcement learning, and the motion stability control algorithm is designed. The motion stability control of this method has better fault tolerance, higher accuracy and higher efficiency.
2 Design of the Structural Features of Multi-axis Industrial Robots

The motion of a multi-axis industrial robot depends on the design of its overall structure, which affects its motion mode, motion speed, kinematic and dynamic equations, workspace, load conditions and other motion properties. Its motion stability mainly refers to the performance of always maintaining a stable posture when moving under a pre-specified gait. If a multi-axis industrial robot is to leave the laboratory and complete tasks in various complex natural environments, it not only needs to walk stably on flat ground but also needs the ability to avoid obstacles autonomously; the overall structure of the robot is therefore particularly important. The gait of a multi-axis industrial robot refers to the stepping pattern during the process of walking.
According to the duty ratio, the gait of the robot can be divided into different gaits imitating the motion gaits of animals; the duty ratio is the ratio of the time a single leg is supported on the ground to the total time of one motion cycle. The overall structure of a multi-axis industrial robot generally consists of a body and legs: the body carries the mounting of the legs and the layout and fixing of hardware accessories such as the lidar and depth cameras, while a single leg contains the mechanical transmission of the leg and its drive. When the robot moves statically, at least three legs must be in a supporting state at any time, and stability is usually judged by the support triangle: the support-triangle stability criterion states that the projection of the center of mass on the plane of the support points must lie within the support triangle for the system to maintain balance and static stability. The design of the single-leg structure and the drive transmission mode directly affect the difficulty of the robot control algorithm, so the leg structure and the drive transmission are the most important parts of the design. Because of its simplicity, this criterion is still in use today, especially in scenarios with high safety and stability requirements that call for static motion. The robot encounters various complex working environments during movement; a reasonable structural design of the whole machine directly reduces friction between parts, reduces the energy loss of the system, and improves movement speed and efficiency, and it also greatly affects the power components, the size of the actuators, the force state and the overall layout [3, 4]. When the projection of the center of gravity on the support area lies inside the support triangle, the stability margin is positive; when it lies outside, the margin is negative, and the larger the stability margin, the more stable the robot. From this, the structural feature design requirements of the multi-axis industrial robot are obtained, as shown in Fig. 1.
Fig. 1. Schematic diagram of structural feature design requirements for multi-axis industrial robots
Therefore, it is very important to carry out a reasonable mechanical structural design of the whole robot on the basis of meeting these structural feature design requirements. On the one hand, it is necessary to ensure that
the robot output unit can meet the requirements of the work, that its key components have sufficient strength and rigidity, and that, on the basis of meeting the working requirements of the system, the weight of each component unit is reduced as much as possible to facilitate control of the robot's motion state. The triangular stability criterion based on the projected point of the centroid is only applicable when the robot is free from external forces and walks statically at a constant speed and in a constant direction over horizontal, flat terrain. On the other hand, reasonable use of the limited space on the whole machine ensures that the unit components are connected effectively and reliably and transmit motion accurately; the internal structure should also be compact and stable, and the appearance neat. During movement, considering that the foot end must reach the expected range of motion and avoid friction and collision with other links, each leg of the robot is given three degrees of freedom. When the robot is subjected to an interference force, undergoes acceleration, or the contact points of the feet are not in the same plane, the resultant force point is often not at the center of mass of the body, so the stability of the robot must be judged through motion analysis. Starting from the project requirements, the leg structure not only needs sufficient strength and stiffness but also a large load-bearing capacity so that heavier materials can be transported during work. When the robot needs to accelerate, torques applied at the joints generate reaction forces at the foot ends; the zero-moment point can therefore be defined as the resultant force point of the foot forces on the hypothetical support plane and obtained by solving the robot's equations of motion. Considering that the robot may face a variety of complex terrains, it needs good motion stability, and the single-leg structure also needs a certain buffering capacity. In addition, the single leg should have a simple structure, flexible movement, and easy control.
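As a small illustration of the support-triangle criterion discussed in this section, the following Python sketch tests whether the ground projection of the center of mass lies inside the triangle of the three supporting feet and returns the sign of the stability margin; the coordinates are illustrative.

```python
import numpy as np

def stability_margin_sign(com_xy, feet_xy):
    """Support-triangle test: +1 when the centre-of-mass projection lies
    inside the triangle of the three support feet (statically stable),
    -1 when it lies outside."""
    a, b, c = (np.asarray(f, dtype=float) for f in feet_xy)
    p = np.asarray(com_xy, dtype=float)

    def cross(o, u, v):  # z component of (u - o) x (v - o)
        return (u[0] - o[0]) * (v[1] - o[1]) - (u[1] - o[1]) * (v[0] - o[0])

    s1, s2, s3 = cross(a, b, p), cross(b, c, p), cross(c, a, p)
    inside = (s1 >= 0 and s2 >= 0 and s3 >= 0) or (s1 <= 0 and s2 <= 0 and s3 <= 0)
    return 1 if inside else -1

# three supporting feet and the CoM ground projection (metres, illustrative)
print(stability_margin_sign((0.10, 0.05), [(0.3, 0.2), (0.3, -0.2), (-0.3, 0.0)]))
```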
3 Optimizing the Motion Trajectory Planning Mode of Multi-axis Industrial Robots

The motion mode of a multi-axis industrial robot is usually to first plan the motion trajectory of the foot end, then use inverse kinematics to calculate the angle of each joint, and finally use the servo controller to realize joint position servoing. The swing phase refers to the period from when the robot lifts a leg so that the foot leaves the ground until the airborne leg touches the ground again. The purpose of swing-phase trajectory planning is to give the swing leg enough ground clearance to avoid obstacles and to reduce the energy loss during movement. To plan the motion trajectory of the foot end, the desired landing point of the robot must first be calculated [5, 6]; since the legs are not affected by environmental forces during the swing, higher speeds and accelerations can be pursued. During the triangular gait, the position vector of the nominal landing point of the robot's foot can be calculated according to the expected movement velocity and angular velocity of the center of mass specified manually in advance:
$$d_\varepsilon = \varphi + \frac{1}{\varepsilon} \tag{1}$$
In formula (1), φ represents the position of the foothold and ε represents the projected position of the hip joint on the support surface. The parameters of swing-phase trajectory planning are the starting point, the foothold, the swing amplitude and the function equation. The position difference between the landing point and the starting point determines the gait length and affects the movement speed of the robot. When the robot is in a diagonal gait, the two diagonally moving legs can be seen as one virtual leg, which simplifies the walking process into a spring-loaded inverted pendulum model; the midpoint of the movement is called the support center point. When the foothold is exactly at the support center point, the robot maintains a constant speed; when the landing point is in front of the center point, the robot accelerates. The function equation determines the smoothness and real-time adjustability of the swing-phase trajectory. To adjust the gait length with the speed of the robot, the gait length is calculated as
$$R = \frac{\gamma W}{2} + \frac{|h - 1|^2}{2} - \kappa\,(h - \eta) \tag{2}$$
In formula (2), γ represents the swing-phase gait duty ratio, W represents the gait cycle, κ represents the control gain, h represents the desired motion speed of the robot, and η represents the starting position of the foot in a cycle. When the landing point is behind the center point, the robot decelerates. The position of the center point can be obtained from the body movement speed, so during normal walking the foothold should satisfy the following relationship:

$$Y = \frac{1}{\mu} \times \big(\varphi - \lambda^2\big) \tag{3}$$

In formula (3), μ represents the support center point, ϕ represents the standing time of the support leg, and λ represents the desired motion speed of the robot. When moving on the ground, the swing amplitude can be slightly smaller than the maximum leg-lift height of the robot. Once the starting point, gait length and swing amplitude are known, the function equation can use a cycloid function, and the motion trajectory of the foot end is obtained as

$$Q = \sigma \times \sin\frac{2\pi\big(R - \frac{\gamma W}{2}\big)}{2} \tag{4}$$

In formula (4), σ represents the leg-lift height and R represents the forward length. So that the robot can resume normal motion when subjected to an external disturbance, the capture point is introduced into the calculation formula of the nominal landing point. Therefore, considering the given movement speed
and the detected movement disturbance, the appropriate foothold can be calculated. The disturbed foothold position is given by

$$U = \frac{1}{g} \times (g - \psi)^2 - \frac{1}{\psi} \tag{5}$$

In formula (5), g represents the height of the robot's hip joint and ψ represents the swing time of a single leg. The starting point of the foothold position is calculated from the projected position of the hip joint on the support surface rather than from the actual position in world coordinates, which avoids leg drift caused by calculation errors. The support phase refers to the period during which a leg of the multi-axis industrial robot supports the body on the ground. If support-phase trajectory planning is carried out directly and converted into joint angles for position control, a rigid impact is easily imposed on the robot, damaging the internal structure and hindering stable motion control. When the robot moves forward, the legs need to reach a desired position in the forward direction within a certain time, calculated as described above [7, 8]. Therefore, impedance control is used to ensure compliant contact between the leg and the ground at touchdown, which effectively reduces the impact. With traditional position control, high stiffness improves control accuracy, but it may damage the body. To move the robot to the desired position, the trajectory of the swing leg must be planned. For higher stability, the swing trajectory of the foot end must have zero velocity at the initial moment of leg lift and at the moment of touchdown, and enough height to clear obstacles on the road. To avoid rigid impact, the control must be compliant. There are two main types of compliance control: passive and active. Passive compliance is mainly achieved by springs, which places high demands on the mechanical design of the robot; active compliance is mainly achieved through control algorithms, using a virtual spring structure connecting internal action points of the robot to generate corresponding virtual forces that drive the robot to move.
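To make the swing-phase planning idea concrete, here is a minimal sketch of a cycloid-style foot trajectory with zero foot velocity at lift-off and touchdown, in the spirit of formulas (2)-(4); the gait length, leg-lift height and swing duration are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def swing_foot(start_x, foothold_x, lift_height, T, t):
    """Cycloid swing trajectory: the foot advances from start_x to
    foothold_x and returns to the ground with zero velocity at both
    ends of the swing, peaking at lift_height mid-swing."""
    s = np.clip(t / T, 0.0, 1.0)  # normalized swing phase in [0, 1]
    x = start_x + (foothold_x - start_x) * (s - np.sin(2 * np.pi * s) / (2 * np.pi))
    z = lift_height * (1.0 - np.cos(2 * np.pi * s)) / 2.0
    return x, z

for t in np.linspace(0.0, 0.3, 7):  # 0.3 s swing, 0.12 m step, 0.05 m lift
    x, z = swing_foot(0.0, 0.12, 0.05, 0.3, t)
    print(f"t={t:.2f}s  x={x:.3f}m  z={z:.3f}m")
```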
4 Constructing Dynamic Equations Based on Deep Reinforcement Learning

The deep reinforcement learning algorithm itself requires the agent to obtain information autonomously by interacting with the environment, without external supervision; the stronger the exploration ability of the algorithm, the more fully the environmental information is understood, the higher the reward, and the better the learned strategy. In a deep reinforcement learning algorithm, the action network is responsible for generating actions, and the evaluation network is responsible for evaluating them; the goal of learning is that, under the guidance of the evaluation network, the action network learns the optimal motion strategy. Although some control methods can adjust the trajectory of the foot according to the speed of the robot, the entire modeling and control process still requires complicated and tedious manual tuning, cannot find the optimal control strategy for the terrain environment, and has poor
real-time adjustment ability. The strategies in policy-gradient reinforcement learning can generally be divided into two types: stochastic strategies and deterministic strategies. With a stochastic strategy, in a given state there are multiple actions with different probabilities to choose from, and the strategy generates an action from a normal distribution. Theoretical mathematical models can be used to design effective leg motion planning or control algorithms: if the established model equations hold throughout the motion time, the motion planning or control algorithm designed from the model is considered physically correct and feasible. Such models quantify the relationship between the input parameter α, the current state β of the system, and the resulting change of state. This can be expressed as an ordinary differential equation, as shown in formula (6):

$$\beta_z = F \times \Big(\frac{\alpha}{z} - \alpha - z^2\Big) \tag{6}$$

In formula (6), F represents the joint control quantity and z represents the mechanical parameter of the leg member. Deep reinforcement learning can learn continuously while interacting with the environment; in the motion control problem of a multi-axis industrial robot, the robot can actively adapt to a new environment without accurate modeling, avoiding the design of a dedicated control algorithm for each scene. A deterministic strategy selects only the action with the highest probability as the unique choice and no longer considers other actions, which speeds up learning but may fail to find the global optimum. According to the actual motion situation, reasonable assumptions are then made to derive a state-space dynamics model that can be used for controller design [9]. According to the Lagrangian dynamic calculation method, the complete dynamic equation of the multi-axis industrial robot is established as shown in formula (7):

$$G = Tv + \Big(\frac{Tv}{Y_{t-1}}\Big)^2 + Y_{t-1} \tag{7}$$

In formula (7), T represents the inertia matrix of the joint space, Y represents the generalized coordinates of the robot, v represents the Coriolis force, centrifugal force and gravity terms, and t represents the external force on the robot. The robot can be divided into a joint-driven part and an under-actuated floating-base part, and the two parts are cross-dependent: there is dynamic coupling between the base and the legs, together with a large number of nonlinear constraints. The full dynamics model is extremely complex due to its highly coupled and dynamic nature. The goal of reinforcement learning is to find an optimal policy: the mathematical expectation of the return values of all segments sampled under each policy is computed, and the optimal policy has the largest expectation [10]. The return value is accumulated over a segment, so it has a large sample deviation. First, it is assumed that the robot is affected only by the ground force; a carried load can be regarded as part of the robot's own gravity. The robot's floating base and four legs are then modeled as a whole, and the mass of the legs is ignored. By stating that the rate of change of the centroid momentum equals the sum of the external torques, the centroid dynamics model can be established, as
shown in formula (8):

$$l = \frac{\big[(b - 1) - k\big]^2}{\partial} \tag{8}$$
In formula (8), ∂ represents the mass of the robot, l represents the gravitational acceleration, b represents the position of the center of mass in the world coordinate system, and k represents the position of the foot end in the world coordinate system. To address the limitation of a purely deterministic strategy, the DDPG algorithm borrows the idea of experience replay from DQN: the current state, action, reward and next state are stored as a tuple in an experience replay pool, and the action network and evaluation network are trained by repeatedly drawing random samples from the pool, which effectively increases the amount of training data. The value function is used to predict the return value, that is, to compute its expectation; it is mainly used to evaluate the quality of different states and can guide the selection of actions. In order to apply a model predictive control algorithm, the multi-axis industrial robot model must be suitably combined into a convex model usable for linear optimization. The DDPG algorithm also borrows the dual-network idea from DQN, improving learning performance by using a current network and a target network with different update frequencies.
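The DDPG machinery described above (action and evaluation networks, experience replay, and slowly updated target networks) can be sketched compactly in PyTorch. The network sizes, learning rates and random transitions below are illustrative assumptions; a real implementation would collect transitions from the robot or a simulator.

```python
import copy, random
import torch
import torch.nn as nn

def mlp(in_dim, out_dim, act=nn.Tanh):
    return nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                         nn.Linear(64, out_dim), act())

state_dim, action_dim, gamma, tau = 12, 3, 0.99, 0.005
actor = mlp(state_dim, action_dim)                        # action network
critic = mlp(state_dim + action_dim, 1, act=nn.Identity)  # evaluation network Q(s, a)
actor_t, critic_t = copy.deepcopy(actor), copy.deepcopy(critic)  # target networks
opt_a = torch.optim.Adam(actor.parameters(), lr=1e-4)
opt_c = torch.optim.Adam(critic.parameters(), lr=1e-3)

replay = []  # experience replay pool of (s, a, r, s') tuples
for _ in range(1000):  # dummy transitions, for the sketch only
    replay.append((torch.randn(state_dim), torch.randn(action_dim),
                   torch.randn(1), torch.randn(state_dim)))

def update(batch=64):
    s, a, r, s2 = (torch.stack(x) for x in zip(*random.sample(replay, batch)))
    with torch.no_grad():  # bootstrapped target from the slow networks
        q_target = r + gamma * critic_t(torch.cat([s2, actor_t(s2)], dim=1))
    critic_loss = nn.functional.mse_loss(critic(torch.cat([s, a], dim=1)), q_target)
    opt_c.zero_grad(); critic_loss.backward(); opt_c.step()
    actor_loss = -critic(torch.cat([s, actor(s)], dim=1)).mean()  # maximize Q
    opt_a.zero_grad(); actor_loss.backward(); opt_a.step()
    for net, tgt in ((actor, actor_t), (critic, critic_t)):  # soft target update
        for p, pt in zip(net.parameters(), tgt.parameters()):
            pt.data.mul_(1.0 - tau).add_(tau * p.data)

update()
```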
5 Design of the Motion Stability Control Algorithm

The motion stability of the robot mainly refers to the ability to always maintain a stable posture when moving under a pre-specified gait. When the robot is walking, ground friction must be considered in the stability analysis. Since the foot end of the robot studied in this paper contacts the ground essentially at a point, the ground provides support force and horizontal friction to the supporting legs during movement. Next, the geometric and mechanical stability of the multi-axis industrial robot and the stability of diagonal-gait motion under disturbance are discussed. If a legged robot is to move forward, there must be an external force, namely the friction and support provided by the ground. In a steady equilibrium state, the rate of change of the centroid is zero and there is no horizontal movement. For the robot to be supported stably without relative sliding between the supporting feet and the ground, the resultant force between each support leg and the ground during the support phase must lie within the friction cone. When the robot has no accelerated motion, the zero-moment point is the projection of the center of mass on the support surface; therefore, the dynamic stability criterion based on the zero-moment point can still adopt the static support-triangle criterion to define the stability of the robot. When the robot is subjected
to a lateral impact from the outside, it must quickly judge the motion state and trend of the body, determine whether the impact is large enough to overturn it, and then take measures to stabilize against the lateral impact. During the triangular gait of the multi-axis industrial robot, when the zero-moment point is located inside the support triangle, the stability margin is positive; when it is outside, the margin is negative, and the larger the stability margin, the more stable the robot. Based on the lateral stability conditions and the rolling motion of the body under lateral impact, the lateral speed and roll angle monitored by the body sensors are used as the conditions for dynamic stability determination. Let the upper limits of the lateral speed and roll angle of the body during stable motion in place or in the forward direction be m and n, respectively. When the body sensors detect that the lateral speed or the roll angle exceeds its upper limit, the body is considered to be under lateral impact, and lateral impact stabilization must be performed. The dynamic stability judgment of whether the state of the body tends to overturn can be expressed as

$$P = \begin{cases} 1, & |m_0| > |m_1| \,\cup\, |n_0| > |n_1| \\ 0, & \text{otherwise} \end{cases} \tag{9}$$

In formula (9), m0 and m1 represent the upper limits of the lateral speed during stable motion in place and in the forward direction, respectively, and n0 and n1 represent the corresponding upper limits of the roll angle. However, if only the influence of the base center of mass and the position of the resultant force point on stability is discussed, the motor torque and power consumption limitations of the system are ignored, which is very inconvenient for the application of the robot in actual work. According to the necessary conditions of the lateral impact control strategy, a lateral stride is the key to keeping the robot body stable and counteracting the lateral speed. When the robot is too heavy or carries a large load, the torque required at a joint may exceed the maximum torque the joint can provide; the legs then cannot reliably support the weight of the whole body, and falling or overturning occurs. Therefore, the joint torque limits of all joints of the real robot must also be considered, so that no joint torque in a leg exceeds the maximum allowable torque. In addition, synchronous control of the multiple axes of an industrial robot requires the controller both to ensure that the trajectory tracking error of each axis converges to zero and to handle and adjust the synchronization error between axes, so the control computing burden is relatively heavy. From this, the constant-velocity reaching law of the stability control algorithm is obtained as

$$H = -t \times \frac{q^2}{\mathrm{sgn}(f)} \tag{10}$$
In formula (10), t represents the exponential approach term, q represents the chattering amplitude, and f represents the chattering coefficient. In the trot gait, the two diagonal legs are grouped in pairs; each pair moves in the same form, and the forces of their hip joints on the torso are similar. Therefore, the movement of the two legs on
the diagonal can be treated as one virtual leg. In the physical system, the point-to-surface contact model is constrained by a friction cone: when the ratio of the tangential friction force to the normal contact force on the contact surface exceeds a certain value, the foot end slides. To avoid this, the tangential force on the contact surface must be limited within the friction cone. If the number of axes is large, the control quantities grow geometrically and the hardware requirements, such as the controller's main frequency, become very high, so an appropriate synchronization function relationship must be selected. In the support phase, the supporting leg controls the robot body to follow the direction of the lateral impact and move toward the landing point of the laterally striding swing leg. If balance can be achieved in one step, the original trot gait can be resumed in the next gait cycle by controlling the body to move to the middle position between the swing leg and the support leg. If the synchronization error is defined as a specific functional relationship between each axis and all other axes of the system, the error calculation becomes complicated, challenging the computing performance and real-time capability of the controller. If one step cannot bring the body speed or roll angle within the dynamic stability thresholds, stability of the motion state is achieved through continuous strides. Therefore, choosing a cross-coupling control strategy over the minimum set of relevant axes not only establishes an indirect connection between the axes and meets the requirements of multi-axis synchronous control, but also limits the computation per axis.
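Since formula (10) is given only symbolically, the sketch below implements a generic reaching law of the same family (a constant-velocity sign term plus an exponential term), which is my own illustrative substitution; the gains and step size are assumptions.

```python
import numpy as np

def reaching_law_step(s, eps=0.5, k=2.0, dt=0.001):
    """One Euler step of s_dot = -eps*sign(s) - k*s: the exponential term
    -k*s speeds up convergence far from the sliding surface, while the
    constant-velocity term -eps*sign(s) sets the chattering amplitude
    near s = 0 (about eps*dt per step)."""
    return s + (-eps * np.sign(s) - k * s) * dt

s = 1.0  # initial sliding variable
for _ in range(3000):
    s = reaching_law_step(s)
print(abs(s) < 1e-3)  # True: the state has reached the sliding surface
```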
6 Simulation Analysis

6.1 Experiment Preparation

The robot in this experiment is driven by 12 torque motors; that is, it is composed of 4 legs, each with 3 degrees of freedom, and can perform various gaits and adapt its movement to different terrains. Before using an industrial robot, its important parameters, such as the D-H parameters, maximum motion range, and dynamic parameters, must be understood, so that a realistic robot model can be established within the specified mechanical performance limits, making full use of the robot while protecting it. The single leg has three degrees of freedom, and its joints are arranged as hip flexion and extension, knee flexion and extension, and hip adduction and abduction. With larger offset settings, each joint can achieve a larger range of motion, demonstrating high mobility. The master station writes its messages into standard Ethernet data frames and encapsulates them into EtherCAT data frames; the devices are identified by the frame type 0x88A4. The motion range of the first joint of the robot leg is ±90°, that of the second joint is ±180°, and the third joint can rotate without limit. This workspace makes the robot leg configuration variable and the movement very flexible. The EtherCAT protocol was adapted for cyclic process data in the industrial sector, eliminating the need for a large protocol stack. The EtherCAT data frame sequentially comprises an Ethernet frame header, an EtherCAT header, an EtherCAT data segment, and an FCS frame check sequence. The robot hardware system is mainly composed of an Intel NUC control computer, sensors, drivers, robot joints, a battery, and a remote operation terminal. The robot has 12 drive joints, each joint has a rated working torque of 60 Nm and a peak
torque of 120 Nm. Synchronous communication is required, so the EtherCAT bus is used. The EtherCAT data segment contains an indefinite number of sub-messages, whose headers indicate the access mode of the master device. The data segment of a sub-packet can embed the data formats of other protocols, such as CoE or SoE, as required. The sensors used include binocular cameras, inertial measurement units, and lidars; the IMU provides measurements of the azimuth and angular velocity of the robot base.

6.2 Experimental Results

In this simulation experiment, the method of reference [1], the method of reference [2], and the proposed method are selected for comparative analysis. To ensure the reliability of the experimental results, 5 groups of experiments were set up, each carried out 20 times, and the average value was taken as the final result. The heading angle deviation, motion control time, and one-way adjustment distance of the three methods were tested under a load pressure of 50 kg. The heading angle deviation is used as an evaluation index for the accuracy of robot motion stability control: the smaller the heading angle deviation, the higher the accuracy of the control method. The comparison results of the heading angle deviation of the three methods are shown in Fig. 2.
Fig. 2. Heading angle deviation of three methods
According to Fig. 2, the mean heading angle deviations of the method of reference [1], the method of reference [2], and the proposed method are 10.94°, 11.81°, and 5.92°, respectively. The heading angle deviation of the proposed method is smaller than that of the two reference methods, which indicates that the proposed method achieves higher accuracy in robot motion stability control.
The motion control time is used as the evaluation index of robot motion stability control efficiency: the shorter the motion control time, the higher the control efficiency of the method. The motion control times of the three methods are compared in Fig. 3.
Fig. 3. Time-consuming motion control of the three methods
According to Fig. 3, the average motion control times of the method of reference [1], the method of reference [2], and the proposed method are 3.74 s, 3.84 s, and 2.57 s, respectively. The motion control time of the proposed method is relatively short, which indicates that the proposed method controls the motion stability of the robot efficiently. The one-way adjustment distance is used as the evaluation index of the fault tolerance of robot motion stability control: the shorter the one-way adjustment distance, the better the fault tolerance of the method. The comparison results of the one-way adjustment distance of the three methods are shown in Fig. 4. According to Fig. 4, the mean one-way adjustment distances of the method of reference [1], the method of reference [2], and the proposed method are 47.70 mm, 49.51 mm, and 31.78 mm, respectively. The one-way adjustment distance of the proposed method is short, which indicates that the proposed method has good error tolerance in robot motion stability control. In summary, this method calculates the position vector of the nominal foothold of the robot foot from the prespecified desired motion speed and centroid angular velocity, and uses the deep reinforcement learning algorithm to design the motion stability control algorithm by describing the rate of change of the centroid momentum. Therefore, this method can effectively improve the accuracy and efficiency of motion stability control.
Fig. 4. One-way adjustment distance of three methods
7 Conclusion

This paper studies a motion stability control algorithm for multi-axis industrial robots based on deep reinforcement learning. By designing the structural features of the multi-axis industrial robot, the motion trajectory planning mode of the robot is optimized. Based on the deep reinforcement learning algorithm, the robot motion equation is established, and the motion stability control algorithm is designed by describing the rate of change of the center of mass. The experimental verification results show that the motion stability control of the proposed method has good fault tolerance, high accuracy, and high efficiency. In follow-up work, we will continue to test the application effect of the designed multi-axis industrial robot motion stability control algorithm in complex environments.
References 1. Lu, B., Han, J., Zhang, Y.: Simulation analysis and experiment of industrial robot motion control system based on dSPACE. J. Chongqing Technol. Bus. Univ. Nat. Sci. Ed. 38(06), 50–57 (2021) 2. Wang, Y.: Research on design of industrial robot motion control system. Microcomput. Appl. 36(10), 86–88, 116 (2020) 3. Chen, S., Wen, J.T.: Industrial robot trajectory tracking control using multi-layer neural networks trained by iterative learning control. Robotics 10(1), 50 (2021) 4. Alam, M., Ibaraki, S., Fukuda, K.: Kinematic modeling of six-axis industrial robot and its parameter identification: a tutorial 15(5), 599–610 (2021) 5. Cheng, K., Khokhar, M.S., Ayoub, M., et al.: Nonlinear dimensionality reduction in robot vision for industrial monitoring process via deep three dimensional spearman correlation analysis (D3D-SCA). Multimedia Tools Appl. 80(4), 5997–6017 (2021) 6. Bai, B., Li, Z., Zhang, J., et al.: Research on multiple-state industrial robot system with epistemic uncertainty reliability allocation method. Qual. Reliab. Eng. Int. 37(2), 632–647 (2021)
7. Han, Y., Liu, J.: Research on optimization of motion path error of industrial robot based on B-spline curve. Chin. J. Constr. Mach. 18(3), 199–204 (2020) 8. Zhang, J., Xu, H.: Motion control of industrial robot with series two-link manipulator considering joint nonlinear. J. Mech. Transm. 44(1), 28–34 (2020) 9. Zhu, Z., Wang, S., Zhang, Y.: ROENet: a ResNet-based output ensemble for malaria parasite classification. Electronics 11(13), 2040 (2022) 10. Zhu, Y., Li, H.: Fault area determination of islanded microgrid based on deep reinforcement learning. Comput. Simul. 38(7), 78–82 (2021)
An Intelligent Mining Method of Distributed Data Based on Multi-agent Technology Zhongwei Chen and Xiaofeng Li(B) College of Information Engineering, Guangxi University of Foreign Languages, Nanning 530222, China [email protected]
Abstract. In order to solve the problems of traditional distributed data mining, a new distributed data intelligent mining method based on Multi-Agent technology is studied. Distributed data are collected and stored through compression and archiving; a real-time data monitoring architecture determines the data status; information collection is implemented through data analysis and extraction; input data are filtered by the corresponding parameters; and an energy consumption statistical report is built to improve the reporting function. A mining model is established, historical data are determined and analyzed, the trend of data energy consumption is analyzed, and digital information analysis is realized according to the change probability of data energy consumption. According to the essential characteristics, application-related characteristics, performance characteristics, and acquisition characteristics of the equipment, data preprocessing is realized, mapping processing is used to fill in missing values, the main information is extracted, and the data are projected into 1- to 3-dimensional space to obtain the initial model. The model mines key factors, selects random variables, analyzes the distribution mode of the network structure, realizes parameter independence, establishes a relational model, and obtains the mining results. The experimental results show that the distributed data intelligent mining method based on Multi-Agent technology can effectively shorten the mining time, with an accuracy rate of mining results as high as 90%, and can provide good theoretical support. Keywords: Multi-Agent Technology · Distributed Data · Data Mining · Intelligent Mining
1 Introduction

With the rapid development of China's economy, great achievements have been made in distributed data, the chemical industry, and other manufacturing industries. Distributed data digitization refers to the use of digital information technology to guide the regular flow of energy, build a new efficient, orderly, and economical modern distributed data system, and improve the security, accessibility, and sustainability of the distributed data system. Nowadays, some large manufacturing industries have begun to use digital information output to manage and apply distributed data systems. The current method is mainly
to collect information using sensors acting as field devices, and then store and classify the distributed information through manually operated software. Although centralized management and storage of distributed data has been realized, the huge amount of information makes this time-consuming and labor-intensive, and unsatisfactory results are obtained in terms of labor cost and data accuracy. With few data types and small storage space, distributed data collection can only be carried out for small regions, and the monitoring effect for large regions is not ideal. In addition, there is a large amount of data in China's traditional distributed data industry whose rules cannot be found; these hidden data are difficult to mine and use. The traditional distributed data management and monitoring system has structural problems such as decentralized statistics, untimely data updates, and incomplete data. Against today's big data background, the traditional operation mode can no longer meet actual needs [1]. A Multi-Agent System (MAS) is a collection of multiple Agents whose members cooperate with each other, serve each other, and complete tasks together. Its goal is to build large, complex systems as small, easy-to-manage systems that communicate and coordinate with each other. The activities of each Agent member are autonomous and independent, and their goals and behaviors are not restricted by other Agent members; they resolve contradictions and conflicts through competition and negotiation [2]. The main research purpose of MAS is to solve large-scale complex problems beyond the ability of individual agents through interactive groups composed of multiple agents. Data mining is a comparatively new technology with a short history and few applications so far, but it is of great significance to the enterprise itself: data mining technology has strong mining ability and can deeply analyze various kinds of information. In order to solve the above problems, this paper studies a new distributed data digital information analysis method based on data mining technology.
2 Distributed Data Digital Information Collection and Storage

2.1 Distributed Data Collection

Distributed data collection is mainly used in the monitoring field to collect the distributed data of the equipment. The distributed data collection process is shown in Fig. 1 below:

Fig. 1. Distributed data collection process

It can be seen from Fig. 1 that the load circuit, detection circuit, and control circuit together form a detection device that detects input signal 1 and input signal 2 of the monitoring site, respectively, so as to determine the useful distributed data information, which is used as the sample data [3, 4].

2.2 Distributed Data Storage

A distributed database system is used to realize data storage, and the data center compresses and archives the collected distributed data and stores it in the distributed database system. Distributed databases generally include historical distributed databases and real-time distributed databases. The real-time distributed database is mainly responsible for displaying real-time distributed data, as well as timely backup and storage in
the historical distributed database system. Operators can monitor the trend of distributed data at any time to obtain the dynamic state of distributed data [5]. The distributed data repository is shown in Fig. 2 below:
Fig. 2. Distributed data repository
Observing Fig. 2, it can be seen that distributed data is stored in a distributed database for archiving, so that historical distributed data can be queried, and distributed data analysis can be achieved through a large amount of historical distributed data. Distributed data within the database can be directly transmitted to the client, allowing operators to monitor in real time. The real-time distributed database is responsible for real-time monitoring, and the historical distributed database is responsible for historical queries. The distributed database is the information center of the entire distributed data management system, and it is the distributed data source to ensure the accuracy and usability of distributed data analysis [6].
3 Intelligent Analysis of Distributed Data Based on Multi-agent Technology The intelligent analysis of distributed data based on Multi-Agent technology is mainly divided into real-time monitoring distributed data analysis, report distributed data analysis, historical distributed data analysis and distributed data analysis. The intelligent analysis process of distributed data based on Multi-Agent technology is shown in the following figure:
Fig. 3. Distributed data analysis process based on Multi-Agent technology (command code → real-time monitoring data analysis → report data analysis → historical data analysis → data processing, cycled until the monitoring period ends)
In Fig. 3, the distributed data analysis process is: generate the distributed data analysis command code, analyze the real-time monitoring data according to the command code, and generate the analysis report. Compare and analyze the real-time monitoring data with the historical data to determine whether the monitoring period is over. If so, the analysis results will be given; Otherwise, carry out further data processing, analyze the processed data, and repeat the above process until the end of the monitoring cycle.
3.1 Real-Time Monitoring of Distributed Data Analysis

In order to ensure real-time management of the distributed data system and timely collection of effective distributed data information, the whole system adopts a real-time distributed data monitoring architecture. After the collection system collects the distributed data and stores it in the real-time distributed database, these data are extracted and backed up. Through on-site simulation writing, combined with real-time distributed data input, the on-site operating parameters and status are presented to the operator's system in real time. Real-time monitoring records the on-site distributed data consumption, controls and monitors the distribution in real time, and provides a real-time understanding of on-site operation so that problems can be found and corrected promptly. Compared with the traditional distributed data management and monitoring system, whose statistics are scattered and whose data are not updated in time and are one-sided and incomplete, this can be well optimized and improved. It can not only effectively control and monitor the distributed data equipment, but also improve energy efficiency, promote the best use of energy, reduce unnecessary energy loss, and reduce both the system operating cost and the system decision error [7].

3.2 Distributed Data Analysis of Reports

The collection time interval of distributed data is very short; data are often collected in minutes or even seconds. In order to ensure the real-time performance and accuracy of the distributed data, the system records the distributed data generated in each time unit, so the data volume is unusually large. If the distributed data stored in the server are messy and scattered, they are meaningless for real production; therefore, the collected distributed data must be classified and sorted according to time, space, or equipment type [8, 9]. The distributed data of large regions can be divided into small regions according to the spatial classification method for sorting and recording. The formula of the distributed data digital report based on Multi-Agent technology is shown in (1):

$$U = \frac{\sum_{i=1}^{k} (m - k)}{m} \quad (1)$$

Among them, $U$ represents the real-time distributed data consumption; $k$ represents the value of the numerical variables; $m$ is the sample size of the type variables. The most suitable distributed data report is selected according to different dimensions; the system collects distributed data information from the recorded historical and real-time distributed databases, filters the corresponding parameters from the input data, and builds the distributed data consumption statistical report to complete the report function.

3.3 Distributed Data Analysis

Statistical analysis of historical distributed data is carried out. The historical distributed data analysis formula based on Multi-Agent technology is shown in (2):

$$V = \int \sum_{l=1}^{Z} \frac{k_l}{Z} \quad (2)$$
Among them, $V$ represents the historical distributed data consumption; $k$ represents the value of the numerical variable; $Z$ represents the sample size of the classification variable; $l$ represents the category index of the classification variable; and $k_l$ represents the sample size of the $l$-th category [10]. Based on the analysis results of real-time and historical distributed data, the general trend of distributed data consumption over the next period of time can be further estimated. Taking the current time point as the starting parameter, the consumption of distributed data over a preceding period is used as the reference interval, and a prediction formula built from the data trend gives an approximate predicted trend and value. Since the distributed data for the next period of time have not yet been generated, they can only serve as predicted data; the prediction cannot give accurate values but only a general trend. The distributed data analysis formula is shown in (3):

$$L_n = kt + b \quad (3)$$
Among them, $L_n$ represents the distributed data consumption value; $k$ is the change probability of distributed data consumption; $t$ is the time; $b$ is the change coefficient. Through this formula, the trend of the distributed values in the next period of time, close to the reference distributed data interval, can be calculated. Through the trend determination between distributed data series, basic distributed data mining is completed and distributed data analysis is realized.
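As a concrete illustration of formula (3), the following minimal sketch fits the linear trend $L_n = kt + b$ over a reference window by least squares and extrapolates it to the next period; the data here are synthetic and the names are illustrative.

```python
import numpy as np

def fit_trend(times, consumption):
    # Formula (3): fit Ln = k*t + b over the reference interval.
    k, b = np.polyfit(times, consumption, deg=1)
    return k, b

# Synthetic reference window of past consumption values
t = np.arange(10)
y = 3.0 * t + 5.0 + 0.1 * np.random.default_rng(0).standard_normal(10)
k, b = fit_trend(t, y)
forecast = k * np.array([10, 11, 12]) + b   # an approximate trend, not exact values
print(forecast)
```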
4 Generation of Distributed Data Multi-Agent Technology

In order to realize deep mining of distributed data, this paper processes the distributed data and generates the Multi-Agent technology. The generated Multi-Agent technology sits between the distributed data platform and the upper business application; as the key component of this middle layer, applying Multi-Agent technology is the best-practice scheme for distributed data mining of electric power metering. The generation of the Multi-Agent technology is based on the fuzzy C-means clustering algorithm. First, the distributed data set of power metering is initialized. The number of clusters $K$, the membership factor $m$, and the random initialization matrix $U$ are set and substituted into the fuzzy C-means clustering algorithm, as follows:

$$C = \frac{Km}{U} \quad (4)$$
where $C$ is the fuzzy C mean value. The iterative function of $C$ is obtained by calculating the mean value of fuzzy C:

$$u = \frac{1}{C} \quad (5)$$
Among them, u represents the obtained iterative function. After the iterative function is obtained, the cluster center and membership factor of the Multi-Agent technology are obtained according to the iterative function, and the convergence degree is judged. If the
change of the objective function is less than the preset threshold, the clustering result is output, and the cluster iteration value is found from the clustering result:

$$u_1 = \frac{1}{KM} \quad (6)$$
where, u1 represents the obtained clustering iteration value. Then, the Multi-Agent technology is generated. First, the distributed data source computing platform of distributed data is accessed. The computing platform supports distributed computing for a large number of distributed data and provides the function of querying distributed databases. Then, a large distributed data governance component is established, which consists of three layers: distributed data layer, analysis layer, and label layer. The preliminary generation of Multi-Agent technology is completed through the large distributed data governance component. The generation process of Multi-Agent technology for large distributed data of electric power metering is shown in Fig. 4 below:
Fig. 4. Generation process of distributed data Multi-Agent technology (start → business processing → information control → logic operation → logic check → tag generation → result output → end)
It can be seen from Fig. 4 that, for deep mining of distributed data, multiple Multi-Agent technologies need to be established. Therefore, the rule engine is used as the production machine of the Multi-Agent technology, yielding a large number of Multi-Agent technologies. The management and standardization of the Multi-Agent technology are completed through the label identification center. The specific process is as follows. The Multi-Agent technology is commercialized: users can set the generation conditions of the Multi-Agent technology and are given the ability to modify and review it; at the same time, the logic of the Multi-Agent technology is checked through automatic analysis. The generation conditions are formulated according to the distributed data that need to be mined from the large distributed data of power metering, so the staff only need to manage the generation of the Multi-Agent technology and the maintenance of large-scale distributed data. As the production machine of the Multi-Agent technology, the rule engine is driven by the distributed data entities generated after the Multi-Agent technology is commercialized. Rules are formulated and graphically controlled in the development of the Multi-Agent technology. The rule engine can set the generation rules of the Multi-Agent technology; these rules differ for Multi-Agent technologies with different purposes, and generation conditions are triggered according to the rules to complete the production of multiple Multi-Agent technologies. The label recognition center can carry out the logical operations of derivation and combination on the Multi-Agent technology: after the staff perform the setting operation, an existing simple Multi-Agent technology can be upgraded to a more advanced, complex, and valuable one. Using intelligent computing for automatic analysis, the demands on the Multi-Agent technology for the large distributed data of electric power metering are sorted, and the properties of the Multi-Agent technology are represented in numerical form. After a Multi-Agent technology is completed, its value must be judged; a low-value Multi-Agent technology cannot be used for mining distributed data. Therefore, a value function is established to judge the value of the Multi-Agent technology. For this value judgment, the value function of its dissimilarity index is obtained, calculated by the following formula:

$$J = \sum_{i=1}^{N} K_i \quad (i = 1, 2, \cdots, N) \quad (7)$$
When the value of $J$ is greater than 1, the Multi-Agent technology is considered to be of high value and can be used for deep mining of distributed data. The generated Multi-Agent technology mainly has the following functions. Distributed data extraction: extracts distributed data, classifies them according to the Multi-Agent technology, and extracts different large distributed data by judging the mining requirements set by the staff. Distributed data conversion: preprocesses and converts each distributed data module; through preprocessing and conversion, problems in the distributed data sources can be found in time, invalid information can be filtered, and error labels can be established for erroneous information.
Distributed data identification: uses intelligent computing to automatically analyze distributed data and identify the Multi-Agent technology. Distributed data mining based on Multi-Agent technology: the univariate missing mode is mapped to obtain the multivariate missing mode, as shown in formula (8):

$$Y^{k \times 1} = f\left(X_1^{k \times 1}, \ldots, X_i^{k \times 1}, \ldots, X_n^{k \times 1}\right) \quad (8)$$

Among them, $Y^{k \times 1}$ represents the obtained mapping result. The distributed data are determined by the multiple linear regression method. When the correlation between the distributed data is high, the least squares regression method is used to establish a model, which enables one-by-one comparison, ensures that the results of distributed data processing are more reliable, and improves the overall processing efficiency. Because the equipment data are limited, the collectable equipment sample size is relatively small, and the least squares method is used to solve the problem of a too-small sample size. The distributed data of the equipment are extracted in the form of independent and dependent variables: if $p$ independent variables are extracted, they are expressed as $x_1, x_2, \ldots, x_p$, and the $q$ dependent variables obtained are expressed as $y_1, y_2, \ldots, y_q$. The independent variable distributed data table $X = (x_1, x_2, \ldots, x_p)_{n \times p}$ is established, and the dependent variable distributed data table is $Y = (y_1, y_2, \ldots, y_q)_{n \times q}$. The principal components of the independent and dependent variables are extracted: let the principal component extracted from $X$ be $t_1$ and that extracted from $Y$ be $u_1$. The extracted $t_1$ and $u_1$ should carry as much variation information as possible while ensuring that $t_1$ and $u_1$ have the maximum correlation. For the initial model, the device has $m$ indicators and $n$ samples, and the sample matrix is established as shown in formula (9):

$$X^0 = \left(x_{ij}^0\right)_{m \times n} \quad (9)$$
The evaluation indices are then judged: if a bigger value of an index is better, it is called a positive index; if a smaller value is better, it is defined as a reverse index. Because the availability of reverse indicators is low, they must be converted into positive indicators. The conversion formula is as follows:

$$x_{ij} = \frac{x_{ij}^0}{x_j^{0\max}}, \quad i = 1, 2, \ldots, m; \ j = 1, 2, \ldots, n \quad (10)$$
Among them, $x_{ij}$ represents the positive indicator after conversion; $i$ indexes the distributed data; $j$ indexes the indicator before conversion (a reverse indicator); and $x_{ij}^0$ represents the indicator before conversion. The distributed data are transformed by sample projection, and the characteristics and structure of the distributed data are reflected by a one-dimensional projection. When the projection direction differs, the reflected characteristics of the distributed data structure also differ, so it is necessary to determine the best
projection direction and judge the characteristics of the high-dimensional distributed data through it. Set the sample matrix to $X = (x_{ij})_{n \times m}$ and the projection direction to $\alpha$. Summarizing the different projection directions gives the projection direction set:

$$\alpha = [\alpha_1, \alpha_2, \cdots, \alpha_n] \quad (11)$$

Then the projection value of matrix $X$ on projection direction $\alpha$ is shown in formula (12):

$$Z = \alpha^T X \quad (12)$$

Among them, $Z$ represents the obtained one-dimensional projection, which reflects the characteristics of the distributed data well. The projection of high-dimensional distributed data to low-dimensional distributed data is dispersive: the optimal projection direction is sought among multiple different directions, the weights of the feature vectors are judged, and the key factors are extracted according to the weights. Under the specific projection index, the abnormal values of the covariance are determined and eliminated to prevent instability in the operation process. Nonlinear optimization exists in the projection process; the MAD statistical function is used to define the objective function, the resulting PP model is solved, the local characteristics are calculated, and the similarity between variable sequences is analyzed. If the starting points differ in nature, the natures also differ, and the characteristics of the key factors must be revealed from the side to complete the distributed data analysis. Using the Bayesian network structure, a given distributed data set is obtained; a random variable is selected and set as $S^h$; the prior probability $p(S^h)$ of the equipment effect is determined; and the distribution result $p(S^h \mid D)$ of the posterior probability is calculated. The calculation formula is shown in formula (13):

$$p\left(S^h \mid D\right) = p\left(S^h, D\right)/p(D) = p\left(S^h\right) p\left(D \mid S^h\right)/p(D) \quad (13)$$

where $p(D)$ represents the normalization constant and $p(D \mid S^h)$ represents the marginal likelihood of the mining. After determining the distribution mode of the network structure, achieving parameter independence, and ensuring the integrity of the different parameters, a distributed data set is obtained, and the independence relationships between the different variables in the distributed data set of the supplier's equipment are evaluated to obtain the network structure. The test method adopted is the dependency test, and the test error is analyzed through independent tests and a search over network structures. The evaluation function is analyzed, all the key factors are composed into a network structure, the network structure with the highest score is determined, and the heuristic search function is used over the number of possible structures. The calculation formula is as follows (14):

$$f(n) = \sum_{i=1}^{n} (-1)^{i+1} C_n^i \, 2^{i(n-i)} f(n-i) \quad (14)$$
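Formula (14) is the classical recursion for counting possible directed acyclic network structures over $n$ variables. A minimal sketch, assuming the base case $f(0) = 1$ (not stated in the text):

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def num_structures(n: int) -> int:
    # Formula (14): recursive count of directed acyclic structures
    # over n variables, with f(0) = 1 as the assumed base case.
    if n == 0:
        return 1
    return sum((-1) ** (i + 1) * comb(n, i) * 2 ** (i * (n - i)) * num_structures(n - i)
               for i in range(1, n + 1))

# The count grows super-exponentially, which is why a heuristic
# search over structures is used instead of enumeration:
print([num_structures(n) for n in range(1, 6)])  # 1, 3, 25, 543, 29281
```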
The plasticity of the relational model is poor, and it changes when the environment changes. Therefore, the Multi-Agent technology needs to update the parameters through incremental learning and adaptive learning. When the parameters are updated, the Bayesian network structure of the supplier's equipment does not change; secondary parameter learning is completed by updating the structure. Because structure change and parameter learning cannot be synchronized, the sensitivity of the analysis needs to be improved. If the distributed data are continuous and gradual in the change process, the structure is determined from the sequence characteristics to ensure the stability of the mining process.
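As a concrete reference for the clustering step that opens this section (which formulas (4)-(6) abbreviate), the following is a minimal sketch of standard fuzzy C-means with Euclidean distances; the update rules here are the textbook ones, not taken from the paper, and the names are illustrative.

```python
import numpy as np

def fuzzy_c_means(X, K=3, m=2.0, n_iter=100, tol=1e-5, seed=None):
    # Alternate cluster-center and membership updates until the change
    # in memberships falls below the preset threshold (convergence).
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), K))
    U /= U.sum(axis=1, keepdims=True)              # random initialization matrix
    for _ in range(n_iter):
        Um = U ** m                                 # memberships raised to factor m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        p = 2.0 / (m - 1.0)
        U_new = 1.0 / (d ** p * (1.0 / d ** p).sum(axis=1, keepdims=True))
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return centers, U
```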
5 Experimental Analysis

In order to fully test the mining performance of this method, a comparative test is carried out. The experimental process is: select the experimental data, set the experimental scheme and indicators, and carry out the comparative test under these conditions to obtain the results.

5.1 Experimental Data

In order to test the performance of the mining method in this paper, four computers are arranged in the original distributed data system. Each host is equipped with an Agent Server. One host serves as the master control site, on which a coordinator, a human-machine interface, and a global knowledge base are deployed; users interact with the entire system through this computer. Only an Agent Server is deployed on the other three computers, which return results to the coordinator after completing their mining tasks. The sample data of this experiment are selected from a communication service database. With the data uniformly distributed, that is, the data on each database being roughly equal or differing little, 8000 sample data records are taken from the database, with 2000 sample records distributed at each local site. The experimental data are divided into four groups, with support levels of 0.1, 0.2, 0.3, and 0.4.

5.2 Experimental Scheme and Indicators

In order to fully verify the mining performance of the mining method in this paper, the method is compared with the ant colony algorithm and the ERP mining algorithm in terms of data volume per unit time, stability of the mining method, and mining accuracy.

5.3 Analysis of Experimental Results

The data volume experiment results of the three methods are shown in Table 1 below:

Table 1. Experiment results of mining data volume (mining data volume in MB)

| Experiment time/min | Ant colony algorithm | ERP mining algorithm | The method of this paper |
|---|---|---|---|
| 10 | 10.25 | 5.88 | 20.25 |
| 20 | 15.44 | 10.02 | 40.18 |
| 30 | 21.36 | 15.47 | 60.94 |
| 40 | 25.47 | 21.36 | 81.22 |
| 50 | 29.33 | 25.96 | 104.36 |
| 60 | 35.36 | 30.44 | 122.28 |

According to the mining quantities shown in Table 1, the mined volume of all three methods increases with mining time, but within the same experimental time the data volume of the method in this paper is higher than that of the two comparison methods. This is because the method in this paper uses
feature tags to mine data. In the mining process, information can be well classified and data processing completed through information editing, achieving low-cost, high-throughput mining. Traditional mining methods find it difficult to account for the logical relationships of the power program during mining, are limited by the transmission mode, and cannot mine a large amount of data information. Distributed data are susceptible to external interference in the mining process, resulting in unstable mining information and reduced robustness. In order to further explore the feasibility of the mining method, experiments were conducted to compare the stability of the mining process (Fig. 5). According to Fig. 5, the mining method proposed in this paper has good stability in the mining process and its information processing results are more accurate, while the two comparison algorithms show strong volatility, indicating that their stability needs to be improved. The reason this method achieves high mining stability is that it can filter data through data analysis, solve the problem of information islands, and perceive the operation dynamics of the information through analysis of the equipment and its operating status, thereby ensuring mining stability. The mining accuracy test results are shown in Table 2 below. According to Table 2, the mining method proposed in this paper has higher mining accuracy and stronger mining ability: the data mining accuracy of the method in this paper remains above 99%, while the maximum mining accuracy of the two comparison algorithms does not exceed 87%. This shows that the mining accuracy of this method is greatly improved, because this paper uses Multi-Agent technology to process information and analyze the information status.
Fig. 5. The result of mining stability experiment: (a) ant colony algorithm, (b) ERP algorithm, (c) the proposed method (grid voltage/V versus frequency/Hz)
Table 2. Mining accuracy experimental results (mining accuracy in %)

| Number of experiments | Ant colony algorithm | ERP mining algorithm | The method of this paper |
|---|---|---|---|
| 1 | 84.25 | 85.63 | 99.33 |
| 2 | 85.33 | 84.21 | 99.41 |
| 3 | 84.76 | 82.63 | 99.44 |
| 4 | 85.21 | 82.33 | 99.81 |
| 5 | 86.23 | 84.21 | 99.36 |
| 6 | 84.44 | 85.27 | 99.74 |
6 Conclusion

In order to improve the performance of distributed data mining, a distributed data intelligent mining method based on Multi-Agent technology is proposed. The equipment effect is selected through the Multi-Agent technology method, the supplier's equipment is clustered, various indicators are flexibly analyzed through cluster analysis, information is determined through screening, and excessive errors and task overload are prevented through observation and decision-making. The experimental results show that the proposed mining method further improves the mining accuracy of distributed data while also improving the amount and stability of the mined data, and the precision of distributed data mining with this method remains above 99%. Therefore, the proposed Multi-Agent-based mining method can better meet the requirements of distributed data intelligent mining. In the future, we need to further study how to better distribute mining tasks to reduce costs and ensure smooth operation.
References 1. Jl, A., Yuan, T.A., Yu, Z.A., et al.: Privacy preserving distributed data mining based on secure multi-party computation. Comput. Commun. 153(15), 208–216 (2020) 2. Bai, J., Du, Y.: Data mining framework for sensitive information of large-scale social online games based on blockchain. J. Xi’an Univ. Technol. 37(03), 397–402 (2021) 3. Baloch, S., Muhammad, M.S.: An intelligent data mining-based fault detection and classification strategy for microgrid. IEEE Access 56(9), 179–185 (2021) 4. Yan, N., Huang, Y., Wang, Q.: Incremental mining method of dynamic data based on deep learning. Inf. Technol. 56(03), 114–119 (2022) 5. Zhang, J.: Interaction design research based on large data rule mining and blockchain communication technology. Soft. Comput. 24(21), 16593–16604 (2020) 6. He, B.: Simulation of time series data mining algorithm based on multi-objective decision. Comput. Simul. 36(11), 243–246 (2019) 7. Nagoev, Z., Pshenokova, I., Nagoeva, O., et al.: Learning algorithm for an intelligent decision making system based on multi-agent neurocognitive architectures. Cognit. Syst. Res. 66(12), 82–88 (2021)
8. Cai, K., Chen, H., Ai, W., et al.: Feedback convolutional network for intelligent data fusion based on near-infrared collaborative IoT technology. IEEE Trans. Ind. Inf. 18(2), 1200–1209 (2021) 9. Wang, C., Di, X.: Research on integrity check method of cloud storage multi-copy data based on multi-agent. IEEE Access 8(2), 17170–17178 (2020) 10. Ye, M., Zhang, G.: A parallel computing pattern oriented data mining method based on hadoop technology. Electron. Technol. Softw. Eng. 23(15), 159–161 (2021)
Medical and Healthcare
Local Binary Pattern and RVFL for Covid-19 Diagnosis Mengke Wang(B) School of Computer Science and Technology, Henan Polytechnic University, Jiaozuo 454000, Henan, People’s Republic of China [email protected]
Abstract. Recently, the use of artificial intelligence to improve the efficiency of Covid-19 diagnosis has become a trend, because Covid-19 has spread and proliferated to the point where healthcare professionals alone are no longer sufficient to cope with it. Chest computed tomography (CT) is an effective method to diagnose Covid-19, so using image processing methods to help diagnose such images has become critical. Following this trend, we propose a way to detect Covid-19 efficiently. The scheme employs a hybrid model: local binary patterns (LBP) implement feature extraction in the preprocessing stage, classification results are obtained using the random vector functional link (RVFL) network, and the model is finally validated by 10-fold cross-validation. Experiments demonstrated the usefulness of our proposed model in improving the diagnostic level, helping healthcare workers to identify Covid-19 accurately. Keywords: local binary pattern · random vector functional link network · deep residual network · Covid-19
1 Introduction

1.1 Background

Covid-19 spread rapidly around the world from the moment it was discovered in December 2019; as of mid-May 2022, there have been hundreds of millions of Covid-19 patients, and many people worldwide have died of the disease [1, 2]. Covid-19 has not wholly disappeared, and confirmed cases continue to rise, severely impacting our lives [3]. Covid-19 can cause acute respiratory syndromes and spread rapidly around the world. The virus is called a coronavirus due to its appearance of spike-like structures [4]. Most people with Covid-19 present with fever, sore throat, and general discomfort, and need rest to recover [5, 6]. However, some patients, especially the elderly, have more severe conditions that can be life-threatening [7, 8]. Therefore, to avoid unnecessary deaths, we need timely diagnosis and treatment [9]. During an outbreak, real-time reverse transcription polymerase chain reaction (RT-PCR) is used to diagnose Covid-19.
However, the diagnosis of a suspected patient can only be made if the test is positive. This method cannot exclude the interference of other factors, and it takes time. So, it is urgent to present a machine learning model that detects this disease automatically [10, 11], and we need to apply computer vision methods for detecting Covid-19 in clinical practice. Machine learning models have long provided treatment and assistance for many diseases and are much more efficient than manual approaches [12]. For example, there are machine learning models that make life easier for people with visual impairments and for patients [13]. A machine learning model can be an excellent alternative to RT-PCR for diagnosing the early stages of Covid-19 [14], and chest CT imaging is very helpful in diagnosing Covid-19 [15]. The continuous development of computer science over the years has contributed to the field of medical imaging; machine learning and deep learning are increasingly important in medical image processing. Support vector machines (SVM) [16] also perform well in predicting disease types. However, the main problem with deep learning is that it uses back propagation (BP) methods to tune the parameters, which has many drawbacks. The BP algorithm is a gradient descent algorithm, and the objective function to be optimized is very complex, which makes the BP algorithm inefficient and slows the convergence of the network. In addition, training of the BP network weights can fail; the initial weights are too sensitive, leading to different results for each training run; and the learning rate is unstable. To mitigate these drawbacks, this paper uses a feedforward neural network, RVFL, to classify CT images, and LBP to extract the texture features of the CT images.

1.2 Related Work

Nowadays, deep learning is becoming more and more powerful and is widely used in many areas, and we can use this technology to solve Covid-19 diagnostic problems. Since 2020, the worst catastrophe facing humanity has been Covid-19, so we should conduct research on Covid-19 wherever possible. Deep learning researchers have also begun to use all possible methods: through computer-assisted Covid-19 diagnosis, they can help medical workers. Ismael and Sengur [17] used three methods to identify and classify the Covid-19 virus on chest X-ray (CXR) images, and all three achieved good accuracy, sensitivity, and specificity. Jain et al. [18] extracted 6432 CXR scanning samples from the Kaggle repository, compared three models, and reported their accuracy; they likewise evaluated the models in terms of accuracy, sensitivity, and specificity, and pointed out that the high precision may result from over-fitting. Aminu et al. [19] proposed a new deep learning model, but the model has few parameters, and if the amount of training data is large, training cannot be performed; the authors used the L2 regularization method merged with global averaging to avoid new problems due to insufficient data. The technique works by extracting Covid-Net models and providing them to various classifiers that measure the model's effectiveness. A hybrid model was presented by Zhang et al. [20]: they used wavelets and the three-stage biogeography-based optimization (3SBBO) method for feature extraction and for updating the weights and biases, and a cross-validation method was used to obtain more accurate results.
Two CNN models named Covid-RENet-1 and Covid-RENet-2 were proposed by Khan et al. [21]. The authors used a CNN architecture
for convolutional operations to classify images in the dataset and obtained high scores in terms of accuracy, MCC, and F1 [22].

1.2.1 WE-ABC

Wavelet entropy and artificial bee colony (WE-ABC) algorithms use WE for feature extraction of CT images and then combine the advantages of the ABC algorithm, which has few parameters and simple computation, to find the most suitable solution to the problem and verify Covid-19. It is well known that the Fourier transform is used to transform signals: the signal is decomposed into its constituent frequencies, and the transformed function has the form of different combinations of sine waves [10]. However, the Fourier transform also has some drawbacks; for example, if accurate localization in the time domain is required, all signals in the corresponding frequency domain must be analyzed. Since the wavelet transform has more variables, it is used instead of the Fourier transform. If $a$ denotes scale, $\tau$ represents translation, $\psi$ is the wavelet, and $m$ stands for time, then the formula is as follows:

$$L(a, \tau) = \frac{1}{\sqrt{a}} \int_{-\infty}^{\infty} f(m)\, \psi\!\left(\frac{m - \tau}{a}\right) dm \quad (1)$$

The ABC algorithm is an optimization method. By simulating the nectar collection mechanism of real bees, the bees in the algorithm are classified into three types: honey-gathering bees, observation bees, and scout bees. The three species of bees have different jobs, but they can be interchanged under certain conditions, as shown in Fig. 1:
Fig. 1. Interconversion diagram of the three species of bees. Observation bees and honey-gathering bees swap roles when an observation bee finds a better nectar source; honey-gathering bees and scout bees swap when a nectar source is re-found, the bee at the better nectar location in the colony becoming the honey-gathering bee; and scout bees appear once all nectar sources have been exploited.
In the initialization formula of the ABC algorithm, $x_m$ represents a feasible solution, with $n$ variables in each solution. We assume that the number of bees collected
and observed is $N$, so the value of $m$ is from 1 to $N$, and the value of $i$ is from 1 to $n$.

$$x_{mi} = l_i + \mathrm{rand}(0, 1) \times (u_i - l_i) \quad (2)$$
The $u_i$ and $l_i$ represent the maximum and minimum boundaries of $x_{mi}$, respectively. Each hired bee searches for feasible solutions near already existing ones, and the better solution is then kept by a greedy algorithm. With $f_m$ as the objective function of $x_m$, the fitness formula is as follows:

$$\mathrm{fit}_m(x_m) = \begin{cases} \dfrac{1}{1 + f_m(x_m)}, & f_m(x_m) \ge 0 \\ 1 + \left|f_m(x_m)\right|, & f_m(x_m) < 0 \end{cases} \quad (3)$$

The scout bee judges whether the feasible solution brought back by the hired bee is usable; if a feasible solution is still not improved after several updates, the algorithm moves on to the next new feasible solution. The probability formula for food source selection is:

$$P_m = \frac{\mathrm{fit}_m(x_m)}{\sum_{m=1}^{N} \mathrm{fit}_m(x_m)} \quad (4)$$
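A minimal Python sketch of formulas (2)-(4), with a hypothetical sphere objective standing in for the real one; the variable names are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_sources(N, lower, upper):
    # Formula (2): random initialization of N food sources within bounds.
    lower, upper = np.asarray(lower), np.asarray(upper)
    return lower + rng.random((N, lower.size)) * (upper - lower)

def fitness(f_val):
    # Formula (3): fitness from the objective value f_m(x_m).
    return 1.0 / (1.0 + f_val) if f_val >= 0 else 1.0 + abs(f_val)

def selection_probs(fits):
    # Formula (4): roulette-wheel probability for each food source.
    fits = np.asarray(fits)
    return fits / fits.sum()

f = lambda x: np.sum(x**2)                  # hypothetical objective
X = init_sources(10, [-5, -5], [5, 5])
p = selection_probs([fitness(f(x)) for x in X])
onlooker_choice = rng.choice(len(X), p=p)   # observation bee picks a source
```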
1.2.2 GLCM-GA

Gray-level cooccurrence matrix and genetic algorithm (GLCM-GA) is a hybrid construction of GLCM and GA [23], in which GLCM performs the extraction of image features and the model is optimized using GA; the experimental results are finally obtained using the feedforward neural network classification function. If the orientation of two pixels in an image and the distance between them are specified, their spatial relationship can be calculated using GLCM. The features of the image texture are represented using the image greyscale. The GLCM reflects the spatially relevant features of an image using four features: energy, entropy, contrast, and homogeneity. The GA mimics the Darwinian evolutionary selection process and is essentially an efficient, global search method. As the algorithm runs, it gains more and more valid information during the search and applies this information to the following iterations; by accumulating search information in this way, the algorithm becomes increasingly efficient. The algorithm consists of five main steps: initialization, the three genetic operations, and termination condition determination. As shown in Fig. 2, m refers to the evolutionary generation counter.

Fig. 2. Diagram of a genetic algorithm (initial group → selection → crossover → mutation → stop criterion; if not met, m = m + 1; otherwise termination).

1.2.3 WE-ELM

The most important feature of the extreme learning machine (ELM) is that it learns more efficiently than traditional learning algorithms, such as single-layer feedforward neural networks (SLFN). It is also more accurate and stable. The overall structure of ELM comprises an input layer, a hidden layer, and an output layer, each composed of one or more neurons. ELM is unidirectional because it satisfies a one-way propagation process: each neuron's output is the input of the next neuron [24]. A standard SLFN structure is shown in Fig. 3. In the figure, $w$ is the connection weight between the input layer and the hidden layer, $k$ is the output layer weight, $x_n$ denotes the data example, $y_m$ means the label corresponding to the data example, and the middle $O_i$ denotes the hidden layer.
Fig. 3. Single hidden layer neural network structure.
The network structure of the ELM model is the same as that of the SLFN, except that, instead of the traditional gradient-based algorithms long used in neural networks, random input layer weights and biases are used in the training phase. ELM training of an SLFN consists of stochastic feature mapping and a linear parameter solution.
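These two stages map directly to code. A minimal sketch, where the function names and the tanh activation are illustrative assumptions:

```python
import numpy as np

def train_elm(X, Y, hidden=50, seed=None):
    # Stage 1: random input weights and biases (never trained).
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], hidden))
    b = rng.standard_normal(hidden)
    H = np.tanh(X @ W + b)              # stochastic feature mapping
    # Stage 2: closed-form linear parameter solution via pseudoinverse.
    beta = np.linalg.pinv(H) @ Y
    return W, b, beta

def predict_elm(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```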
1.3 Deep Residual Network

The deep residual learning structure solves the degradation problem of network models, and image recognition techniques using this model can improve their recognition performance [25]. It is implemented by having the intermediate stacked network layers explicitly fit a residual mapping rather than fit a desired underlying mapping directly [26]. Therefore, in the network structure, there is no direct connection between the weights of a layer and the weights of its upper layers, which causes a convergence delay [27]. In the residual network, each residual block extracts valuable features, which simplifies network training. Figure 4 shows the general process of the residual structure. The input x, mapped from the previous layer of the network, is transformed by convolution, pooling, and striding operations to get H(x). Adding the input x to the transformed H(x), the resultant vector is H(x) + x. Each block optimizes its mapping by this method while retaining the original input x [28].
Fig. 4. Residual block (the identity reference x bypasses the function layer H and is added to H(x), giving the output H(x) + x).
1.4 Our Work

Usually, the training of forward neural networks is slow. This paper proposes a model combining RVFL and LBP for better diagnosis: first, feature extraction is performed using LBP, then classification results are obtained using RVFL. The hybrid network model improves the accuracy of diagnosis and shortens the diagnosis time. RVFL has a significant speed advantage: for the same learning content, it takes much less time to learn than a traditional forward neural network. Therefore, the model in this paper can diagnose lung images more quickly and efficiently, which is beneficial to disease control and contributes to the fight against Covid-19.
2 Dataset

In this research work, the CT configuration and image acquisition are performed as follows: Philips Ingenuity 64-row spiral CT machine; MAS 240; KV 120; layer thickness 3 mm; layer pitch 3 mm; pitch 1.5. The width of the lung window is 1500 HU, the length of the lung window is 500 HU, the width of the mediastinal
window is 350 HU, and the length is 60 HU; the lesions are then reconstructed from the lung images. The lung window images are generated with a layer thickness of 1 mm and a layer spacing of 1 mm. The person being examined lies on their back, stops breathing, and then takes a deep breath for a routine scan from the tip of the lungs to the costophrenic angle. The hierarchical selection method is used for each subject in the experiment to select between 1 and 4 slice sections. The stratified selection method is used for those with Covid-19; there are no restrictions for those without the disease, and any image level can be selected. The resolution of all images used in the experiment was 1024 × 1024. When the two experts' results $(J_1, J_2)$ differ, a senior physician ($\beta$) is asked to resolve the disagreement. Let $X$ refer to the CCT image scan and $N$ the label of each expert; the label can be found by the following equations:

$$N(X) = \begin{cases} N(J_1), & N(J_1) = N(J_2) \\ NV(N_{all}), & \text{else} \end{cases} \quad (5)$$

$$N_{all} = \left[N(J_1), N(J_2), N(\beta)\right] \quad (6)$$
where NV indicates a majority vote, Nall stands for All Experts label.
Fig. 5. An unprocessed Covid-19 image.
Figure 5 shows an unprocessed Covid-19 image. After preprocessing operations on the image (image scaling and normalization), feature extraction is performed on the CCT medical images. By image features we mean the semantic information contained in an image — for example brightness, edges, texture, and colors — obtained by transforming the image. Feature extraction can reduce the dimensionality and complexity of an image without losing image information, which benefits subsequent processing. In this paper, we use the LBP algorithm to extract the desired features from the Covid-19 chest image dataset. Figure 6 is a flowchart of the image preprocessing. In the preprocessing stage, grayscale images are obtained, followed by histogram stretching; the text at the edges of the images is then cropped out to obtain the cropped dataset (the four cropping variables TOP, BOTTOM, LEFT, and RIGHT are each set to 150). Finally, each image is downsampled to a size of 256 × 256. This operation saves considerable hardware resources.
Fig. 6. Preprocessing.
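A sketch of this preprocessing chain is given below; the min–max contrast stretch and the stride-based downsampling are simplifying assumptions standing in for the exact histogram and resampling operations used.

```python
import numpy as np

def preprocess(img, margin=150, out_size=256):
    """Grayscale -> histogram stretch -> crop text margins -> downsample."""
    if img.ndim == 3:                            # RGB -> grayscale
        img = img.mean(axis=2)
    lo, hi = img.min(), img.max()                # contrast (histogram) stretching
    img = (img - lo) / (hi - lo + 1e-8)
    img = img[margin:-margin, margin:-margin]    # crop TOP/BOTTOM/LEFT/RIGHT
    step = img.shape[0] // out_size              # naive downsampling by striding
    return img[::step, ::step][:out_size, :out_size]

ct = np.random.rand(1024, 1024)                  # stand-in for a 1024x1024 CCT slice
small = preprocess(ct)                           # (256, 256) array
```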
3 Methodology
3.1 LBP
LBP is a feature extractor often used to extract texture features [4]. It is a traditional texture analysis descriptor that has received attention over the last decade. LBP is robust to several factors, such as illumination shifts and scaling, and achieves state-of-the-art results in some applications. For example, in face recognition systems, LBP texture features are used as the input of deep networks, the network parameters are obtained by training the network greedily layer by layer, and test samples are predicted with the trained network [29, 30]. Compared with the eigenface method, the recognition rate of LBP is dramatically improved. When the LBP algorithm was first proposed, it was defined on a 3 × 3 pixel neighborhood. The algorithm works as follows: a pixel is selected as the central pixel and its value is taken as the threshold; the neighboring pixel values around it are then compared with the central value one by one, marking a neighbor as 1 if it is not lower than the central value and as 0 otherwise. The resulting code describes the pixel, and the LBP value reflects the texture characteristics of the image block, as Fig. 7 illustrates. More formally,

L(x_c, y_c) = Σ_{p=0}^{P−1} 2^p · s(i_p − i_c)   (7)

where (x_c, y_c) is the central pixel with value i_c, i_p is the value of the p-th peripheral pixel (P = 8 in the 3 × 3 case), and the sign function s(x) is

s(x) = 1, if x ≥ 0; 0, if x < 0   (8)
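The following is a direct NumPy rendering of Eqs. (7)–(8) for the 3 × 3 case; the clockwise neighbor ordering is one common convention and is assumed here.

```python
import numpy as np

def lbp_3x3(img):
    """Classic 3x3 LBP: threshold the 8 neighbors against the center pixel,
    per Eq. (8), and pack the bits into one 0-255 code per pixel, per Eq. (7)."""
    # clockwise neighbor offsets starting at the top-left pixel (assumed order)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.int32)
    center = img[1:-1, 1:-1]
    for p, (dy, dx) in enumerate(offsets):
        neighbor = img[1 + dy : h - 1 + dy, 1 + dx : w - 1 + dx]
        codes += (neighbor >= center).astype(np.int32) << p  # s(i_p - i_c) * 2^p
    return codes

# a 256-bin code histogram is a typical texture feature vector
feat = np.histogram(lbp_3x3(np.random.rand(256, 256)), bins=256, range=(0, 256))[0]
```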
If the correlation C(A_j; B_j) between the abnormal node and a directly associated node satisfies C(A_j; B_j) > T, it indicates that the associated node is affected by the abnormal node and an exception occurs. The calculation formula of the threshold T is as follows:

T = (1/2) · min_{ρ²(A_j; B_j) > s²} C(A_j; B_j)   (9)
In formula (9), ρ² refers to the determination coefficient and s² to the decision threshold. The threshold T distinguishes whether there is a strong correlation between the abnormal node and its directly associated nodes: if there is, the directly related nodes are affected by the abnormal node and are themselves abnormal. Detection of indirectly related nodes is then carried out, taking as research objects the indirectly related nodes at distance 2 from the abnormal node. When a single node is abnormal, direct detection is not only time-consuming but also resource-intensive, because the indirectly associated nodes are numerous and the topology is complex. To improve detection efficiency, the indirectly associated nodes to be detected must be further filtered. A large body of research shows that the importance of a node represents the degree of impact on performance when that node fails. In the cloud computing environment, if only important nodes are detected, some irrelevant nodes can be filtered out and detection accelerated; indirectly related nodes of high importance can thus be found quickly through node importance. Through the node importance evaluation index, the importance of each node can be computed and the associated nodes traversed in order of importance. However, in a cloud computing environment the number of nodes is huge: if every indirectly related node were traversed whenever a node became abnormal, the load on the magent would increase and the efficiency of the algorithm would drop. To improve the efficiency of the node anomaly detection algorithm, it is not necessary to traverse all indirectly related nodes; the algorithm only needs to remove the nodes of low importance and then traverse the remaining indirectly related nodes in order of importance.
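A sketch of this importance-based filtering is shown below, using NetworkX as a stand-in graph library; the equal weighting of degree and aggregation (clustering) coefficient and the retained share of nodes are assumptions.

```python
import networkx as nx

def rank_indirect_nodes(G, abnormal, alpha=0.5, keep=0.5):
    """Score nodes at distance 2 from the abnormal node by degree and
    clustering coefficient, drop the least important, return by importance."""
    dist = dict(nx.single_source_shortest_path_length(G, abnormal, cutoff=2))
    indirect = [n for n, d in dist.items() if d == 2]
    deg = dict(G.degree())
    max_deg = max(deg.values()) or 1
    cc = nx.clustering(G)
    # assumed importance: weighted mix of normalized degree and clustering
    score = {n: alpha * deg[n] / max_deg + (1 - alpha) * cc[n] for n in indirect}
    ranked = sorted(indirect, key=score.get, reverse=True)
    return ranked[: max(1, int(len(ranked) * keep))]  # traverse only the top share

G = nx.Graph([(0, 1), (1, 2), (2, 3), (2, 4), (3, 4), (1, 5), (5, 6)])
print(rank_indirect_nodes(G, abnormal=0))   # -> [2]
```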
402
Q. Liu and N. Wang
In order to detect the status of indirectly associated nodes, it is necessary not only to calculate node importance but also to obtain other relevant information. Data flow refers to the traffic rate of communication between a node and other nodes within a specific time period. Because nodes are often closely connected, the data flow values between nodes are similar and follow similar trends; a sudden, excessive change in data flow may therefore indicate abnormal node behavior. A density-based method is used to establish a detection model for normal data streams; it is an unsupervised method and can tolerate a small number of anomalies in the training data. Degree, aggregation coefficient and data flow are selected as the criteria for anomaly detection and defined as node attributes. A node's attributes represent its state: the degree and aggregation coefficient are used to compute node importance, and the data flow is used to detect indirectly related nodes that may be abnormal.
2.3 Reliability Assessment
The fuzzy comprehensive evaluation method is used to evaluate the reliability of the smart campus network topology. Fuzzy comprehensive evaluation is a comprehensive evaluation method based on fuzzy mathematics: taking the membership degree theory of fuzzy mathematics as its basis, it converts qualitative evaluation into quantitative evaluation, that is, it makes an overall evaluation of an object constrained by many factors using fuzzy mathematical theory. Its results are clear and systematic, so it can handle fuzzy, hard-to-quantify, non-deterministic evaluation problems; it is an important practical application of fuzzy mathematics with broad prospects. In reality, the evaluation of some schemes, achievements and technical levels can only be expressed in fuzzy language: for example, based on the factors considered and the relevant data and conditions, an evaluator may grade complex decision-making problems as "high, medium or low", "excellent, good, medium or poor", or "large, medium or small". Such evaluation problems can be calculated with the fuzzy comprehensive evaluation method, finally yielding a quantitative comprehensive result. The method can comprehensively summarize the advantages and disadvantages of each evaluation index and accurately reflect the comprehensive evaluation of the evaluated object. Using fuzzy comprehensive evaluation for the reliability of the smart campus network topology can summarize the various reliability indicators of the topology and comprehensively reflect the level of network transmission reliability.
The specific process includes: determining a fuzzy set composed of indicators that reflect the reliability level of the smart campus network topology (the factor set U); setting evaluation grades for these network reliability indicators, forming a fuzzy set for reliability indicator evaluation (the comment set V); obtaining, through experiments, the degree to which each reliability indicator belongs to each evaluation grade (the fuzzy membership matrix R); and then, according to the weight distribution of each reliability index in the evaluation of the smart campus network topology, obtaining the quantitative evaluation result by synthesizing the fuzzy matrices determined by the evaluation model.
The traditional fuzzy comprehensive evaluation method consists of the following five steps:
(1) Determine the evaluation factor set U. The indicators for evaluating the objects should reasonably and accurately reflect their performance.
(2) Determine the evaluation comment set V. Generally, the evaluation results are divided into several grades, thereby establishing the comment set.
(3) Establish the fuzzy membership matrix R. Each element of R represents the degree of membership of the specified index in each evaluation grade.
(4) Determine the index weight W. In fuzzy comprehensive evaluation, the index weights can be determined by AHP.
(5) Calculate the fuzzy comprehensive evaluation set S. After W and R are determined, S can be calculated.
To the traditional method, a sixth step is added here: calculate the efficiency index EI,

EI = Σ_{f=1}^{v} β_f · χ_f   (10)

In formula (10), β_f refers to the proportion of the performance parameter belonging to each grade; χ_f refers to the quantitative value corresponding to each grade of data collection performance; v refers to the number of grades. Determining the final result by the principle of maximum membership alone still has defects; to solve this problem, the efficiency index EI is calculated using the comprehensive index method. Using EI to evaluate the data collection performance of the smart campus network topology not only considers the evaluation coefficient of every index comprehensively, but also distinguishes which of two objects in the same grade has the better data collection performance.
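The six steps can be traced numerically with the small sketch below; the indicator names, weights, membership values, and grade scores are invented illustrative data, not measurements from this study.

```python
import numpy as np

# Steps 1-2: factor set U (reliability indicators) and comment set V (grades)
U = ["connectivity", "invulnerability", "recoverability"]
V_scores = np.array([95, 80, 60, 40])        # assumed quantitative value per grade

# Step 3: fuzzy membership matrix R (rows: indicators, cols: grades), from tests
R = np.array([[0.6, 0.3, 0.1, 0.0],
              [0.4, 0.4, 0.2, 0.0],
              [0.3, 0.4, 0.2, 0.1]])

# Step 4: index weights W (e.g. from AHP); Step 5: evaluation set S = W . R
W = np.array([0.5, 0.3, 0.2])
S = W @ R
S = S / S.sum()                              # normalized grade proportions beta_f

# Step 6: efficiency index EI = sum_f beta_f * chi_f  (Eq. 10)
EI = float(S @ V_scores)
print(S.round(3), round(EI, 2))              # e.g. [0.48 0.35 0.15 0.02] 83.4
```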
3 Experimental Test
3.1 Establishment of the Experimental Environment
The performance of the cloud computing-based reliability modeling and evaluation method for smart campus network topology is tested through experiments. The campus of the experimental school is about 500 m long and 400 m wide, covering roughly 200,000 m², with about 300 faculty members and 8,000 students. The school's functional areas are planned reasonably and clearly, and the teaching, administration and accommodation areas are relatively independent. By area, the school is divided into administrative office buildings, teaching buildings, a library, conference rooms, a stadium, a canteen,
teachers' dormitories and students' dormitories. There are two office buildings: one has 5 floors with 10 rooms per floor and 4 office positions per room; the other has 12 floors with 5 rooms per floor and 8 office positions per room. The data center is located on the third floor of the 12-storey administrative office building, with an area of about 250 m². There are 7 teaching buildings, each with 5 floors and 5 rooms per floor, and one laboratory building with 5 floors and 10 rooms per floor. There are 5 conference rooms, including 3 small conference rooms, 1 large conference room and 1 multimedia studio. The library has five floors, including a borrowing room, a periodical reading room and an electronic reading room. The sports venues include indoor courts, outdoor courts, basketball courts and volleyball courts. The teachers' dormitory building has 30 single or double rooms. There are 7 student dormitories, each with 5 floors, 30 rooms per floor, and 6–8 people per dormitory. There is one canteen. The existing campus network was built in step with the campus over a construction period of ten years. Its structure is a three-layer network of core switch + aggregation switch + access switch: the core switch is located in the data center machine room, the aggregation switches in the weak-current wells on the first floor of each building, and the access switches in the weak-current wells on each floor. The aggregation switches connect to the core switch through optical fiber uplink interfaces, and the access switches connect to the aggregation switches through uplink ports. On this basis, the smart campus network of the university is built, composed of an infrastructure layer, a support platform layer, an application platform layer, application terminals, and an information system security system. Using the design method, the reliability of the topology of the experimental smart campus network is modeled and evaluated, and the performance of the design method is tested. In the test, three previously introduced methods serve as comparison methods, denoted Method 1, Method 2 and Method 3.
3.2 Performance Testing
First, the evaluation accuracy of the design method and the three comparison methods is tested; the results are shown in Table 1. According to Table 1, the evaluation accuracy of the design method reaches 98.65%, while Methods 1, 2 and 3 reach 96.52%, 97.10% and 94.63% respectively, so the design method has the highest accuracy overall. On this basis, the evaluation times of the four methods are compared; the results are shown in Fig. 1. According to Fig. 1, the evaluation time of the design method is below 1000 s overall, and the evaluation times of Methods 1, 2 and 3 are all higher, indicating that the design method is the fastest. Finally, the modeling performance of the four methods is tested and their modeling times compared to verify modeling efficiency; the results are shown in Fig. 2.
Table 1. Accuracy comparison of different methods in assessing the reliability of the campus network topology structure (average assessment accuracy/%)

Serial number | Test times | Design method | Method 1 | Method 2 | Method 3
1 | 50  | 98.25 | 95.30 | 94.63 | 94.20
2 | 100 | 98.62 | 94.63 | 93.24 | 92.30
3 | 150 | 98.41 | 94.68 | 94.32 | 92.04
4 | 200 | 98.65 | 94.75 | 92.54 | 90.21
5 | 250 | 98.63 | 95.20 | 97.10 | 94.63
6 | 300 | 98.52 | 96.30 | 96.01 | 93.14
7 | 350 | 98.48 | 96.52 | 93.54 | 94.30
Fig. 1. Time comparison test results of different methods to evaluate the reliability of smart campus network topology structure
According to the modeling time test results in Fig. 2, the design method has the lowest modeling time, while the other three methods take longer, indicating that the design method has the highest modeling efficiency.
Fig. 2. Time-consuming comparison of smart campus network topology modeling under different methods
To sum up, because the method in this paper checks the abnormal status of single nodes and associated nodes using cloud computing, interfering nodes are excluded from the evaluation, thereby improving accuracy. Combined with the fuzzy comprehensive evaluation method, the reliability of the smart campus network topology is evaluated, which shortens both the evaluation and the modeling time.
4 Conclusion
In order to improve the security of the smart campus network, this paper studies a cloud computing-based reliability modeling and evaluation method for smart campus network topology. The topological structure of the campus network is constructed with OPNET software; based on this topology, abnormal nodes of the topology model are detected through cloud computing, and the fuzzy comprehensive evaluation method is used to evaluate topology reliability. The test results show that the evaluation and modeling times of the method are shorter and its evaluation accuracy higher. With the growth of campus network data, network storage space can easily be exhausted; in future work we will further streamline the topology to relieve storage pressure and improve campus network reliability.
Intelligent Retrieval Method of Massive Music Information Resources Based on Deep Learning
Yi Liao1(B) and Lin Han2
1 Music Department, Xi'an Shiyou University, Xi'an 710065, China
[email protected] 2 Nanchang JiaoTong Institute, Nanchang 330100, China
Abstract. With the increase of data storage capacity and the development of transmission technology, the amount of music has grown at an unprecedented rate. This explosive growth makes it increasingly difficult to find interesting music clips in such a huge library of music information resources, so studying intelligent retrieval methods for massive music information resources is of great significance. A word segmentation tool is used to segment and tag the music information resources. Based on the segmentation and part-of-speech tagging, a feature extractor based on deep learning is built to obtain the features of massive music information resources, and the resources are annotated using an approximate matching method. Combined with the feature annotation results, the size of the resource buffer is determined to build an intelligent hierarchical index. For the different hierarchical indexes, sparse reconstruction technology is used to preprocess the information resources, and pseudo-relevance feedback is used to expand the cache query. Based on the expanded query, the content entered by the user is matched against expansion words with meanings similar to the keywords to obtain a matching cost; results whose matching cost is below a set threshold are fed back to the user through a feedforward neural network, yielding the retrieval results for massive music information resources. The experimental results show that the maximum expansion coefficient of this method reaches 0.99, the retrieval speed is fast, and the maximum error between the retrieval recall and the actual data is 0.30 kB.
Keywords: Deep Learning · Music Information Resources · Intelligent Retrieval
1 Introduction
With the rapid development of the Internet and the information society, intelligent, globalized and ungrouped information is everywhere in daily life, and the information age enables users to transmit resources in ever more convenient ways. Text search has matured, but sensory information expressed in other forms is difficult to describe clearly. If the characteristics of a given type of information can be identified for matching and retrieval, this is arguably the best way to improve efficiency and save resources. Sound information is an important medium in life, accounting
for about 25% of the total amount of information, and music is one of the most common sound media. With the development of the Internet era, people require music information retrieval to be faster, more accurate, more convenient and more efficient. However, music has many characteristics of its own, such as tone, melody, timbre and sound intensity; unlike traditional text, which can be described intuitively, the characteristics of music itself are difficult to specify. Audio text retrieval is common in daily life: music can be retrieved by entering song names, lyrics, and so on. As far as music is concerned, its primary message is its own melody, and its secondary message is descriptive text. Everyone has had the experience of a song appearing in their mind of which they remember only the melody, not the name; people often want to use the melody they remember to find the song they want, and this is exactly the task of music information retrieval. Adopting better information technology to process music signals has become an inevitable trend and now attracts wide attention. How to quickly and accurately extract the needed information from a large amount of information has very important application prospects. Traditional retrieval technology often matches similar resources within certain modalities, and the obtained resources are mostly homomorphic resources of high similarity; hence it is necessary to study retrieval over massive music information resources. Some studies propose a retrieval method driven by approximate resource matching, which constructs a convolutional neural network model, uses approximate matching results to extract music information features, and measures multi-tag similarity according to the concept of conditional entropy, thereby retrieving massive music information resources. Other studies propose a retrieval method combining on-chain and off-chain storage: blockchain and distributed storage technology are combined to achieve decentralized data storage, and data retrieval interfaces are provided to managers to achieve integrity-preserving retrieval of massive music information resources. Owing to the correlation between different music information resource data, although the above two methods can retrieve data effectively, it is difficult for them to determine the characteristics of different massive music information resources and the size of the resource buffer, so relevant indexes cannot be built and the intelligent retrieval effect declines. To solve this problem effectively, this paper introduces deep learning and proposes a new intelligent retrieval method for massive music information resources. The concept of deep learning and its application were the biggest breakthrough in the field of speech recognition in the 21st century: in 2011, Microsoft first applied deep neural networks to speech recognition with remarkable results, and deep neural networks have attracted increasing attention in the field ever since.
Considering that deep learning has the advantages of strong learning ability, wide coverage, strong adaptability and good portability, it can effectively determine the characteristics of different massive music information resources and the size of the resource buffer, so as to realize the intelligent retrieval of massive music information resources, which has high practical application value.
2 Feature Extraction of Massive Music Information Resources Based on Deep Learning
For feature extraction of massive music information resources, text features should be extracted first, and redundant function words eliminated. Because the information objects in the massive music information resource library are described by the same multimodal information resource specification, the internal nodes of each object's multimodal standard tree are, in different cases, part of the multimodal specification, and the differences lie in the element values on the leaf nodes. Comparing the query tree with the standard tree shows that, when matching the query tree against the standard tree data, it is unnecessary to match the metadata of the object against the query tree data [1, 2]. When the query tree matches the metadata tree, since no node on a given subtree can match a node in the query tree, the matching of the subtree rooted at that node need not be considered. Let A and A′ be two unordered labeled trees; the edit distance between them is

d(A, A′) = min_f { |f| : A →_f A′ }   (1)

In formula (1), f is the edit sequence mapping. Therefore, before querying, the query tree is first matched with the multimodal data standard scheme tree of the resource target database, and the matching information of the associated nodes (i.e. the preprocessing information) is recorded [3]. By analyzing this information, a large number of non-associated node matches can be avoided in subsequent matching between the query tree and the multimodal data standard tree, avoiding unnecessary duplication [4]. The tagging method is used to extract the word segmentation features of the music information resources marked by the segmentation tool, and a (term frequency – inverse document frequency) weighting is used to calculate the feature weight of music information resources:

w = η × log₂(ϑ / n)   (2)
In formula (2), η represents the number of times the information to be retrieved appears in the retrieval document. Generally speaking, the most important information in music information resources is the information with the highest frequency across all resources, and resource word frequency is usually used to measure the characteristics of similar texts; the lower the frequency of a resource term, the better it identifies a resource class [5]. In a standard restricted Boltzmann machine, every observed variable is related to different parameters in the hidden layer; viewing the model pictorially makes it easier to explain and understand [6, 7]. As image dimensionality increases, the number of connection weights in deep learning also becomes very large, making training and updating more complicated and slower. In fact, only a few parameters are needed to describe spatially local features, and these parameters also play
an important role in extracting information resource features [8]. Therefore, in order to solve these problems, a deep learning-based feature extractor is constructed, as shown in Fig. 1.
Fig. 1. Feature extractor based on deep learning
As can be seen from Fig. 1, each feature-map sub-matrix of the deep learning feature extractor corresponds to one feature extraction path. The extractor connects the visible-layer units and the hidden-layer units through K 3 × 3 filters. The hidden-layer units are divided into K groups, called the sub-matrices of the feature map. The specific feature extracted in each 3 × 3 visible-layer neighborhood is represented by one hidden-layer unit, and identical features at different places in the visible units are represented by units within the same feature map [9]. With preprocessing, sentence-class hypothesis and detection, and semantic-block composition as processing links, natural language sentences are input and the corresponding sentence-class representation and word description are output. The specific extraction steps are as follows: first, preprocessing is performed; then possible information resources are hypothesized based on the concepts appearing in all the information in the resources, and the type of an information resource is judged from the conceptual knowledge it contains; finally, semantic blocks are used to judge the resource. If any of the above steps fails during extraction, backtracking occurs, and "hypothesis" and "detection" are repeated until the test succeeds. Query expansion is a key technology of semantic retrieval: by adding words or concepts related to the semantics of the original query, the expanded query is richer than the original, improving the efficiency, recall and accuracy of document retrieval [10]. Semantic information related to professional knowledge is extracted by comparing the relevant content with user needs; for phrases not found in the resource library, a universal semantic dictionary is used to expand their cross-language semantics, presented to users in tabular form for self-identification. The query
string is expanded into a search engine query, and the query results are clustered and presented to the user.
3 Resource Annotation Based on Approximate Matching
In the process of retrieving massive music information resources, the principle of tree matching is introduced and a multi-level, multimodal approximate matching model is established. In this model, tree matching realizes the modal matching of different data through constraint mapping, by mapping the nodes between two trees. Combined with the structural characteristics of massive music information resources, the concepts of structure-based search and semantic search are proposed, the affinity constraint principle is introduced into them, and a retrieval framework based on the approximate matching model, shown in Fig. 2, is constructed.
Fig. 2. Retrieval framework based on approximate matching model
According to Fig. 2, massive music information resource retrieval is divided into three levels by content, structure and function. The first layer is the application layer, which provides an accessible interface for users to query information resources. The second layer is the service layer: during the retrieval of information resources, multiple servers define user access rights, ensure network security, provide identity registration interfaces, and supply data backup and management functions. The third layer is the data layer, which holds the information of multiple data agents. The expression of information resources is diversified. Under this retrieval framework, the approximate matching principle is used to calculate, through data association, the connection value between query tree resources and standard tree information resources:

λ = ϑθ / √(a² + b²)   (3)
In formula (3), a and b represent the data values in the horizontal and vertical directions of the query tree respectively; ϑ stands for the massive music information resources; θ represents the deflection angle between the query tree and the massive music information resources. According to the actual demand for information resources, a standardized description method is adopted to analyze the standard tree of massive music information resources, set the standard threshold range, classify the resources uniformly, and determine the relationship between the query tree data and the multimodal data of the standard tree. Deep learning technology is used to detect the matched resources; it adapts to scanning, photographing and other acquisition conditions with high recognition accuracy. Since the network structure of the deep learning algorithm is relatively simple, overfitting to specific resources can be avoided. Blocks are generated from the feature map of the backbone network and, after a series of subsequent processing steps, form a resource area in which the effective resources are marked. The detailed marking process is as follows:
Step 1: Obtain feature resource databases of different sizes by sampling the input resources at different levels, and construct a feature pyramid from them to obtain a feature resource database of uniform size. Preprocess the marked candidate resources according to the weights of the index feature components, and determine the features of the feature resource library (a small numeric sketch of this weighted fusion follows the step list below):

R = (ω₁x₁ + ω₂x₂ + ω₃x₃) / (ω₁ + ω₂ + ω₃)   (4)
In formula (4), ω1 , ω2 and ω3 respectively represent the weights of the input layer, hidden layer and output layer in the deep learning network; x1 , x2 , x3 respectively represent the input features of the input layer, hidden layer and output layer in the deep learning network. Step 2: Use the feature resource database under the same scale for training, and the training results can effectively distinguish between effective resources and invalid resources, reducing the overlap between information resources; Step 3: The boundary supervision is introduced into the threshold resource database and used as the threshold of the unified scaling feature resource database; Step 4: Annotate the resource library based on in-depth learning of the target annotation model; Step 5: Obtain the specific content of multiple resource databases; Step 6: In each contract section, determine the feature vectors of the paragraphs respectively, and combine them in a certain order to generate the corresponding paragraph feature vectors; Step 7: Adopt the paragraph feature vector sequence as the input value of the paragraph labeling mode, so that the labeling model outputs a label sequence consistent with the prediction result, and the prediction structure of each resource segment is determined according to the order of the music information resources; Step 8: In the resource library, the length of the resource is equal to the length of the predicted structure label, and the label includes the title, the content of the clause and the content of the resource.
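As promised above, a minimal numeric sketch of the weighted fusion in formula (4); the weights and layer features are placeholders.

```python
import numpy as np

def fuse_features(x, w):
    """Weighted average of layer features (Eq. 4):
    R = (w1*x1 + w2*x2 + w3*x3) / (w1 + w2 + w3)."""
    x, w = np.asarray(x, dtype=float), np.asarray(w, dtype=float)
    return (w[:, None] * x).sum(axis=0) / w.sum()

# x1, x2, x3: features of the input, hidden and output layers (placeholders)
x_layers = np.array([[0.2, 0.8], [0.5, 0.1], [0.9, 0.4]])
R = fuse_features(x_layers, w=[0.5, 0.3, 0.2])   # fused feature vector
```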
4 Intelligent Hierarchical Index Construction
To obtain massive music information resources, abnormal data must be filtered out; the abnormal behavior of massive music information resources manifests as resource anomalies. Therefore, a hierarchical index construction method based on deep learning is proposed. Abnormal resources are data generated by abnormal indexing behavior, and the focus of abnormal-index tracking is to find the abnormal index path and locate the abnormal resources. When the resource index runs stably, resource exceptions occur within a bounded time; for this situation, the resource sampling interval should be calculated so as to locate the abnormal resources in time. The size of the resource buffer obtained by the deep learning method determines the sampling interval, which is the index construction time. If the length of the buffer is L, the probability that the buffer is not full within time t can be expressed as

P₂ = Σ_{n=1}^{m} (p₁t)^t / n   (5)
In formula (5), n represents the resource statistics results; p₁ indicates the index success rate after a user request; m indicates the number of indexes. The sampling interval recorded for the buffer length of the entire buffer area can be expressed as

T = e^{−Ln/l}   (6)
In formula (6), l represents the unit record length. From the above formula the sampling interval, i.e. the deep learning-based index construction time, can be obtained. If the indexes of a certain period are classified, the information resources of each stage can be obtained directly during querying, saving considerable query and evaluation time; index information can thus be obtained effectively at any time without over-expanding the index size. Based on this, an inverted index over the time domain is constructed, as shown in Fig. 3. As Fig. 3 shows, the time-domain inverted index structure contains three kinds of data: the inverted table (kn, tn), the term dictionary (kn) and the term index (tn). When querying music information resources within a period of time, an original music information resource can be obtained with common query techniques via the term dictionary and term index; the two inverted tables are then merged, keeping the order of the first, to obtain the files related to the time period. To shorten hierarchical index construction time, the sampling interval is divided into k sub-time-periods t₁, t₂, …, t_k, and dynamic statistical rules are applied to all of them to obtain k statistical results, forming the temporal variation relationship of the music information resources. For the division of sub-time-periods, when the dynamic statistical time is the same, the statistical results of each sub-time-period
Fig. 3. Inverted index structure based on time domain
are consistent, so that the division results can be quickly obtained, the index grading time can be shortened, and the purpose of quickly building a hierarchical index can be achieved.
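The description of Fig. 3 suggests an index along the following lines; this sketch is an interpretation of the structure, so the data layout is an assumption.

```python
from collections import defaultdict

class TimeInvertedIndex:
    """Inverted index whose postings carry timestamps, so a query can be
    restricted to a sub-time-period (cf. Fig. 3: term dictionary kn plus
    (kn, tn) inverted tables)."""
    def __init__(self):
        self.postings = defaultdict(list)        # term -> [(timestamp, doc_id)]

    def add(self, doc_id, timestamp, terms):
        for t in terms:
            self.postings[t].append((timestamp, doc_id))

    def query(self, term, t_start, t_end):
        # keep postings order, filter by the requested time window
        return [d for ts, d in self.postings.get(term, []) if t_start <= ts <= t_end]

idx = TimeInvertedIndex()
idx.add("song1", 3, ["piano", "jazz"])
idx.add("song2", 7, ["piano"])
print(idx.query("piano", 0, 5))                  # -> ['song1']
```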
5 Intelligent Retrieval Based on Deep Learning
5.1 Information Resource Preprocessing
Sparse reconstruction technology is used to preprocess information resources. Taking the sparse representation of information resources as a prerequisite, reasonably constructing the measurement matrix and using a signal reconstruction algorithm are the keys to accurate estimation. In the wireless positioning mode, the signal can be represented sparsely by selecting an appropriate sparse representation matrix, which can be constructed directly in a sparse manner. The entire time domain is divided into m equal parts according to delay. Assuming that each division of the delay corresponds to a potential path, the resulting sparse gain vector can be expressed as

z = [z₀e^{−ucg₀}, z₁e^{−ucg₁}, …, z_m e^{−ucg_m}]   (7)

In formula (7), [z₀, z₁, …, z_m] represents the sparse vector; u represents the number of divisions; c represents the carrier frequency; g_m represents the time-domain division result. Each sparse vector element corresponds to a potential path. To reflect the sparsity of information resources in the time range, more potential multipaths are required than actual multipaths; the gain sparse vector is therefore a sparse information resource gain coefficient, completing the information resource preprocessing.
5.2 Pseudo-Relevance Feedback
To obtain more query results, expansion words with meanings similar to the keywords are selected based on the user's initial query content and combined
with the input content to form expanded query terms, improving the richness, accuracy and completeness of the query content. Pseudo-relevance feedback effectively realizes query expansion: the expansion words obtained through this technique maximally expand the query content, and extracting them further enriches the query. The number and accuracy of the expansion words are determined by the top-ranked online music information. Pseudo-relevance feedback rests on the assumption that the top-ranked online music information in the results of the user's original query is indeed related to the keywords the user entered. On this basis, the initial sorting of query results proceeds as follows:
Step 1: Sort the query results for the first time according to the original content entered by the user, and select the k pieces of online music information ranked highest by content relevance.
Step 2: Extract content keywords from these k pieces of online music information, and take the two most frequent words as expansion words related to the user input.
Step 3: Perform a second query with the new query phrase formed by combining the user's keywords with the expansion words, obtaining new query results.
Step 4: Use the search engine's query likelihood model to sort the above results initially; the sorting basis, i.e. the relevance, is calculated as

log σ(S) = Σ_{i=1}^{m} log((χ_x + ν·p_i) / (|S| + ν))   (8)

In formula (8), χ_x represents word frequency; ν represents the smoothing parameter required by the Dirichlet prior; p_i represents the distribution probability of the key information in the data set; S represents the music similarity of the initial sorting.
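Steps 1–3 can be sketched as below; the overlap-count scorer is a toy stand-in for the search engine's query likelihood model of formula (8), and k and the document set are illustrative.

```python
from collections import Counter

def expand_query(query_terms, docs, k=5, n_expansions=2):
    """Pseudo-relevance feedback: assume the top-k results are relevant,
    mine their most frequent terms, and append them to the query."""
    def score(doc):                      # stand-in relevance: term overlap
        return sum(doc.count(t) for t in query_terms)

    top_k = sorted(docs, key=score, reverse=True)[:k]
    counts = Counter(t for doc in top_k for t in doc if t not in query_terms)
    expansions = [t for t, _ in counts.most_common(n_expansions)]
    return query_terms + expansions      # expanded query for the second pass

docs = [["love", "song", "piano"], ["piano", "jazz"], ["rock", "song", "piano"]]
print(expand_query(["piano"], docs))     # -> ['piano', 'song', 'love']
```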
5.3 Intelligent Search
With the support of the approximate matching process for massive music information resources, the retrieval process is designed as follows. First, the user specifies the matching type in the query tree. After preprocessing, the corresponding matching algorithm is called, using the obtained preprocessing information, to find the cost of matching against the standard tree. According to this cost, results whose matching cost is below a set threshold are fed back to the user, where the threshold can be set as

ε = Σ_{i∈v} ϕ·ω_i   (9)
In formula (9), ϕ represents the approximate matching cost; ωi represents the node weight; i represents the number of nodes; v represents the label value. This threshold is equivalent to one half of the cost of removing the entire query tree. It is a preset
threshold. When no specific type is specified, the resource target metadata specification scheme tree is first used to preprocess the query tree; on this basis, using the obtained matching preprocessing information, the retrieval tree and the standard tree are approximately matched, compared with the query tree embedding results, and the results are fed back to the user. Because erroneous data arise during the intelligent retrieval of resources, they must be corrected using deep learning methods: the attention model is input into a feedforward neural network for training. The structure of the deep learning-based feedforward neural network is shown in Fig. 4.

Fig. 4. Correction structure based on deep learning
As Fig. 4 shows, the deep learning-based feedforward neural network consists of an input layer, a hidden layer and an output layer; a characteristic matrix can be obtained from the training results, completing the information correction of the input part. The information resource set is taken as the input sample set, and an information resource dictionary is obtained through the same coding steps. The dictionary is used to encode the sample set, and the processed results are trained for word embedding to obtain a tag embedding matrix with positional coding. Two attention models are needed here: they are stacked and input into the feedforward neural network, whose output serves as the input of a softmax function, yielding a probability. The prediction with the highest probability is selected; by comparison with the dictionary, the corresponding information resources are obtained and the output part of the correction model is completed. The selected samples are coded to obtain the required samples; embedding the input samples yields the tagged embedding matrix; the input is the input sample set, and the output is the embedding matrix; the correction model is obtained through the training steps. With the data processing method based on the input model, the speech to be corrected is vectorized and fed into the trained correction model to obtain the corresponding corrected information resources, i.e. the intelligent correction results. For a given set of massive music information resources, according to the principle of maximum likelihood estimation, the following log-likelihood function is obtained:

f(x) = arg max_{θ_i} Σ_{i=1}^{j} log γ(X_j | x, θ_i)   (10)
In formula (10), X_j represents the modal data under j training iterations, and γ is the semantic concept prior distribution. The optimal estimate of the prior parameters is obtained by maximizing formula (10). Since the set of information resources follows a multinomial distribution over the prior parameters, the prior parameter estimates can be obtained via the Lagrangian operator. For the multimodal resource generation process to be estimated efficiently, generating multimodal resources from semantic vectors must follow a Gaussian distribution: in all resource sets, semantic concepts follow Gaussian distributions whose feature covariance matrices are consistent with the set covariance matrix, ensuring that the retrieval process has an optimal solution. In multimodal joint retrieval, the acquired resources are all multimodal; compared with traditional retrieval methods, intelligent retrieval of music information resources is more efficient. Suppose the query resource set x_n consists of n data items and the set to be retrieved y_k consists of k resources; the similarity between x_n and y_k can be calculated as

Sim(x_n, y_k) = p(x_n, y_k, x_k, y_n) / p(x_n, y_k)   (11)
In formula (11), x_k represents the set of k resources and y_n the set of n resources. Once the relationship between the target retrieval resources and the query resources is obtained, the resources are sorted by similarity in descending order; the non-duplicated, top-ranked items are the retrieval results.
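The final ranking step might look as follows; cosine similarity is used here only as a stand-in for the joint-probability similarity of formula (11), and all vectors are random placeholders.

```python
import numpy as np

def retrieve(query_vec, candidates, top_n=5):
    """Rank candidate resources by similarity to the query, largest first,
    removing duplicates; the top items are the retrieval results."""
    def cosine(a, b):                    # stand-in for the Sim of Eq. (11)
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    scored = sorted(((cosine(query_vec, v), name) for name, v in candidates),
                    reverse=True)
    seen, results = set(), []
    for s, name in scored:
        if name not in seen:
            seen.add(name)
            results.append((name, round(s, 3)))
    return results[:top_n]

rng = np.random.default_rng(0)
q = rng.random(8)
cands = [(f"track{i}", rng.random(8)) for i in range(10)]
print(retrieve(q, cands, top_n=3))
```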
6 Experiments
6.1 Experimental Setup
Converting the query interface into a user-program calling interface allows indexes to be generated independently, reduces the impact of users on the system, and lets the system focus on indexing. On this basis, the structure of the retrieval system is constructed, as shown in Fig. 5. As Fig. 5 shows, the system is mainly composed of an index server and a Web query interface. The index server is an independent system that allows users to set the index type, index data and index records; it indexes the relevant data in the way set by the user without affecting other users. The Web query interface can be seen as an interface that can be embedded directly into the user's application program without modification. It mainly includes the program pages — index query, query application, and data query — whose main functions are Web index query, desktop application index query, and database index query. The program interfaces differ for different users: a user can create an index mark, write their own query statements according to the index marks, and also make small changes to the index marks and embed them in their own systems.
Fig. 5. Retrieval system structure
6.2 Experimental Dataset
The music query results of the HIFIVE network platform are selected to establish the experimental sample set. The platform database stores 2.8 million pieces of music data; each piece contains rich content information and various social information added by users. This information falls into two types of tag domains: music content information such as <publisher> and <title>, and social information such as <similarproducts> and <tags>. A certain number of query records from 2017 to 2020 are screened from the platform database to build the training set and test set of the system; their respective data compositions are shown in Table 1.

Table 1. Experimental dataset

Group | Training set | Test set
A | 250 queries (2017) | 100 queries (2018)
B | 250 queries + 100 queries (2017–2018) | 400 queries (2018)
C | 250 queries + 100 queries + 700 queries (2017–2018) | 700 queries (2019)
6.3 Importing Information Sources
Since the information source is external data, the basic parameters of the information source must first be imported. Figure 6 shows the import process of the information source.
Fig. 6. Information source import implementation process
As Fig. 6 shows, the user fills in the parameter information of the HIFIVE platform information source in the view layer, and it is transmitted to the model layer in the form of a URL. The method obtains the URL by calling the Controller function, parses it, and passes the parsing result to the model layer as the return value; the model layer then parses the information with the appropriate loading amount according to the judgment of the return value.
6.4 Experimental Indicators
For massive music information resources, the more index resources there are, the larger the expansion coefficient and the faster the retrieval. If no retrieval index is created for the massive music information resources, the expansion coefficient is 0; if the index resources are as large as all the resources, the expansion coefficient is 1. The index expansion coefficient is

α = M / N   (12)
In formula (12), M represents the size of the index information resources; N represents the size of all information resources. The evaluation criteria for intelligent resource retrieval are the classic indicators in data retrieval, namely recall and retrieval error:

P_N = (N_a / N) × 100%   (13)

e = ĥ_x − h_x   (14)

In formulas (13) and (14), N_a represents the required data retrieved; ĥ_x represents the number of data items that should be retrieved; h_x represents the amount of data actually retrieved.
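Formulas (12)–(14) reduce to the following small helpers; the input numbers are illustrative only.

```python
def expansion_coefficient(index_size, total_size):
    return index_size / total_size                  # alpha = M / N      (Eq. 12)

def recall(n_required_retrieved, n_total):
    return n_required_retrieved / n_total * 100     # P_N = Na/N * 100%  (Eq. 13)

def retrieval_error(n_expected, n_retrieved):
    return abs(n_expected - n_retrieved)            # e = h_hat - h      (Eq. 14)

print(expansion_coefficient(99, 100),   # 0.99, the best case reported below
      recall(92, 100),                  # 92.0 (%)
      retrieval_error(130.3, 130.0))    # 0.3 (kB)
```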
6.5 Experimental Results and Analysis
Using the retrieval method driven by approximate resource matching, the retrieval method based on the combination of on-chain and off-chain storage, and the retrieval method based on deep learning, retrieval speed is compared and analyzed; the comparison results are shown in Fig. 7.
Fig. 7. Comparative analysis of retrieval speed of different methods
As Fig. 7 shows, with the retrieval method driven by approximate resource matching, the expansion coefficient peaks at 0.80 on the sixth retrieval and bottoms out at 0.70 on the second; with the method combining on-chain and off-chain storage, it peaks at 0.75 on the sixth retrieval and bottoms out at 0.55 on the tenth; with the deep learning-based retrieval method, it peaks at 0.99 on the tenth retrieval and bottoms out at 0.90 on the second, showing fast retrieval. To verify retrieval integrity, the retrieval recall of the three methods is compared; the results are shown in Fig. 8. As Fig. 8 shows, the retrieval results of the approximate-resource-matching method and the on-chain/off-chain method are inconsistent with the actual data, with a maximum difference of 5.0 kB at the fourth retrieval; the retrieval results of the studied method are basically consistent with the actual data, with a maximum error of only 0.30 kB at the tenth retrieval.
Fig. 8. Comparative analysis of retrieval recall rate of different methods
The retrieval errors of the three methods are analyzed respectively; the results are shown in Fig. 9.
Fig. 9. Comparative analysis of retrieval errors of different methods
As can be seen from Fig. 9, the maximum retrieval error of the approximate-resource-matching method is 60%; that of the on-chain/off-chain method is 65%; and that of the deep learning-based method is only 10%.
7 Conclusion
To effectively improve the quality of online music query results, an intelligent retrieval method for massive music information resources based on deep learning is proposed and designed. Pseudo-relevance feedback is used to initially
expand the search scope of online music and obtain the initial sorting results. Deep learning theory is introduced into this matching process, and the convolutional deep belief network model in deep learning is studied in depth; the retrieval efficiency of the deep learning model is also compared experimentally. The results show that this method greatly improves the accuracy of online music information queries and provides users with a good experience. In the future, we need to consider how to recommend search results to music lovers, so as to improve the efficiency with which users use online information resources, make full use of online music information resources, and promote the further development of music information resource retrieval.
References
1. Chen, L., Liu, Y.: Design of interactive information retrieval algorithm based on user demand mining. Comput. Simulat. 39(5), 418–422 (2022)
2. Wang, H., Liu, L., Lin, M., et al.: Music personalized recommendation algorithm based on k-means clustering algorithm. J. Jilin Univ. Eng. Technol. Edn. 51(5), 1845–1850 (2021)
3. Li, W., Zhang, Z., Wang, J.: Storage and retrieval method of massive small files based on EHDFS. Comput. Eng. Design 43(2), 376–383 (2022)
4. Chen, Z., Zhang, M., Guo, M., et al.: Music lyrics-melody intelligent evaluation algorithm based on sequence model. CAAI Trans. Intell. Syst. 15(1), 67–73 (2020)
5. Jin, S., Qin, J.: Music visualization intelligent design based on image style migration. Packaging Eng. 41(16), 193–198 (2020)
6. Li, Y., Guo, J., Yu, Z., et al.: Improve low-resource cross-language information retrieval by constructing pseudo query sentences based on word mapping. J. Shanxi Univ. (Nat. Sci. Edn.) 45(2), 322–331 (2022)
7. Liang, S., Zhu, H., Wu, D.: Cross-lingual information retrieval based on named entity recognition and translation of public digital cultural resources. Library Develop. 1, 87–95 (2022)
8. Lu, W., Tao, C., Li, H., et al.: A unified deep learning framework for urban functional zone extraction based on multi-source heterogeneous data. Remote Sens. Environ. 270(1), 112830–112842 (2022)
9. Wang, W., Sohail, M.: Research on music style classification based on deep learning. Comput. Math. Methods Med. 22(10), 1–8 (2022)
10. Furner, M., Islam, M.Z., Li, C.T.: Knowledge discovery and visualisation framework using machine learning for music information retrieval from broadcast radio data. Expert Syst. Appl. 182, 115236 (2021)
Reliability Evaluation Method of Online Japanese Teaching Software Based on Bayesian Network

Jiayi Sun1, Yue Wang1(B), and Chao Song2

1 School of Foreign Languages, Dalian University of Science and Technology, Dalian 116000, China
[email protected]
2 School of Innovation and Entrepreneurship, Dalian University of Science and Technology, Dalian 116000, China
Abstract. Software reliability is an important factor in evaluating software quality. Therefore, a reliability evaluation method for online Japanese teaching software based on a Bayesian network is proposed. The method first constructs an index system for the factors affecting the reliability of online Japanese teaching software, then uses the weight factor judgment table method and the entropy weight method to calculate the combined weights of the evaluation indices, and finally uses a Bayesian network to calculate the reliability probability of the software and compares it against the grading standard to obtain its reliability degree. The results show that the reliability of the three online Japanese teaching software packages tends to decline as the application time grows; that the reliability stays above 0.9, the highest reliability grade; and that the first software maintains its reliability best, followed by the second and then the third.

Keywords: Bayesian Network · Online Japanese Teaching Software · Weight Calculation · Reliability Evaluation · Entropy Weight Method
1 Introduction

Network technology has developed greatly and has had a profound impact on many fields, including learning and daily life. Higher education is also deeply affected by network technology and is actively innovating its teaching modes, thus entering a new stage of development. In order to further improve the quality and effect of Japanese teaching and give it a high degree of personalization and diversity, many colleges and universities are actively innovating beyond the traditional teaching mode and making scientific use of "Internet plus" technology. In teaching practice, Japanese teachers make scientific use of network technology to carry out online Japanese teaching [1]. Online Japanese teaching software has therefore been widely adopted, which promotes innovation in Japanese teaching and further improves its effect.
However, online Japanese teaching software is free of charge and allows users to improve the software themselves, so its failure elimination process is strongly affected by changes in the number of users. In addition, the rapid iteration of minor versions and continuous incremental development cause the failure data to keep shifting. For a period after a release, the reliability of the software fluctuates greatly, so a single reliability analysis model often fails to describe the actual troubleshooting behavior of online Japanese teaching software [2]. At present, many software reliability models with change points are used to analyze such failure data. Here, a change-point-based approach is used to add new interpretation and evaluation indicators to the traditional reliability analysis model, in order to describe the different troubleshooting processes of online Japanese teaching software.

Software reliability is an important field in computer engineering, and software reliability evaluation is an important part of software reliability research. Through software reliability evaluation, reliability-related parameters are quantified to build confidence in the use of the software. Research on software reliability models is the core of reliability evaluation for software products and an important tool to evaluate, predict and analyze software reliability. Existing models describe software reliability to a certain extent, but their universality is limited by harsh assumptions or insufficient use of prior information. A Bayesian network is a graphical modeling method that can reason over incomplete data and under uncertainty; compared with traditional modeling methods, it represents a significant advance. This research focuses on the application of Bayesian networks to software reliability evaluation and proposes a reliability evaluation method for online Japanese teaching software based on a Bayesian network.
2 Research on Reliability Evaluation of Online Japanese Teaching Software

2.1 Reliability Evaluation Technology

Online Japanese teaching software has been widely used because of its short development cycle, many online versions, fast update speed and other characteristics, so its reliability analysis has received more and more attention. The development of online Japanese teaching software is essentially carried out voluntarily by developers, and the release time is mostly determined by the core maintenance personnel according to the state of development. Compared with traditional closed-source software, the debugging process of online Japanese teaching software largely depends on community members and users, and the failures detected after the release of a new version can fluctuate hugely. It is difficult to express software reliability with a single measurement parameter [3], and different parameters may be used for different software and different applications. Like hardware reliability measurement, software reliability measurement can also apply the methods and techniques of probability theory
and mathematical statistics, because software failures show randomness. Within the field of software reliability, software reliability modeling is one of the earliest research topics, the one with the richest results, and the most controversial. Software reliability modeling aims to give an estimated or predicted value of software reliability by statistical methods, based on data related to software reliability (failures); it is one of the keys to understanding the essential behavior of software reliability. It can be said that software reliability modeling is the basis of software reliability engineering practice and theoretical research. At present, two evaluation approaches receive the most attention in software reliability evaluation technology:

(1) Verification method based on software reliability testing
The software reliability verification method tests, at a given statistical confidence, whether the current reliability level of the software meets the user's requirements; that is, when accepting the software, the user determines whether it meets the reliability index specified in the software specification [4]. This method is generally carried out in the software acceptance stage and implemented with the participation of the software demander. The main process is to quantitatively evaluate the reliability of the software according to the faults observed in field testing, using a reliability acceptance model recognized by both the supplier and the demander, so as to judge whether the software has reached the reliability agreed in its requirements specification.

(2) Method based on software reliability modeling
This method, also known as software reliability growth modeling, is the main content of current software reliability modeling. It is mainly used in the development stage of the software, and its failure data likewise come from testing. Different from the first method, failures are corrected while testing proceeds, and the collected failure behavior is modeled and analyzed, thereby estimating the actual level of software reliability and guiding the developers' next steps. Testing is tied to the elimination of defects and is generally executed in the system testing phase of the development process. The failures found by a test are submitted to the developers for analysis and correction, a new version of the software is built, and the next round of testing is performed. In this iterative "testing-debugging-new version" process, the software defects found during testing are continuously eliminated, so software reliability shows an increasing trend; hence the name software reliability growth modeling.

The reliability evaluation process of online Japanese teaching software is shown in Fig. 1.

Fig. 1. Reliability Evaluation Process of Online Japanese Teaching Software

2.2 Japanese Teaching Software Reliability Impact Index System

With the deepening of software reliability engineering research, software reliability evaluation has become an indispensable research topic: without accurate evaluation methods, the effectiveness of reliability assurance measures cannot be judged. The severity of the influence of the reliability factors is closely related to the level of software reliability: the factors are taken from the various stages of the software development process, so they truly reflect, from different perspectives, how software reliability is affected, and the level of reliability is thereby revealed [5]. Therefore, with reference to relevant data, the principal component analysis method is used to summarize the factors, and a comprehensive evaluation index system for software reliability is obtained, as shown in Fig. 2. The system is divided into two levels: the first level evaluates software reliability comprehensively from five major aspects, and the second level lists the main factors affecting each aspect, i.e. the detailed reliability factors. According to the degree to which these factors influence software reliability, the metric takes the grades very light, light, medium, heavy and very heavy, and the software reliability is correspondingly judged very high, high, medium, low or very low: the more seriously the software reliability is affected, the lower its grade [6]. For example, if the degree to which the software reliability is affected is "heavy", the software reliability grade is "low". Because many factors, both primary and secondary, affect each aspect of software reliability, the influence of the main factors can be highlighted through weight distribution, so that factors receive different degrees of importance.
Fig. 2. Japanese teaching software reliability impact index system
2.3 Calculation of Reliability Impact Index Weights

Weights, also called weighting coefficients, indicate the relative importance of each evaluation index within the whole. When determining the weight of each indicator in the indicator system, it is necessary to start from the entire indicator system, coordinate the relationships between the evaluation indicators, and then allocate the weights reasonably according to the role and effect of each indicator on the whole. As shown in the previous section, the established software reliability evaluation index system is a multi-indicator system comprising five first-level indicators. Since the proportions of the different indicators in the reliability evaluation affect the evaluation results, how to determine the indicator weights is particularly important.

There are usually two approaches to weighting: subjective weighting and objective weighting. Since subjective weighting reflects the subjective will of decision-makers, its results are overly subjective; objective weighting, however, depends heavily on the sample data, involves complex calculation steps, and cannot reflect the subjective will of decision-makers. In the
field of software reliability evaluation, many factors affect software reliability, and it is impossible to weight the indicators scientifically and comprehensively by relying only on personal experience or on individual sample data. Therefore, this paper adopts a comprehensive weighting method combining subjective and objective factors.

(1) Subjective weighting methods include the weight factor judgment table method and the analytic hierarchy process. The weight factor judgment table method relies on authoritative or experienced evaluators in the professional field, who form an evaluation expert group and fill in the corresponding weight factor judgment tables according to the established rules; the weight values are then determined from the tables filled in by the experts. In this process there may be contradictions in which the stated importances do not match, so the judgment tables filled in by the expert group must be checked for consistency, to ensure that the importances of the indices are coordinated [7].

(2) Objective weighting methods mainly include the entropy weight method and factor analysis. In the entropy weight method, entropy is a measure of the uncertainty of a system: the smaller the information entropy of an indicator, the greater the amount of information it contains, and the greater its proportion in the evaluation. Since it does not rely on subjective judgment, the entropy weight method is objective and accurate; it can screen out indicators that contribute little to the indicator system and finally yields objective indicator weight values.

The combined subjective-objective weighting method has the advantages of both subjective and objective weights. It analyzes and makes decisions on the various indicators that affect software reliability, and its results reflect the opinions of the expert group while also conforming to the objective facts in the sample data, making the indicator weights more scientific and reasonable. In this paper, the weight factor judgment table method is used on the subjective side and the entropy weight method on the objective side; the two are then organically combined to obtain the comprehensive weight values.

Weight Factor Judgment Table Method
The process of the weight factor judgment table method is as follows:
Step 1: Determine the quantitative standard of the indicators. The weight factor judgment table method judges the importance of each index according to a judgment matrix. The importance of each index can be determined by a proportional scale method or an exponential scale method. This paper adopts the proportional scale method and uses the 1–9 scale to grade the judgments, as shown in Table 1.
Table 1. 1–9 proportional scaling method

Scale                              Value meaning
1                                  The two factors are equally important
3                                  One factor is slightly more important than the other
5                                  One factor is significantly more important than the other
7                                  One factor is strongly more important than the other
9                                  One factor is extremely important compared with the other
2, 4, 6, 8                         Median values of the two adjacent judgments above
Reciprocals of the above numbers   The inverse comparison: the ratio of one factor to the other
Step 2: Construct the judgment matrix. The experts fill in the weight factor judgment table and score pairwise comparisons of the indices, yielding the judgment matrix G:

G = \left(g_{ij}\right)_{n \times n} = \begin{pmatrix} g_{11} & g_{12} & \cdots & g_{1n} \\ g_{21} & g_{22} & \cdots & g_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ g_{n1} & g_{n2} & \cdots & g_{nn} \end{pmatrix}   (1)
In the formula, g_{ij} represents the relative importance score of Japanese teaching software reliability impact indicator i against indicator j, and n represents the number of indicators.

Step 3: Check the consistency of the judgment matrix. The judgment matrix actually obtained does not necessarily strictly meet the consistency requirement, and in practical applications strict consistency is not demanded of every judgment matrix: when the degree of inconsistency is within the allowable range, subsequent operations can proceed. In practice, the consistency ratio C is used to detect whether the judgment matrix meets the consistency requirement. The criterion is: when C < 0.1, the matrix is considered consistent; otherwise it is judged inconsistent.

Step 4: Calculate the weights. Statistics are computed on each judgment matrix G that has passed the consistency test.

1) Calculate the row score A_{ik} of each evaluation index:

A_{ik} = \sum_{j=1}^{n} g_{ij}   (2)

In the formula, k represents the expert number.
2) Calculate the average score B_i of the evaluation index:

B_i = \frac{1}{N} \sum_{k=1}^{N} A_{ik}   (3)
In the formula, N represents the number of experts.

3) Calculate the evaluation index weight w_i:

w_i = \frac{B_i}{\sum_{i=1}^{n} B_i}   (4)
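To make Steps 2–4 concrete, the following minimal sketch computes the subjective weights from a single expert's judgment matrix. The 3-indicator matrix is a hypothetical example, and the eigenvalue-based computation of the consistency ratio C follows standard AHP practice, which the text assumes but does not spell out.

```python
import numpy as np

# Hypothetical judgment matrix from one expert for n = 3 indicators (1-9 scale, Table 1).
G = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
n = G.shape[0]

# Consistency ratio C via the principal eigenvalue (standard AHP practice;
# the text only states the acceptance criterion C < 0.1).
lam_max = np.linalg.eigvals(G).real.max()
CI = (lam_max - n) / (n - 1)
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}[n]  # random-index table
C = CI / RI if RI > 0 else 0.0
assert C < 0.1, "judgment matrix fails the consistency check"

# Formulas (2)-(4): row scores A_ik, average over the N experts (N = 1 here),
# then normalisation into the subjective weights w_i.
A = G.sum(axis=1)   # A_ik for this single expert
B = A               # B_i: average over experts
w = B / B.sum()     # w_i
print(w.round(3))   # approximately [0.641, 0.238, 0.121]
```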
Entropy Weight Method
The entropy weight method is an objective weighting method that has been widely used at home and abroad in recent years. The smaller the entropy value of an index, the greater the variation of the index values, the greater the amount of information provided, and the greater the weight of the index [8].

Step 1: Assuming that the number of research objects is m, the indicator matrix is

H = \left(h_{pj}\right)_{m \times n}   (5)

Step 2: Normalize each indicator in the matrix, recording it as \hat{h}_{pi}, then establish the proportion of each evaluation indicator, recorded as R_{pi}:

R_{pi} = \frac{\hat{h}_{pi}}{\sum_{p=1}^{m} \hat{h}_{pi}}   (6)
In the formula, R_{pi} represents the proportion of the p-th research object under the given evaluation index.

Step 3: Calculate the entropy value of the indicator, as follows:

D_i = -\frac{1}{\ln m} \sum_{p=1}^{m} R_{pi} \ln R_{pi}   (7)

In the formula, D_i represents the entropy value of the evaluation index.

Step 4: Calculate the difference coefficient of the index, recorded as C_i = 1 - D_i.

Step 5: Calculate the weight of the evaluation index. The calculation formula is as follows:

v_i = \frac{C_i}{\sum_{i=1}^{n} C_i}   (8)
In the formula, v_i represents the weight of the i-th indicator.

Combining Weights
Combining the weights calculated by the above two methods gives the combined weights:

u_i = \frac{w_i v_i}{\sum_{i=1}^{n} \left(w_i + v_i\right)}   (9)

In the formula, u_i represents the combined weight of the i-th indicator.
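Under the same caveats, here is a minimal sketch of formulas (6)–(9): the indicator matrix H and the subjective weights w are made-up inputs, and a final renormalization (not part of formula (9)) is added so the combined weights sum to 1.

```python
import numpy as np

# Hypothetical indicator matrix H: m = 4 objects (rows) x n = 3 indicators (columns).
H = np.array([[0.2, 1.5, 3.1],
              [0.4, 1.2, 2.8],
              [0.3, 1.8, 3.5],
              [0.5, 1.1, 2.2]])
m, n = H.shape

# Formula (6): proportion of each object under each indicator.
R = H / H.sum(axis=0)

# Formula (7): entropy per indicator (the 0*log(0) case is guarded to 0).
logR = np.log(R, where=R > 0, out=np.zeros_like(R))
D = -(R * logR).sum(axis=0) / np.log(m)

# Difference coefficient C_i = 1 - D_i and objective weights v_i (formula (8)).
C = 1.0 - D
v = C / C.sum()

# Formula (9): combine with subjective weights w_i from the judgment table method.
w = np.array([0.641, 0.238, 0.121])   # assumed output of the previous sketch
u = (w * v) / (w + v).sum()
u = u / u.sum()                       # renormalise so the combined weights sum to 1
print(u.round(3))
```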
This section has shown, by summarizing the shortcomings of purely subjective and purely objective weighting, why a combined subjective-objective weighting method is needed; it has introduced the calculation steps of the combined method used in this paper and determined the weight value of each first-level index in the software reliability evaluation index system.

2.4 Evaluation with the Bayesian Network Model

A Bayesian network is a probabilistic network: based on the Bayes formula, it visualizes reasoning over probabilities. So-called probabilistic reasoning is the process of obtaining probability information about unknown evaluation indicators by analyzing the relationships between evaluation indicators once some of them are known. Bayesian networks based on probabilistic reasoning were proposed to handle uncertainty and incompleteness, are widely used in many industries, and have great advantages in assessing the damage states of complex systems [9].

A Bayesian network is a tool that combines probability theory and graph theory to analyze and reason about uncertain events. It is an organic combination of Bayesian methods and graph theory and can be expressed as a directed acyclic graph of the probabilistic dependencies between evaluation indicators. The development of Bayesian statistics and graph theory provides a solid foundation for Bayesian network theory, while the development and wide application of computer technology and artificial intelligence promote the improvement and development of Bayesian networks. The basic theoretical framework of Bayesian networks consists of three parts: network representation, network modeling and network reasoning. When modeling a Bayesian network for a complex large-scale system, the two key steps of network parameter learning and network structure learning must be completed first; how to learn a Bayesian network from known data is one of the hot issues in current Bayesian network research.

The modeling of a Bayesian network can generally be divided into three steps: first determine the evaluation index set and the evaluation index domains, then determine the network structure, and finally determine the node conditional probability tables. In the modeling process a compromise is generally needed between two aspects: on the one hand, to achieve sufficient accuracy, a large and rich network model is needed; on the other hand, it is necessary to consider the cost of building and
maintaining models and the complexity of probabilistic reasoning [10]. In fact, building a Bayesian network is often an iterative and interactive process over the above three steps.

(1) Determine the evaluation indicator set and evaluation indicator domains. This mainly means selecting an appropriate set of evaluation indicators under the guidance of experts in the field. In some cases, certain strategies are required to select the important factors from the evaluation indicators provided by the experts. At the same time, the evaluation indicator domains, that is, the possible values of these indicators, must also be determined.

(2) Determine the network structure. Starting from the evaluation index nodes, the Bayesian network structure is constructed in one of two ways: the structure is specified from expert knowledge, or a large amount of training sample data is used to learn the structure, which expresses the causal relationships between evaluation indicators.

Based on the above process, a Bayesian network for software reliability evaluation is established. Through historical data statistics and experimental data, the value range and discretization method of each impact index are studied, the causal relationship between each index and reliability is quantified, and a risk reasoning network with software operation as the main line is established. In combination with the development law and the risk element division results, the connection nodes are established to determine the inference network structure, as shown in Fig. 3.
Fig. 3. Bayesian network model for software reliability evaluation
According to the results, targeted rectification suggestions are given, the online Japanese teaching software is optimized, and the reliability evaluation is completed.
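To make the inference step concrete, here is a minimal sketch using the pgmpy library — an assumed tool choice, since the paper names none — with a deliberately tiny hypothetical structure (two parent indicators feeding a reliability node) in place of the full index system of Fig. 3. All node names and probabilities are illustrative.

```python
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Hypothetical two-indicator structure; the real network follows Fig. 3.
model = BayesianNetwork([("Development", "Reliability"),
                         ("Testing", "Reliability")])

# Prior probabilities of the indicator states (0 = good, 1 = poor; assumed values).
cpd_dev = TabularCPD("Development", 2, [[0.7], [0.3]])
cpd_test = TabularCPD("Testing", 2, [[0.6], [0.4]])

# Conditional probability table of reliability given the two parents
# (columns ordered over (Development, Testing) state combinations).
cpd_rel = TabularCPD("Reliability", 2,
                     [[0.9, 0.7, 0.6, 0.2],   # P(Reliability = high | parents)
                      [0.1, 0.3, 0.4, 0.8]],  # P(Reliability = low  | parents)
                     evidence=["Development", "Testing"],
                     evidence_card=[2, 2])

model.add_cpds(cpd_dev, cpd_test, cpd_rel)
assert model.check_model()

# Probabilistic reasoning: reliability probability given observed evidence,
# which is then compared against the grading standard.
infer = VariableElimination(model)
print(infer.query(["Reliability"], evidence={"Testing": 1}))
```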
3 Evaluation Test

3.1 Test Cases

A one-month evaluation test was conducted with 3 online Japanese teaching software packages as the objects.
(1) The first is well compatible with a variety of GNSS standard formats and protocols, with specific received-data protocol formats and some external communications, and can meet the requirements of real-time PPP applications.
(2) The second is a GNSS data processing and analysis software package that can run on Linux and Windows operating systems.
(3) The third is an object-oriented multi-functional software library that can meet the needs of various end users and supports multithreaded processing.

Based on the weight factor judgment table method and the entropy weight method, the calculated weights of the evaluation indicators are shown in Table 2.

3.2 Reliability Assessment Results

The Bayesian network model is used to test the reliability of the three Japanese teaching software packages. The results are shown in Fig. 4, from which three points can be seen:
(1) As the application time grows, the reliability of the three Japanese teaching software packages shows a downward trend;
(2) The reliability of all three remains above 0.9, the highest reliability grade;
(3) The first software maintains its reliability best, followed by the second, and finally the third.
Table 2. Evaluation index weight table

Level III indicators                         The first Japanese   The second Japanese   The third Japanese
                                             teaching software    teaching software     teaching software
Software complexity                          0.25                 2.32                  1.84
Developer quality                            0.36                 1.25                  3.22
Software type                                0.42                 1.22                  1.25
Reserve funds                                1.22                 1.21                  0.12
Current technical level                      1.72                 1.36                  0.15
Programming language selection               0.85                 2.02                  1.41
Design method                                1.36                 1.21                  2.36
Requirement analysis                         1.52                 1.01                  2.10
Detailed design                              2.33                 0.21                  0.21
Development management                       2.58                 1.62                  0.40
Programmer skills                            0.25                 1.54                  3.21
Proportion of highly qualified programmers   0.14                 2.32                  1.25
Scale of development team                    0.54                 4.21                  1.41
Working pressure and strength                0.63                 2.32                  1.55
Application domain expertise                 0.23                 0.14                  2.87
Testing environment                          0.20                 2.36                  5.21
Test method                                  1.52                 2.20                  4.12
Configuration of test resources              4.21                 1.05                  2.36
Test coverage                                4.30                 0.25                  1.66
Test tools                                   3.22                 1.21                  1.35
Processor                                    2.82                 2.33                  1.22
Storage device                               0.41                 2.41                  0.64
Input/output equipment                       2.84                 2.74                  0.41
Communication equipment                      3.84                 2.12                  4.15

Fig. 4. Reliability Evaluation Results

4 Conclusion

The further development of science and technology urgently needs highly reliable software and hardware, and the increasing use of computers makes the consequences of computer failures ever more serious. In daily life, software failures cause great inconvenience and even damage; in public infrastructure, the failure of subway or aircraft operation systems may lead to delays or even major casualties; in the financial field, software failures may lead to major economic losses. Software reliability has therefore become a widespread concern, and a Bayesian-network-based reliability evaluation method for Japanese teaching software is studied here. The results are as follows: the model makes full use of prior information, takes imperfect error removal and error-removal time into account, and introduces the influence of the reliability factor coefficient, which expands the application scope of reliability assessment and improves the accuracy of software reliability assessment. At the same time, because it rests on classical Bayesian mathematical theory, it enhances the stability of the reliability assessment.

There are many aspects of software reliability models that require further research:
(1) The model aspect can be studied further, and more candidate reliability growth models can be proposed for different data sets.
(2) The evaluation indices for model fitting and prediction can be expanded. In addition, for different Japanese teaching software, different weights can be attached to the theoretically optimal model index, and a model more suitable for the specific software can be selected.
(3) Further comparative research can be carried out on the specific effects of combined and non-combined models in prediction. At the same time, according to the change characteristics of different cumulative failure-data curves, combined models can be used for classification research.

Acknowledgement. The Design and Development of the "Zhaozhi Japanese" Public Welfare Teaching Platform for the Elderly. Source: Undergraduate Innovation and Entrepreneurship Training Program of Dalian University of Science and Technology.
References
1. Juangsih, J., Emzir, E., Rasyid, Y.: The needs analysis of four primary language skills in developing Japanese teaching materials for tourism purposes. Jurnal Pendidikan Bahasa dan Sastra 20(2), 185–196 (2021)
2. Liu, C., Chen, Q.: Current situation and suggested measures of Japanese teaching in colleges and universities based on computer aid. J. Phys. Conf. Ser. 1744(3), 32–41 (2021)
3. Zhang, T.: The application of positive psychology in Japanese teaching. Clausius Scientific Press 3(6), 88–93 (2021)
4. Zheng, G., Zhu, J.: Japanese translation teaching corpus based on bilingual non-parallel data model. J. Intell. Fuzzy Syst. Appl. Eng. Technol. 40(2), 3731–3741 (2021)
5. Liu, X., Xie, N.: Grey-based approach for estimating software reliability under non-homogeneous Poisson process. Syst. Eng. Electron. Technol. (English) 33(2), 360–369 (2022)
6. Qiu, H., Yan, X., Peng, R.: Research on software reliability model considering multiple types of faults. Operat. Res. Manag. Sci. 31(4), 104–108 (2022)
7. Fang, H., Li, C.: Component software reliability prediction method based on copula function model. Comput. Simulat. 39(5), 352–355, 365 (2022)
8. Liu, Y., Yang, Y.: Software reliability analysis method based on uncertain fault tree. J. Tianjin Univ. Sci. Technol. 37(1), 52–55, 63 (2022)
9. Li, K., Lei, Y., Zhang, Z.: Software reliability evaluation method based on component impact factor. Comput. Eng. Design 43(1), 165–170 (2022)
10. Zhang, B., Hai, S., Wei, J.: Research on embedded software reliability prediction model based on continuous collaborative machine learning algorithm. Microcontrollers Embedded Syst. 22(1), 39–42, 47 (2022)
Intelligent Analysis Method of Multidimensional Time Series Data Based on Deep Learning

Zhongwei Chen and Xiaofeng Li(B)

College of Information Engineering, Guangxi University of Foreign Languages, Nanning 530222, China
[email protected]
Abstract. In order to improve the performance of intelligent analysis of multidimensional time series data, an intelligent analysis method based on deep learning is proposed. A deep learning network model is used to establish the distribution functions of correctly and incorrectly classified data, and the outliers of the multidimensional time series data are mined according to the calculation process of the deep learning network. The feature space of the data is determined with the k-nearest-neighbor method, and the data are reduced in dimension by computing their covariance matrix. The time series data are then classified according to preference level to extract the features of the multidimensional time series data, and, combined with the designed analysis algorithm, intelligent analysis of multidimensional time series data is achieved. The experimental results show that, in intelligent analysis of multidimensional time series data, the proposed method reduces the mean absolute error and root mean square error to within 0.25 and 0.3 respectively, improving the analysis performance for multidimensional time series data.

Keywords: Deep Learning · Multidimensional Time Series Data · Intelligent Analysis · Dimension Reduction Processing
1 Introduction

Time series data are data columns recorded for the same indicator in time order [1]. The analysis of time series data mainly predicts unknown future values by analyzing the existing series and constructing a time series model; the difficulty lies in how to grasp the characteristics of the historical series so as to predict future values more effectively. With the extensive application of IoT technology in smart cities, Industry 4.0, supply chains and home automation, massive data have been generated. According to a Cisco research report, by 2021 the number of devices connected to the Internet of Things will reach 12 billion, which means that the data traffic generated each month will exceed 49 exabytes. Whether in industry or in daily life, the use of sensors is
everywhere. The large amount of data and information generated by these sensors has gradually become the most common data form in Internet of Things applications [2]. Therefore, the analysis and processing of these data has become increasingly important. The whole IoT architecture consists of a data perception layer, a data transmission layer, a data processing and storage layer, joint learning and analysis, and an IoT application layer. By analyzing and processing the data collected by the sensors, more valuable and meaningful information can be obtained, providing a guarantee for more intelligent decision-making, deployment and supervision of IoT applications.

With the rapid development of Internet of Things and artificial intelligence technology, people's life and work are becoming more and more intelligent, and more and more sensors appear in daily work and life. Because the IoT has high real-time requirements, the normal operation of IoT systems must be ensured to reduce the losses caused by failures. For this reason, it is necessary to predict the time series data generated in the IoT more accurately and to detect the abnormal data that may arise during system operation, so as to reduce both the risk of failure and the losses caused by abnormal data.

In domestic research, Tao Tao et al. [3] proposed an anomaly detection method based on deep learning to identify abnormal refueling vehicles. First, an autoencoder was used to extract features from the data collected by the refueling station; then a Seq2Seq model embedded with bidirectional long short-term memory was used to predict refueling behavior; finally, an anomaly threshold was defined by comparing the predicted and original values. Xia Ying et al. [4] conducted anomaly detection through data analysis, which helps to accurately identify abnormal behaviors and thus improve service quality and decision-making ability. However, due to the spatiotemporal dependence of multidimensional time series data and the randomness of abnormal events, existing methods still have certain limitations. For these problems, that work proposed MBCLE, a multidimensional time series anomaly detection method that fuses a new statistical method with bidirectional convolutional LSTM. The method introduces stacked median filtering to handle point anomalies in the input data and smooth data fluctuations; designs a predictor combining a bidirectional convolutional long short-term memory network and a bidirectional long short-term memory network for data modeling and prediction; smooths the prediction error with a bidirectional circular exponentially weighted moving average; and computes thresholds with a dynamic thresholding method to detect contextual anomalies. However, both of the above methods suffer from large mean absolute error in the intelligent analysis of multidimensional time series data.

In foreign research, Meng C et al. [5] proposed a multidimensional time series outlier detection framework based on a temporal convolutional network autoencoder, which can detect outliers in time series data, such as equipment failures or dangerous driving behaviors of vehicles. A feature extraction method is used to transform the original time series into a series with rich features.
The proposed TCN-AE is used to reconstruct the feature-rich time series data, and the reconstruction error is used to calculate outlier scores. Although this method can complete the detection and analysis
of outliers, the root mean square error of its analysis results is large, and its performance leaves some room for improvement.

In order to solve the problems of large mean absolute error and root mean square error in the traditional methods mentioned above, an intelligent analysis method of multidimensional time series data based on deep learning is proposed.
2 Intelligent Analysis of Multi-dimensional Time Series Data

2.1 Mining Multidimensional Time Series Data Outliers

Because time series data are small in scale and wide in range, it is very easy to treat abnormal values as wrong or invalid data during analysis, which affects the overall accuracy of the multidimensional time series data, causes misunderstanding and increases the difficulty of analysis. Therefore, deep learning network technology is used to mine the outliers of the time series data. The deep learning network model is shown in Fig. 1.

Fig. 1. Deep learning network model (input layer x1–x4, two hidden layers, output layer y)
From the figure it can be seen that the deep learning network model consists of an input layer, hidden layers and an output layer, corresponding to the input values, output values, weights and output function. The basic relationship between the components is:

y^* = g(\xi x^* + \phi)   (1)

In the formula, y^* represents the output value, g the transfer function, \phi the offset, \xi the weight, and x^* the input value.

The steps for mining multidimensional time series data outliers with a deep learning network are as follows:

Step 1: Initialize the deep learning network, randomly initializing the weights and biases of each layer; the number of neurons in the input layer is determined by the number of data attributes in the dataset. The neighborhood range of the detection object has been obtained through the above process. Assuming that there are M attributes in
the dataset within the neighborhood range, the number of input-layer neurons is set to M;
Step 2: Obtain the input and output vectors from the given training set, denoted x^* and y^* respectively;
Step 3: Specify the number of nodes, including the hidden and output node counts;
Step 4: Obtain the actual output value of the deep learning network from the forward pass over the given data;
Step 5: Process the output value, which fully reflects the distribution of the dataset. From the output value of the deep learning network, multidimensional time series data can be judged by their entropy, which represents the uncertainty of the multidimensional time series data within a certain category [6]. The greater the entropy, the higher the uncertainty, and the more likely the multidimensional time series data are abnormal. When the entropy exceeds a certain threshold, the multidimensional time series data point is an outlier. A threshold f is set, with range between 0 and 1. The evaluation function f is therefore given in terms of the numbers of multidimensional time series data in the two categories, namely:

f = \gamma\left(\alpha P_{true}, \beta P_{false}\right)   (2)

In the formula, \alpha and \beta represent the weights of the two kinds of multidimensional time series data, P_{true} represents the multidimensional time series data classified correctly, P_{false} the data classified wrongly, and \gamma the validity of using a certain threshold to judge abnormal points. The value of \gamma is closely related to the mining effect of outliers: the larger the value, the better the mining effect; conversely, the worse. The value of \gamma is inversely proportional to the number of correctly classified multidimensional time series data and proportional to the number of wrongly classified data. Therefore, the evaluation function f is written as:

f = -\alpha P_{true} + \beta P_{false}   (3)
In order to improve the mining accuracy, the following formula is used to reduce the error of f:

\varepsilon = -\delta \frac{\partial f}{\partial \xi_{jk}}   (4)
In the formula, \delta represents the deep learning coefficient, i.e. the learning rate used in training the deep learning network. Assume that f_a(x) is the density function of the correctly classified multidimensional time series data in the dataset and f_b(x) that of the wrongly classified data, as shown in Fig. 2. According to the distribution functions in Fig. 2, we get:

P_a = \int_{f}^{1} f_a(x)\,dx   (5)
Fig. 2. Distribution function of correctly classified and incorrectly classified data
P_b = \int_{f}^{1} f_b(x)\,dx   (6)
According to the distribution functions of formulas (5) and (6), we get:

P(f) = -\alpha P_a + \beta P_b   (7)
The efficiency of outlier mining for multidimensional time series data is judged through P(f): the value is proportional to the mining efficiency, so the larger the value, the higher the mining efficiency. Therefore, when P(f) takes its maximum value, the entropy threshold is optimal.

Step 6: According to the actual output and expected output of the deep learning network, calculate the output error of the network and check its stopping condition. If it is satisfied, stop training, exit the network, and evaluate the outliers of the multidimensional time series data; if not, return to Step 2.

Step 7: Evaluate the multidimensional time series data outliers: assess the detected outliers and find the reasons for them. After the outliers are identified and verified, they need to be post-processed in order to accurately serve educational decision-making [7]. First, the causes of the outliers are analyzed from a technical point of view; if there are technical causes or human input errors, such abnormal data must be eliminated to reduce the difficulty of post-processing and improve the accuracy of the multidimensional time series data. Second, to eliminate the influence of subjective assumptions along with technical error factors, appropriate intelligent mining algorithms are used to mine the abnormal points, an analysis model is established, and an appropriate abnormal range is determined, so as to reduce the subjectivity of the abnormal points and the effects of correlated errors. Third, the analysis results of the abnormal phenomena are presented in an intuitive form, so that their causes can be analyzed in detail in combination with the specific education and
teaching situation, and corresponding measures and plans can be put forward in a targeted manner, so that the multidimensional time series data outlier detection algorithm has greater practical value.

The above deep-learning-based calculation process is iterated continuously until all multidimensional time series data outliers are mined, and then the algorithm stops; through the above process, outlier mining of multidimensional time series data based on deep learning technology is completed. The specific process is shown in Fig. 3.

Fig. 3. Outlier mining process of multidimensional time series data (start → initialize deep learning parameters → obtain input and output vectors → node hiding and output processing → evaluation function → check stop conditions → output mining results → end)
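A minimal sketch of Steps 1–6 follows, under illustrative assumptions: the network sizes, the synthetic training windows and the entropy threshold f = 0.6 are all made up, and tf.keras (the framework used in the experiments below) stands in for the authors' exact implementation.

```python
import numpy as np
from tensorflow import keras

# Hypothetical sizes: M input attributes (Step 1), two hidden layers (Fig. 1),
# softmax over two classes so an entropy score can be computed (Step 5).
M, n_classes = 8, 2
inputs = keras.Input(shape=(M,))
x = keras.layers.Dense(16, activation="relu")(inputs)
x = keras.layers.Dense(16, activation="relu")(x)
outputs = keras.layers.Dense(n_classes, activation="softmax")(x)
model = keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Stand-in training data (Steps 2-4): random windows with a synthetic label.
X_train = np.random.rand(256, M)
y_train = (X_train.mean(axis=1) > 0.5).astype(int)
model.fit(X_train, y_train, epochs=5, verbose=0)

def entropy_scores(net, X):
    """Entropy of the softmax output; higher entropy means higher uncertainty,
    hence a more likely outlier (Step 5)."""
    p = net.predict(X, verbose=0)
    return -(p * np.log(p + 1e-12)).sum(axis=1)

X_new = np.random.rand(32, M)
scores = entropy_scores(model, X_new)
f = 0.6  # assumed entropy threshold in (0, 1), tuned by maximising P(f) of (7)
print(np.where(scores > f)[0])  # indices flagged as outliers
```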
2.2 Dimensionality Reduction Processing of Multidimensional Time Series Data

First, the k-nearest-neighbor method is used to determine the neighborhood of the sampling points in the z-dimensional time series, and the nearest neighbor graph G is constructed. The nodes in graph G correspond to the points in \{x_t\}, and the edges represent the nearest-neighbor relationships between points. The approximate geodesic distance L_{ij} is then calculated: two nodes x_i and x_j are selected in graph G, and the approximate geodesic distance of the shortest path between them is measured, giving the matrix L^2 = \left[L_{ij}^2\right] \in \mathbb{R}^{N \times N}.
The kernel matrix \tilde{H} is constructed as shown in formula (8):

\tilde{H} = \frac{1}{2}\left[H\left(L^2\right) + 2\zeta H(L) + \zeta^2\right]   (8)

In the formula, \zeta \ge \zeta^*, which ensures that the matrix \tilde{H} is positive semi-definite. After calculating the r eigenvalues of \tilde{H}, the eigenvalue matrix \Lambda \in \mathbb{R}^{r \times r} is obtained, with corresponding eigenvectors q \in \mathbb{R}^{r \times r}. Finally, the embedded coordinates of the n sampled data points in the r-dimensional space are:

\hat{X} = \left[\hat{x}_1, \hat{x}_2, \cdots, \hat{x}_n\right]^T = \Lambda^{\frac{1}{2}} q^T   (9)
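The construction up to formula (9) mirrors the Isomap procedure (neighbor graph, shortest-path geodesic distances, spectral embedding). As a sketch, scikit-learn's Isomap can stand in for the authors' implementation; the swiss-roll data and the parameter values here are illustrative assumptions.

```python
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap

# Stand-in z-dimensional data points in place of real time series windows.
X, _ = make_swiss_roll(n_samples=1000, random_state=0)

# k-nearest-neighbour graph, geodesic distances and spectral embedding,
# with k = 8 neighbours and target dimension r = 2 (assumed values).
embedding = Isomap(n_neighbors=8, n_components=2)
X_hat = embedding.fit_transform(X)   # rows are the embedded coordinates x̂_i
print(X_hat.shape)                   # (1000, 2)

# New test samples (the out-of-sample mapping of formula (12) below)
# are handled by Isomap.transform:
X_new = X[:5] + 0.01
print(embedding.transform(X_new).shape)  # (5, 2)
```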
According to the calculation, \tilde{H} is a kernel matrix, so its (i, j) element can be expressed as:

\tilde{H}_{ij} = k(x_i, x_j) = \Phi(x_i)^T \Phi(x_j)   (10)

Then, in the low-dimensional feature space, the covariance matrix of the multidimensional time series data is calculated as:

J = \frac{1}{N} \Phi \Phi^T   (11)
where \Phi = \left[\Phi(x_1), \Phi(x_2), \cdots, \Phi(x_n)\right]; the node coordinates in the low-dimensional feature space can be obtained by mapping the centered matrix onto the eigenvectors of the matrix. Select a new test sample x_l \in \mathbb{R}^b whose coordinate in the low-dimensional feature space is \hat{x}_l \in \mathbb{R}^a; then:

\left(\hat{x}_l\right)_j = \sum_{i=1}^{n} q_{ij}\, k(x_i, x_l)   (12)
Among them, q_{ij} is the i-th element of the eigenvector q_j, \left(\hat{x}_l\right)_j is the j-th element of \hat{x}_l, and k(\cdot) is the coordinate transformation (kernel) function. Through the above process, the k-nearest-neighbor method is used to reduce the dimension of the multidimensional time series data.

2.3 Extracting Multi-dimensional Time Series Data Features

Multi-objective decision-making is a challenging and active branch of operations research. Against the decision-making background, multi-objective decision-making considers several evaluation indices that may conflict and diverge from one another, and combines optimization theory, statistics, management science and operations research methods to optimize and rank multiple alternatives [8]. The proposed method extracts the features of time series data on the basis of multi-objective decision theory; the specific steps are as follows.

The standard decision matrix C is constructed from the extracted sequence of interval extreme points. The rows and columns of the decision matrix C are the extreme
points existing in the time series data and the object attributes corresponding to the extreme points, that is, the multi-objective criteria. Let the vector A = (a_1, \cdots, a_n) be the set of n extreme points, and the vector B = (b_1, \cdots, b_m) the set of m extreme point attributes, i.e. the evaluation indices. After standardizing the decision matrix, the decision objects are compared under different indices by the following formula:

d_k(a_i, a_j) = c_k(a_i) - c_k(a_j)   (13)

In the formula, d_k(a_i, a_j) represents the difference between extreme points a_i and a_j on evaluation index c_k. The difference d_k(a_i, a_j) is replaced by the standardized preference \vartheta_k(a_i, a_j) through the preference function, namely:

\vartheta_k(a_i, a_j) = \psi_k\left(d_k(a_i, a_j)\right)   (14)

In the formula, \psi_k represents the preference function. The feature extraction algorithm based on multi-objective decision-making selects a preference function with linear characteristics:

\psi_k(x) = \begin{cases} 0, & x < v_k \\ \dfrac{x - v_k}{u_k - v_k}, & v_k \le x \le u_k \\ 1, & x > u_k \end{cases}   (15)

where u_k represents the preference threshold and v_k the indifference threshold; these two thresholds determine the distribution of preference. The preference matrix W is constructed from the calculated preference degrees of the extreme points on the rating indices. The algorithm introduces weight indices [9] to measure the weight relationship between targets, and the preference matrix W is weighted and normalized by the following formula:

W = \sum_{k=1}^{v} f_k(x) \cdot \delta_k   (16)
In the formula, \delta_k represents the relative weight corresponding to evaluation index c_k. Usually the relative weights satisfy \delta_k > 0 and \sum_{k=1}^{v} \delta_k = 1.
Through the above analysis, it can be seen that the multi-objective preference degrees between decision objects a and b conform to the following formula:

\vartheta(a_i, a_j) > 0, \qquad \vartheta(a_i, a_j) + \vartheta(a_j, a_i) < 1   (17)

Let \Phi^+ denote the positive preference flow: over all decision objectives, the preference level corresponding to the positive preference flow \Phi^+ of decision object a_i is the
highest. Let \Phi^- denote the negative preference flow: over all decision objectives, the preference level corresponding to the negative preference flow \Phi^- of decision object a_i is the lowest. The final decision results are obtained through the positive preference flow \Phi^+ and the negative preference flow \Phi^-, and the preferences are sorted. The positive and negative preference flows are calculated as follows:

\begin{cases} \Phi^+(a_i) = \dfrac{1}{n-1} \sum_{a_j \in A} \vartheta(a_i, a_j) \\[2mm] \Phi^-(a_i) = \dfrac{1}{n-1} \sum_{a_j \in A} \vartheta(a_j, a_i) \end{cases}   (18)
In extreme cases, the negative and positive preference flow values of the optimal decision object are 0 and 1 respectively; an object whose negative and positive preference flow values are the opposite is the worst decision object. The feature extraction algorithm based on multi-objective decision-making then extracts time series data features according to the preference flow ranking. The net preference flow is obtained from the negative and positive preference flows:

\Phi(a) = \Phi^+(a) - \Phi^-(a)   (19)

The net preference flow generally satisfies the following conditions:

\begin{cases} \Phi(a_i) \in [-1, 1] \\ \sum_{a_i \in A} \Phi(a_i) = 0 \end{cases}   (20)
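The preference computations of formulas (13)–(19) can be sketched as follows; the decision matrix, the weights \delta_k and the thresholds v_k, u_k are illustrative assumptions.

```python
import numpy as np

# Hypothetical decision matrix: 4 extreme points (rows) x 3 evaluation indices.
C = np.array([[3.0, 1.0, 4.0],
              [2.0, 2.5, 3.0],
              [4.0, 0.5, 2.0],
              [1.0, 3.0, 5.0]])
n_pts = C.shape[0]
delta = np.array([0.5, 0.3, 0.2])   # relative weights, sum to 1 (formula (16))
v_k = np.array([0.2, 0.2, 0.2])     # indifference thresholds
u_k = np.array([1.5, 1.5, 1.5])     # preference thresholds

def psi(x, v, u):
    # Linear preference function of formula (15): 0 below v, 1 above u.
    return np.clip((x - v) / (u - v), 0.0, 1.0)

# Aggregated preference degree of a_i over a_j (formulas (13), (14), (16)).
pref = np.zeros((n_pts, n_pts))
for i in range(n_pts):
    for j in range(n_pts):
        if i != j:
            d = C[i] - C[j]                      # formula (13), per index
            pref[i, j] = np.sum(delta * psi(d, v_k, u_k))

phi_plus = pref.sum(axis=1) / (n_pts - 1)    # positive flow, formula (18)
phi_minus = pref.sum(axis=0) / (n_pts - 1)   # negative flow, formula (18)
phi_net = phi_plus - phi_minus               # net flow, formula (19)
ranking = np.argsort(-phi_net)               # extreme points by preference level
print(phi_net.round(3), ranking)
```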
The extreme points are sorted according to the calculated net preference flow \Phi(a_i): the higher the net preference flow, the higher the preference level of the extreme point. The time series data are classified according to this level, achieving feature extraction of multidimensional time series data.

2.4 Designing Multi-dimensional Time Series Data Analysis Algorithms

On the basis of the completed feature extraction of multidimensional time series data, a grid is introduced as an index structure over the activity space of the multidimensional time series data in the network; that is, a longitude-latitude division is used to discretize the activity space into several grids of size \ell \times \ell_z. After the grid division is completed, the active position points of the network's multidimensional time series data are mapped one by one to the corresponding cells. The length reference value \ell and width reference value \ell_z of each cell can be calculated from the coordinate extents as follows:

\ell = \max\left(p_i \times x - p_j \times x\right)   (21)

\ell_z = \max\left(p_i \times y - p_j \times y\right)   (22)
Among them, \forall p_i, p_j \in \Omega, i \ne j, where \Omega represents the movement trajectory of the network's multidimensional time series data, p_i and p_j represent coordinate points of the multidimensional time series data in the activity space, and (p_i \times x, p_i \times y) and (p_j \times x, p_j \times y) represent the coordinate values of p_i and p_j in the longitude and latitude directions, respectively. After completing the grid division of the network activity space, a binary normal density kernel function is used to calculate the density estimate of any cell \chi [10]. The specific calculation formulas are as follows:

d(\chi) = \frac{1}{n\lambda^2} \sum_{i=1}^{n} \frac{1}{2\pi} \exp\left(-\frac{|\chi - p_i|^2}{2\lambda^2}\right)   (23)

\lambda = \frac{1}{2}\left(\varepsilon_x^2 + \varepsilon_y^2\right)^{\frac{1}{2}} n^{-\frac{1}{6}}   (24)
where \lambda represents the smoothing parameter of the moving track of the dynamic time series data, n represents the total number of dynamic time series data contained in the network, |\chi - p_i| represents the distance between cell \chi and p_i, and \varepsilon_x and \varepsilon_y are the standard deviations of the x and y coordinates of all position points. According to the above calculation, an abnormal area of the network can be regarded as an area connected by adjacent cells with the same or similar density values satisfying certain conditions. The judgment formula for an abnormal area is:

\left|d(\chi_1) - d(\chi_2)\right| > \sigma   (25)
Among them, d(\chi_1) and d(\chi_2) represent the density estimates of two adjacent cells in the network, and \sigma represents the abnormal-area judgment threshold. After obtaining a series of abnormal areas of the network by the above steps, the time-varying law of the activity positions of the multidimensional time series data within an abnormal area must be further obtained, so as to complete the intelligent analysis of the multidimensional time series data in that area. Assuming that O_i and R_j represent any multidimensional time series data and any abnormal region in the network, they are converted into a binary sequence, namely:

D = d_1 d_2 \cdots d_i \cdots d_n   (26)
where d_k = 1 indicates that the multidimensional time series data visited the abnormal area R_j at time k, and d_k = 0 the opposite. The spectrum sequence function X_{f_{k/N}} can be obtained by the discrete Fourier transform of the binary sequence D of the abnormal area. The specific calculation formula is as follows:

X_{f_{k/N}} = \frac{1}{\sqrt{N}} \sum_{n=0}^{N-1} d_n e^{-\frac{2\pi i}{N} kn}   (27)
In the formula, the subscript k/N represents the frequency captured by each discrete Fourier coefficient, and i is the imaginary unit. The periodogram of the abnormal area of the network is then obtained as:

P_k = \left|X_{f_{k/N}}\right|^2   (28)
A series of candidate periodograms may be obtained because of spectrum leakage of the multidimensional time series data in the network or for other reasons. In order to avoid errors and false alarms, the candidate periodograms are checked by introducing an autocorrelation function to determine the change period of the multidimensional time series data in the abnormal area, completing the intelligent analysis of the multidimensional time series data in the abnormal areas of the network. The specific calculation formula is as follows:

f(\tau) = \sum_{n=1}^{N-1} d_n d_{n+\tau}   (29)
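A compact sketch of the period detection in formulas (26)–(29), on a synthetic visit sequence; the confirmation rule comparing the autocorrelation at the candidate lag against a baseline is an assumption, since the text does not state a concrete criterion.

```python
import numpy as np

# Synthetic binary visit sequence D (formula (26)): d_k = 1 when the series
# visits region R_j at time k; here visits occur every 16 steps.
N, period = 256, 16
D = np.zeros(N)
D[::period] = 1

# Formula (27): discrete Fourier transform; formula (28): periodogram.
X = np.fft.fft(D) / np.sqrt(N)
P = np.abs(X) ** 2

# Candidate period from the strongest spectral peak (skip the DC component).
k = np.argmax(P[1:N // 2]) + 1
candidate = round(N / k)

# Formula (29): autocorrelation check to confirm the candidate period and
# guard against spectral-leakage false alarms.
def autocorr(d, tau):
    return np.sum(d[:len(d) - tau] * d[tau:])

baseline = np.mean([autocorr(D, t) for t in range(1, N // 2)])
confirmed = autocorr(D, candidate) > 2 * baseline  # assumed confirmation rule
print(candidate, confirmed)  # 16 True
```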
In summary, deep learning is used to mine the outliers of the multidimensional time series data, the data are processed by dimensionality reduction, their features are extracted, and, combined with the designed analysis algorithm, intelligent analysis of multidimensional time series data is realized.
3 Experimental Analysis

3.1 Experimental Environment and Settings

The experiments were run on the Windows 10 operating system, with an AMD Ryzen 3600X CPU, 16 GB of memory and an NVIDIA GeForce RTX 2060 graphics card. The deep learning platform Keras 2.0.8 was used for building and training the network model, and TensorFlow-GPU 1.4 was called for accelerated computation. The parameter settings of each layer of the Bi-ConvLSTM-Bi-LSTM predictor are shown in Table 1, where filters and kernel size denote the number and size of convolution kernels, activation is the activation function used, and merge mode is the combination mode of the bidirectional RNN outputs. To guard against chance results, all experimental results are averages over five runs.
Table 1. Bi-ConvLSTM-Bi-LSTM predictor parameter settings

| layer name | units | filters | kernel size | return sequences | activation | merge mode |
|---|---|---|---|---|---|---|
| Bidirectional ConvLSTM2D | - | 32 | (3,3) | true | ReLU | concat |
| Bidirectional ConvLSTM2D | - | 32 | (3,3) | true | ReLU | concat |
| Flatten | - | - | - | - | - | - |
| RepeatVector | No. of prediction | - | - | - | - | - |
| Bidirectional LSTM | 80 | - | - | true | - | ave |
| TimeDistributed Dense | No. of prediction | - | - | - | - | - |
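As a concrete illustration of Table 1, a minimal Keras sketch assembling these layers is given below. The input shape and prediction horizon are illustrative assumptions, padding='same' is added so the two ConvLSTM stages compose, and the sketch assumes a Keras version in which Bidirectional can wrap ConvLSTM2D; it is not the authors' exact code.

```python
from keras.models import Sequential
from keras.layers import (Bidirectional, ConvLSTM2D, Flatten,
                          RepeatVector, LSTM, TimeDistributed, Dense)

# Illustrative shapes: 10 input time steps, an 8x8 feature grid with 1 channel,
# and a 5-step prediction horizon ("No. of prediction" in Table 1).
n_steps, rows, cols, channels, n_pred = 10, 8, 8, 1, 5

model = Sequential()
model.add(Bidirectional(ConvLSTM2D(32, (3, 3), padding='same',
                                   activation='relu', return_sequences=True),
                        merge_mode='concat',
                        input_shape=(n_steps, rows, cols, channels)))
model.add(Bidirectional(ConvLSTM2D(32, (3, 3), padding='same',
                                   activation='relu', return_sequences=True),
                        merge_mode='concat'))
model.add(Flatten())                 # collapse the spatio-temporal feature maps
model.add(RepeatVector(n_pred))      # repeat once per predicted step
model.add(Bidirectional(LSTM(80, return_sequences=True), merge_mode='ave'))
model.add(TimeDistributed(Dense(n_pred)))
model.compile(optimizer='adam', loss='mse')
```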
3.2 Experimental Dataset

The Yahoo Webscope dataset is an open-source time series analysis dataset. It consists of four sub-datasets, A1 to A4, with 367 time series in total, each containing 1420 to 1680 instances. This paper uses 50 strongly periodic time series from the A2 sub-dataset as the training set and another 50 as the test set to compare the performance of our method against other analysis methods. Since the A2 sub-dataset contains only point anomalies, context anomalies are added to increase the test coverage: in each dimension of the test set an instance is randomly selected as a starting position, and the original data of that dimension are replaced with continuous random values that fluctuate up and down by no more than 30% of the value, simulating the context anomalies of practical applications; the length of every context anomaly is between 50 and 120 instances. With this method, 0-2 context anomalies are added to each dimension of the test set, all independent across dimensions. The modified test set contains 99 point anomalies and 51 context anomalies.
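A minimal sketch of the context-anomaly injection just described; the random seed and the array layout (instances × dimensions) are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def inject_context_anomalies(series):
    """In each dimension, replace 0-2 segments of 50-120 instances with
    random values fluctuating within +/-30% of the value at the start."""
    out = series.copy()
    n, dims = out.shape
    for d in range(dims):
        for _ in range(rng.integers(0, 3)):      # 0-2 anomalies per dimension
            length = rng.integers(50, 121)       # anomaly length: 50-120
            start = rng.integers(0, n - length)
            base = out[start, d]
            out[start:start + length, d] = base * rng.uniform(0.7, 1.3, length)
    return out
```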
3.3 Evaluation Indicators

To evaluate the performance of the intelligent analysis of multidimensional time series data, two error indicators, the mean absolute error (MAE) and the root mean square error (RMSE), are used. Their formulas are as follows:

$$\mathrm{MAE} = \frac{1}{n} \sum_{i=1}^{n} \left| X_i - \hat{X}_i \right| \tag{30}$$

$$\mathrm{RMSE} = \sqrt{\frac{1}{n} \sum_{t=1}^{n} \left( X_t - \hat{X}_t \right)^2} \tag{31}$$

where X is the real time series data, X̂ is the analyzed time series data, and n is the length of the time series data.
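Eqs. (30) and (31) transcribe directly into code; a minimal sketch:

```python
import numpy as np

def mae(x, x_hat):
    """Eq. (30): mean absolute error between real and analyzed series."""
    return np.mean(np.abs(np.asarray(x) - np.asarray(x_hat)))

def rmse(x, x_hat):
    """Eq. (31): root mean square error between real and analyzed series."""
    return np.sqrt(np.mean((np.asarray(x) - np.asarray(x_hat)) ** 2))
```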
3.4 Result Analysis
To highlight the advantages of the method in this paper, the analysis method based on bidirectional LSTM and the analysis method fusing statistical methods with bidirectional convolutional LSTM are introduced for comparison, and the following results are obtained.
[Figure: mean absolute error versus number of time series data (pieces, 5-50) for the method in this paper, the bidirectional-LSTM-based method, and the method fusing statistical methods with bidirectional convolutional LSTM]
Fig. 4. Mean absolute error test results
From the results in Fig. 4, it can be seen that in the mean absolute error test of the intelligent analysis of multi-dimensional time series data, the method in this paper outperforms the analysis method based on bidirectional LSTM and the method fusing statistical methods with bidirectional convolutional LSTM: its mean absolute error stays within 0.25, indicating that it is better suited to the intelligent analysis of multi-dimensional time series data. The lower error arises because the method accurately mines the outliers of the multidimensional time series data through the deep learning network, constructs the covariance matrix in the feature space to complete the dimensionality reduction of the data, and extracts the features of the data before completing the intelligent analysis; the traditional methods ignore dimensionality reduction and feature extraction and therefore show a higher mean absolute error. The test results of the three methods for the root mean square error of multi-dimensional time series data analysis are shown in Fig. 5.
[Figure: root mean square error versus number of time series data (pieces, 5-50) for the same three methods]
Fig. 5. Root mean square error test results
It can be seen from the results in Fig. 5 that the root mean square error of the method in this paper lies between 0.1 and 0.3 when analyzing multidimensional time series data, whereas the analysis method based on bidirectional LSTM and the method fusing statistical methods with bidirectional convolutional LSTM start from minimum root mean square errors of 0.4 and 0.6, respectively. As the data volume increases, their root mean square errors grow gradually, reaching 0.77 and 0.95 when the data volume reaches 50, which shows that the method in this paper performs better. The lower root mean square error is achieved because the method constructs a deep learning network model and accurately mines the outliers of the multi-dimensional time series data based on the distribution functions of the correctly and incorrectly classified data, thus reducing the error of the data analysis.
4 Conclusion

In this paper, deep learning is applied to the intelligent analysis of multi-dimensional time series data. Experimental tests show that the method reduces the mean absolute error and root mean square error of data analysis. There are, however, still deficiencies in this research; in future work, we hope to introduce wavelet coefficients to decompose and reconstruct multidimensional time series data so as to avoid data distortion.
An Intelligent Teaching System Based on Mobile Terminal for the Simulation of Legal Education Scenarios Xiaomei Yang(B) Guangxi Vocational Normal University, Nanning 530000, China [email protected]
Abstract. The situational simulation teaching method is an effective method of legal practice teaching. It starts from the psychological characteristics of students, guides students to study actively, and cultivates students' ability to understand and apply laws and regulations in practice and work. Combined with modern mobile technology, an intelligent teaching system for the simulation of legal education scenarios based on mobile terminals is designed. Following the overall design principles and the overall framework of the system, the mobile terminal functional modules are designed, including the login module, scenario simulation teaching module, evaluation module, consulting center module, personal center module and system setting module. The system database is designed according to user needs and, combined with the functional module design, the legal education scenario simulation intelligent teaching system is completed. The experimental results show that the system's functions and performance meet the requirements under black box and white box tests, which proves that the system has application value. Keywords: Mobile Terminal · Legal Education · Scene Simulation Teaching Method · Teaching System
1 Introduction

At the Fourth Plenary Session of the 18th CPC Central Committee, it was proposed to "incorporate legal education into the national education system". The importance of legal education has thus been pushed to a new height, and its focus has shifted to the cultivation of legal thinking. The construction of a country ruled by law requires not only leading cadres but also hundreds of millions of citizens to possess legal thinking; otherwise the goal of building a country ruled by law will come to nothing. College students will be the main force of national construction, so cultivating their legal thinking is a requirement of the times. In 2016, at the National Conference on Ideological and Political Work in Colleges and Universities, General Secretary Xi Jinping stressed that "all kinds of courses and ideological and political theory courses should work together to form a synergy effect." The cultivation of college students' legal thinking is an
important part of the ideological and political theory course, and its importance has also been raised to an unprecedented height. However, current legal education has not achieved good results, mainly because the teaching tends toward theory and lacks practice; classroom theory teaching in legal education is relatively abstract and difficult to understand. To combine theory with practical operation, students would need to visit working units to observe actual practice. However, because working environments are unsuitable for concentrated internships, because of the seriousness and confidentiality of the work, and because of the lack of internship funds and other conditions, on-site practical operation is difficult to arrange and its effect hard to guarantee. So how can theory and practice be combined in teaching? Relevant researchers have studied this question. Zhu [1] investigated intelligent teaching modes, intelligent lesson preparation, intelligent teaching and intelligent customization through an intelligent system, with which teachers can accurately understand the learning interest and style of each student and formulate personalized, targeted teaching programs to achieve good educational results. Liu [2] first analyzed the current research progress of intelligent teaching systems, established the overall structure of an intelligent teaching system, then focused on the big-data recommendation module and the teaching quality evaluation module, and finally tested the effectiveness and superiority of the system. After practical testing of the above methods, however, students' satisfaction was low. In view of these problems, this paper designs a scenario simulation teaching system for legal education based on mobile terminals. The situational simulation teaching method is an effective way of legal practice teaching: starting from the psychological characteristics of students, it guides them to study actively and cultivates their ability to understand and apply laws and regulations in practice and work. Finally, the effectiveness of the system is proved by verification.
2 Design of the Scenario Simulation Teaching System for Legal Education

Legal education situational simulation teaching refers to students simulating or virtually reproducing the environment and process in which events occur and develop, so that they can find and solve problems in the set situation: a cognitive method that enables students to understand the teaching content and improve their abilities in a short time. Compared with traditional classroom teaching methods, scenario simulation teaching is novel in form, flexible in method, targeted and adaptable. This teaching method sets up a simulated scenario according to the professional teaching content, lets students play the role of a professional post in the scenario, and has them analyze and solve practical problems with their knowledge in accordance with the procedures and methods required by the occupation, thereby fully mobilizing students' learning enthusiasm, deepening their understanding of professional knowledge, and cultivating their professional skills and spirit of cooperation. As socialist builders and reliable successors, college students shoulder the important task of promoting the process of national legal construction.
Therefore, how to effectively improve the rule-of-law cultivation of college students, so that they have rule-of-law thinking, advocate the spirit of the rule of law and properly use the rule of law to solve problems, has become an important issue in college students' legal education. In this context, this paper designs a scenario simulation teaching system for legal education. According to the needs of the teaching content and legal skills training, the system selects and designs a case, simulates an environment and plot, and lets students each play a role in the scene. Based on the necessary information provided by the case, students develop the behavior of their roles according to their own knowledge, and through mutual cooperation, communication, exchange and confrontation of viewpoints, solve concrete practical legal problems. The method is highly adaptable in legal education teaching: whether for substantive sector law or procedural law, it can be applied to a given legal problem during teaching without being restricted by practical teaching locations and facilities. Of course, the system cannot replace off-campus training activities and has shortcomings compared with them: the training effect is limited by the designed cases, the conditions are fixed and academic, and it lacks the actual conflicts of interest of the parties, the unpredictability of events and the complexity of legal relations found in judicial practice. However, it is flexible in mode, simple in procedure, targeted and operable, and is one way to resolve the dilemma of legal practice teaching when external conditions are lacking.

2.1 System Overall Design Principles

(1) The Principle of High Simulation
The design of the legal education situational simulation experiment teaching system should be highly simulated, mainly reflected in the simulation of the experiment subject, the experimental data, the experimental equipment, the experimental environment, the experimental roles and the experimental program.

(2) The Principle of Combining Comprehensiveness and Importance
The content of the system should have a certain breadth and depth, covering not only common legal knowledge but also related content such as court trials, and as far as possible the knowledge and skills of the main legal courses, so that students receive comprehensive and systematic operational training. In addition, although the content of simulation experiment teaching comes from actual units, it is not copied wholesale: after careful selection and design, representative cases are chosen for experimental teaching to give students a solid practical grounding.

(3) The Principle of Operability
For any scenario simulation experiment design, operability comes first. The construction of the experimental teaching system should fully consider the feasibility of implementation under the existing teaching hardware and software conditions: whether actual simulation can be carried out under limited conditions, whether it can achieve the
expected teaching purpose, and whether it is conducive to evaluation, analysis and summary. Only when the system can run stably does the design of such a teaching system have practical significance.

(4) The Student-Centered Principle
The situational simulation experiment teaching of legal education should emphasize the students' dominant position, give full play to students' initiative in the experimental teaching process, and embody their innovative spirit. Students should have a variety of opportunities to apply their knowledge in different situations, so that they form an understanding of objective things and of solutions to practical problems according to the feedback from their own actions. In the experimental process, teachers only observe, record, supervise, guide and evaluate [3].

2.2 System Framework Design

(1) Development Architecture Design
To make the system modular, the MVC three-tier architecture is adopted to design the overall legal education teaching system, as shown in Fig. 1.
[Figure: MVC three-tier architecture. The Model encapsulates the application state, responds to state queries and issues change notifications; the View queries and renders the Model and passes user behavior on; the Controller defines the application behavior, selects and updates Views in response to the user, and changes the Model state according to user behavior]
Fig. 1. MVC three-tier architecture
View: the view layer is the display layer of the distance learning system. In this system, an interface that can interact with the user is formed mainly through the browser, and request operations are submitted to the server. The view layer is responsible for handling user input and output to the user, but not for how the business functions are implemented [4].
Controller: after the user submits a request, the controller encapsulates the user's data according to the requirements of the business logic layer and calls the corresponding business interface to complete the business function, but it does not perform the business processing itself. It mainly acts as a link between the upper and lower layers: through this link, the pages of the view layer are combined with the data of the business layer, letting the business layer concentrate on the business while the view layer focuses on presentation, which yields loose coupling.

Model layer (Model): the model layer is mainly responsible for business processing and for data modification, storage and query operations. Its data access function converts the data passed in by the controller into expressions the database can recognize, such as SQL statements, to achieve database access [5].

In this system, the front-end pages are provided by the web browser while background service requests are processed on the server. The client does not communicate with the database directly: the interface is provided by the middle-layer controller, which interacts with the database through the model layer. This design avoids large-scale changes to the whole program caused by small changes in business logic, since only the business logic layer needs modification, and it enhances the reusability and scalability of the code.

(2) Network Architecture Design
To improve performance, the remote legal education scenario simulation teaching system deploys the database server and the system server on two different servers. A firewall is deployed in front of the system server, and the server is connected to a local area network accessible to users [6]. In the early stage of operation, a single-server mode is adopted temporarily. When a single server cannot meet the system requirements and the system runs slowly or crashes, a separate database server is added to manage and process system data. When two servers are insufficient, multiple database servers and multiple internal application servers can be added to separate data reads from writes, with client requests handled on the internal servers. The network architecture design is shown in Fig. 2 below. The legal education scenario simulation teaching system is an Internet-based learning platform: data are processed through the database server, the web server provides online information browsing services, the backup server stores and backs up the generated data and information, and the system communicates with the Internet through the university's hardware firewall to connect the distance learning system [7]. The server of the system adopts the Windows Server 2008 operating system, with an 8-core CPU, 16 GB of memory and 250 GB of hard disk space; the server middleware is Tomcat 6.0. The database server likewise adopts Windows Server 2008, with an 8-core CPU, 16 GB of memory and 250 GB of hard disk space. The database is MySQL.
[Figure: network architecture. Mobile phones and remote-access user terminals reach the system through the Internet via a router and firewall; a switch connects the system web server, the system network application server and the database server]
Fig. 2. Network architecture design
2.3 Mobile Terminal Functional Module Design

(1) Login Module
The user login function is the user's first point of contact with the system. It realizes user login on the client: the user is authenticated and the user name is saved. Encryption technology and a key are used to encrypt the data before transmission to the server for user verification. Users are students registered at the school. On successful login, the client jumps to the topic page, displaying first-level columns such as the course center, practice examinations, discussion and answering, information consultation, the personal center and more [8].

(2) Scenario Simulation Teaching Module
A complete situational simulation teaching process comprises four links: situation and role design; establishment of performance requirements and scoring points; grouped role interpretation; and group discussion with a summary of the relevant knowledge. In the first step, according to the course progress and course standards, corresponding real events are selected and compiled into cases containing conflicts and implicit laws and regulations, and different stakeholders are chosen for role playing. In the second step, teachers establish clear performance requirements and scoring points. Since the purpose of scenario simulation teaching is still to return to the curriculum standards and goals, attention must be paid to guiding
students away from focusing on pure dramatic deduction. In addition, before the scenario simulation, teachers should standardize the role behavior and language of the participants through detailed scoring standards. The third step introduces competition between groups in the role deduction to stimulate students' autonomous learning; teachers can refer to the format of debating competitions to set groups against each other in pairs. Because the designed scenario ultimately matches the students' future practice and work environment, and problems encountered during the deduction must be solved immediately on the spot, students' enthusiasm and initiative are stimulated and mobilized to the greatest extent, urging them to actively collect, before taking the stage, the laws and regulations relevant to the problems and contradictions, as countermeasures against the challenges and difficulties posed by the "opposing" group. Finally, in response to each group's on-stage performance, teachers should guide students to brainstorm together and consider whether better solutions and language exist. Through group discussion and review, each contradiction and conflict event is finally traced back to the legal knowledge points in the textbook. Afterwards, teachers should also help students summarize and refine, to consolidate knowledge points, strengthen students' ability to apply them flexibly after entering the workplace, and ultimately achieve the teaching purpose of improving students' legal literacy.

(3) Evaluation Module
The main function of the evaluation module is to train students on questions for the scenario simulation teaching. The training modes include basic training, simulated examinations and online examinations. In the basic training mode, students choose question types for training; while answering, they can collect questions, and the system stores wrongly answered questions by default. Both can be viewed in the personal center. Students can also take a simulated test before an exam to become familiar with the question types and timing; the simulated test fully reproduces the online examination process with real-time timing and scoring. When the system announces examination information, students take the test at the corresponding time [9].

(4) Consulting Center Module
The main function of the consultation module is to publish news announcements from time to time so that students keep abreast of the college's course arrangements, exam arrangements and other matters. Important information in this module reminds students by sound or vibration to prevent them from missing exam times and other important matters.

(5) Personal Center Module
After logging in successfully, students can also start question-type training or a simulated test through this center. Questions collected and answered wrongly during training can be reviewed in the personal center for the next round of targeted training.
In addition, students' final examination scores are also viewed in the personal center [10].

(6) System Setting Module Design
The system setting module mainly covers whether to download only over Wi-Fi, whether to automatically advance to the next question, and data synchronization according to the server settings. The server settings mainly comprise session time management and attachment directory settings; the attachment directory is used only as the copy and transfer path for files, while session time management controls the automatic jump time when answering. Session time management works as follows (a minimal sketch follows these steps):

Step 1: a background service sets the session time and monitors the system continuously; whenever the user operates, the idle timer restarts from 0.
Step 2: when the user performs no operation, the idle time is recorded and grows with the duration of inactivity.
Step 3: the recorded idle time is compared with the session time; if it is smaller, recording continues. If the user operates during this period, return to Step 1; otherwise repeat Step 2.
Step 4: when the recorded idle time is greater than or equal to the set session time, the client automatically jumps to the next question.
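A minimal sketch of the session-time logic in Steps 1-4 above; the class and method names are hypothetical, and the polling interval is an assumption.

```python
import time
import threading

class SessionTimer:
    """Sketch of the session-time management described in Steps 1-4.

    `session_time` is the configured timeout in seconds; `on_timeout` is
    called when the idle time reaches the session time (e.g. to jump to
    the next question).
    """

    def __init__(self, session_time: float, on_timeout):
        self.session_time = session_time
        self.on_timeout = on_timeout
        self.last_activity = time.monotonic()   # Step 1: idle timer starts at 0
        self._stop = threading.Event()

    def on_user_operation(self):
        # Step 1: any user operation resets the idle timer to 0
        self.last_activity = time.monotonic()

    def run(self, poll_interval: float = 0.5):
        # Steps 2-4: keep recording the idle time and compare it with
        # the configured session time
        while not self._stop.is_set():
            idle = time.monotonic() - self.last_activity   # Step 2
            if idle >= self.session_time:                  # Step 4
                self.on_timeout()
                self.last_activity = time.monotonic()
            time.sleep(poll_interval)                      # Step 3

    def stop(self):
        self._stop.set()
```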
2.4 Database Design

The database is the infrastructure for the normal operation of the whole system; it directly affects the success or failure of the system design and the performance of system operation. According to user needs, the system database mainly involves three core businesses, namely course management, evaluation management and online communication, and, according to the operating requirements of the system, also covers operations such as system login and the personal center. The goal of database design must therefore be clarified first, the data involved in daily business abstracted, and the database model designed. According to user needs, the database of this system mainly includes the following data types:

(1) Student information: student number, student name, gender, age, major, contact information, home address, online status and other related attributes.
(2) Course information: course number, course name, course category, major and other related attributes.
(3) Comment information: records of students' online communication. In the system, course reviews are arranged in positive order by default, i.e. the comments closest to the current time appear at the top; attributes include the comment content, comment date and commenter.
(4) Question bank information: questions of various types, mainly including the question number, question type, score and answer.
(5) Examination information: data on the examinations published by the system, with relevant attributes such as the examination number, name, method, object, time, location and major.
(6) Answer information: the answer data for the tests specified by the system, with attributes such as the answer number, student number, question number, test number and the student's answer.
(7) News announcement information: the information number, title, content, date, publisher and other related attributes.

In the process of data analysis, the design of the E-R model is key. The E-R model intuitively reflects the content of the data involved in the system and the relationships between them; on this basis, the required content is conveniently converted into data and stored in the database. Rectangular boxes represent entities, diamond boxes represent relationships between entities, 1:N represents a one-to-many relationship, and N:N a many-to-many relationship. The entity-relationship model of the mobile terminal of the scenario simulation teaching system for legal education is shown in Fig. 3.
[Figure: E-R model. Entities news announcement, examination, subject (question), answer, student, major, curriculum and comment, linked by relationships such as see, participate in, answer, belong to, sign up, publish and comment]
Fig. 3. The entity relationship model of the database
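For illustration, a sketch of two of the tables above and their 1:N relationship (one major has many students) is given below. The column names are hypothetical renderings of the attribute lists, and SQLite stands in for the MySQL database the system actually uses.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE major (
    major_id    INTEGER PRIMARY KEY,
    major_name  TEXT NOT NULL
);
CREATE TABLE student (
    student_no  TEXT PRIMARY KEY,
    name        TEXT NOT NULL,
    gender      TEXT,
    age         INTEGER,
    contact     TEXT,
    home_addr   TEXT,
    online      INTEGER DEFAULT 0,
    major_id    INTEGER REFERENCES major(major_id)  -- 1:N: one major, many students
);
""")
```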
3 System Test

In the software development process, testing is an essential step. Software testing is the process of running or measuring a system by manual or automatic means, in order to verify whether the software under test meets the specified requirements, or to find
the difference between the actual results and the expected results. Testing measures the quality of the software, eliminates hidden dangers during use, and ensures the stability of the software system.

3.1 Testing Environment

The operating system of the test server is Windows 7, the development language is Java with the JDK 1.6 development environment, and the web server is Tomcat 6.0. The system database environment is MySQL 5.7, and the server is deployed on a local PC. Considering the compatibility of different models, the client is divided into Android and iOS versions. The Android model used for testing is mainly a Meilan 2, with a 5-inch main screen, a resolution of 1280 × 720 and the Android 6.4 operating system; the iOS model is an iPhone 6, with a 4.7-inch main screen, a resolution of 1334 × 750 and iOS 10.2. The network used for data transmission is the campus laboratory network.

3.2 Testing Method

System testing methods generally include white box testing and black box testing.

(1) Black Box Test
Black box testing, also known as functional testing, data-driven testing or specification-based testing, tests from the user's point of view; testers treat the program under test as a black box. The main types of errors detected by black box testing are incorrect or missing functions; interface errors; performance errors; data structure or external data access errors; and initialization or termination errors. Commonly used black box methods include equivalence class partitioning, boundary value analysis, cause-effect diagrams, the scenario method, orthogonal experiment design, decision table driven analysis, error inference and function diagram analysis.

(2) White Box Testing
White box testing checks whether the internal actions of the product proceed normally in accordance with the design specifications, and whether each path in the program works correctly according to the predetermined requirements. It analyzes the internal structure of the program, which is transparent to the tester: the tester can see the source code under test and analyze its structure, so white box testing is also called structural testing or logic-driven testing. Its principles are: ensure that every independent path in a module is exercised at least once; test all logical conditions for both true and false values; check the internal data structures of the program to ensure their validity; and run within the upper and lower boundaries of the operational range. White box methods include code inspection, static structure analysis, static quality measurement, logic coverage, the basic path method, domain testing, symbolic testing, path coverage and program mutation.
3.3 Black Box Test Results

Three modules were chosen for testing: the login module, the scenario simulation teaching module and the evaluation module.

(1) Login Module Test
Purpose: to verify whether the module operates in accordance with normal business logic. Test content: whether the user can log in normally. The test results are shown in Table 1.

Table 1. Login module function test results

| Use cases | Test results |
|---|---|
| Do not enter ID, password or verification code | Prompt to enter the ID, password and verification code |
| Enter a wrong username, password or verification code | Prompt that the entered ID, password or verification code is wrong |
| Close the client's network connection | Prompt to check the system network connection |
| Shut down the server | Connection timed out; server-side exception, please contact the administrator |
| Accurate input and normal network connection | The main interface of the system is entered successfully |
It can be seen from Table 1 that only when the information is entered accurately and the network connection is normal can the main interface of the system be entered, which means the login module effectively prevents unregistered users from logging in and protects the information data.

(2) Scenario Simulation Teaching Module Test
Purpose: to verify whether the module normally displays, plays and evaluates courses. The test content and results are shown in Table 2 below.
Table 2. Function test results of the scenario simulation teaching module

| Test content | Anticipatory judgment | Test results |
|---|---|---|
| Scene teaching online play | Can play online | Video and audio can be played online according to learning records |
| Situational teaching course download | Can download the courseware | Video and audio files can be downloaded; the default storage address is the memory card |
| Situational teaching curriculum evaluation | Can the course be evaluated | Courses can be evaluated, with a word limit set |
According to Table 2, the scenario simulation teaching module can play video and audio online according to learning records, download video and audio files, and evaluate courses with a word limit. The test results are consistent with the expected judgments, so the module effectively realizes situational simulation teaching.

(3) Evaluation Module Test
Purpose: to verify whether the module supports basic question training, timing, scoring and question collection. The test content and results are shown in Table 3 below.

Table 3. Functional test results of the evaluation module

| Test content | Anticipatory judgment | Test results |
|---|---|---|
| Basic question training | Whether question types are displayed according to the major | The question bank is initialized and question types are displayed according to the selected course |
| Timer function | Can it be timed | Timing starts on entering the test, measured in seconds |
| Scoring function | Can it score | Scoring starts on entering the test, in real time |
| Collecting questions | Can a question be collected | Clicking "Favorite" successfully bookmarks the question |
| Collecting wrong questions | Can wrong questions be collected | A wrong answer is automatically matched to the wrong-question list |

According to Table 3, the evaluation module initializes the question bank and displays question types according to the selected course, realizes the timing and scoring functions, bookmarks questions via "Favorites", and automatically collects wrongly answered questions, effectively supporting the scenario simulation teaching.
3.4 White Box Test Results

For a legal education scenario simulation teaching system serving teachers and students, beyond designing functions that meet the needs, the performance of the system must also be considered: performance reflects the quality of the system and users' willingness to use it. For highly concurrent access by multiple users, software stress testing should be used; under the influence of the network and the system design, the latency of data interaction is the key index. Taking the mobile terminal login function as an example, a stress test was conducted with Apache's open-source testing tool JMeter: thread groups were created to simulate multiple users dynamically assembling request data, login requests were sent over HTTP, and different numbers of threads were set to simulate concurrent users. The test results are shown in Table 4.
Table 4. Stress test results

| Concurrent number of mobile terminals (piece) | Response time (s) |
|---|---|
| 100 | 0.68 |
| 200 | 1.56 |
| 300 | 2.05 |
| 400 | 2.95 |
| 500 | 4.50 |
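The paper's stress test was driven by JMeter thread groups; for illustration, a minimal Python sketch of the same idea (N simulated users logging in concurrently while the mean response time is recorded) is given below. The endpoint URL and payload fields are placeholders.

```python
import time
import concurrent.futures
import requests

LOGIN_URL = "http://localhost:8080/login"   # hypothetical endpoint

def login_once(i):
    # time a single login request
    t0 = time.perf_counter()
    requests.post(LOGIN_URL, data={"id": f"user{i}", "password": "x"}, timeout=10)
    return time.perf_counter() - t0

def stress(concurrency):
    # fire `concurrency` login requests at once and average the latencies
    with concurrent.futures.ThreadPoolExecutor(max_workers=concurrency) as pool:
        times = list(pool.map(login_once, range(concurrency)))
    return sum(times) / len(times)

for n in (100, 200, 300, 400, 500):
    print(n, round(stress(n), 2))
```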
With up to 500 concurrent requests, the response time of the system remains within the acceptable range, indicating that in terms of performance the whole system maintains a good level and basically reaches the effect expected at the beginning of the design.

3.5 Student Satisfaction Test

To further verify the practicability of this system, student satisfaction was taken as the experimental index, and this system, the system of document [1] and the system of document [2] were compared. The test results are as follows (Fig. 4).
[Figure: student satisfaction (%) over ten experiments for this system, the document [1] system and the document [2] system]
Fig. 4. Comparison results of student satisfaction test
As can be seen from the above figure, the student satisfaction of the system in this paper reaches 98% at most and never falls below 90%. The satisfaction of the document [1] system is at most 88% and at least 60%, and that of the document [2] system at most 81% and at least 62%. It can be seen that the student
satisfaction of this system is significantly higher than that of the comparison systems, indicating that this system is practical.
4 Conclusion

The combination of the technology represented by this system with the traditional classroom is an exploration and innovation of the teaching mode and a supplement to online and distance education, making up for their deficiencies and providing teachers with teaching software suited to the current situation. Improving students' learning can effectively raise the school's work efficiency and management level, and has a certain impact on teaching quality. Combined with modern mobile technology, this paper designs an intelligent teaching system for legal education scenario simulation based on mobile terminals. Black box and white box tests verify that the function and performance of the system meet the requirements, and the student satisfaction experiment verifies that satisfaction with the system reaches 98% at most and stays above 90%, which proves that the system has application value. The development so far realizes the required basic functions; however, owing to limited technical level and development time, the system still needs improvement in many aspects and practical functions. Future work will update the system through user feedback, establish a reasonable model, keep learning the latest theories and technologies of system development at home and abroad, strengthen learning management, and provide more complete service software for teachers and students.
References

1. Zhu, L.L.: Application of online intelligent teaching system in colleges and universities based on artificial intelligence. J. Jiangxi Electr. Power Vocat. Tech. College 34(2), 20–21 (2021)
2. Liu, Y.H.: Intelligent teaching system based on big data analysis technology. Mod. Electron. Tech. 44(7), 178–182 (2021)
3. Ma, J., Niu, L., Li, X.: Design of key universities' laboratory intelligent teaching system based on multimedia and network technologies. Mod. Electron. Tech. 44(20), 1–6 (2021)
4. Diaz, J.M., Costa-Castello, R., Dormido, S.: Closed-loop shaping linear control system design: an interactive teaching/learning approach. IEEE Control. Syst. Mag. 39(5), 58–74 (2019)
5. Jiang, H.: Design and application of intelligent teaching system based on 5G communication. Guangxi Educ. 35, 172–173 (2021)
6. Xiao, L.: Research on intelligent teaching system based on multi-modal deep learning technology. China Comput. Commun. 34(16), 227–230 (2022)
7. Li, G., Wang, F.: Research on art innovation teaching platform based on data mining algorithm. Clust. Comput. 22(2), 13867–13872 (2018)
8. Granjo, J.F.O., Rasteiro, M.G.: LABVIRTUAL—a platform for the teaching of chemical engineering: the use of interactive videos. Comput. Appl. Eng. Educ. 26(5), 1668–1676 (2018)
9. Martinez, L.G., Marrufo, S., Licea, G., Reyes-Juarez, J., Aguilar, L.: Using a mobile platform for teaching and learning object oriented programming. IEEE Latin America Trans. 16(6), 1825–1830 (2018)
10. Merayo, N., Ruiz, I., Debran, J., et al.: AIM-mobile learning platform to enhance the teaching-learning process using smartphones. Comput. Appl. Eng. Educ. 26(5), 1753–1768 (2018)
11. Yao, K., Li, L.: Mobile terminal network survivable database security anti-tampering simulation. Comput. Simul. 37(1), 456–459+483 (2020)
Security Analysis of Car Driving Identification System Based on Deep Learning Xiaogang Wei1(B) and Rong Zhang2 1 Chongqing Vocational Institute of Tourism, Chongqing 409000, China
[email protected] 2 Jiangmen Polytechnic, Jiangmen 529000, China
Abstract. The implementation of the car driving identification system will help to improve the safety and efficiency of navigation. The safety analysis technology can filter the attack events of the automatic identification system of car driving and reduce the error rate of safety analysis. To this end, a deep learning-based safety analysis method for car driving identification system is proposed. Based on the deep learning theory, a safety behavior analysis model of the car driving identification system is constructed. Through data collection, security analysis and response processing, the identification of abnormal communication security data strength is completed. A Cartesian coordinate system is established, and a heterogeneous data processing model is constructed. Based on the deep learning analysis process of security data, based on deep learning, by accessing system operation data, stream processing and data mining of security data, complete system security data defense control, and realize system security analysis. The experimental results show that the method in this paper has strong security analysis ability, and its matching range is large, which can match all security behaviors and reduce the error rate of security analysis. Keywords: Deep Learning · Car Driving · Automatic Identification System · System Security · Security Analysis
1 Introduction In recent years, the automatic identification system of car driving has been extended to the field of navigation, the confidential information in the automatic identification system of car driving has increased greatly, and the importance of the safety technology of automatic identification system of car driving has become increasingly prominent. How to effectively analyze the information and ensure the data security of the car driving identification system has become an urgent problem to be solved. As the number and severity of AIS attacks increases as the number of AIS users and information increases. Security analysis technology is an effective security method to discover a series of malicious behaviors that threaten the integrity, confidentiality and availability of information resources. For the large amount of event data of the car driving identification system, the © ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2024 Published by Springer Nature Switzerland AG 2024. All Rights Reserved B. Wang et al. (Eds.): ICMTEL 2023, LNICST 532, pp. 468–484, 2024. https://doi.org/10.1007/978-3-031-50571-3_34
safety analysis technology can accurately classify normal and abnormal events while maintaining the best classification rate, so as to filter the attack events against the automatic identification system of car driving and reduce the false alarm rate [1]. At present, scholars in related fields have carried out research on system security analysis. Reference [2] proposes an aero-engine system safety analysis method based on the Simscape model: on the basis of the general characteristics of model-based safety analysis fault expansion, two ways of external and internal fault expansion of the modeling language in the Simscape environment are analyzed to establish a coupled fault model, and a formal safety analysis of independent and coupled faults is carried out, taking the full-authority digital engine control main fuel control subsystem as a research example. This method ensures the consistency of design and safety analysis, but its safety analysis error rate is high. Reference [3] proposes an IMA system security analysis method based on AADL and HiP-HOPS: the AADL language describes the system architecture and fault information to establish an AADL architecture model, and a conversion from the AADL model to the HiP-HOPS model is proposed for further safety analysis. With HiP-HOPS, fault trees can be generated, IF-FMEA combined failure analysis can be performed, and the safety and reliability analysis of IMA flight planning system fault propagation can be carried out. This method can effectively reduce the error rate, but its security analysis ability is weak. Aiming at the above problems, this paper proposes a safety analysis method for the automatic identification system of car driving based on deep learning. A system safety behavior analysis model is established, and the identification of abnormal communication safety data strength is completed through data collection, safety analysis and response processing. A heterogeneous data processing model is built; based on a deep learning analysis process over the security data, system operation data are accessed and the security data are stream-processed and mined, completing the defense control of system security data and realizing system security analysis. The effectiveness of the method is verified by experiments.
2 Constructing the Analysis Model of the Safety Behavior of the Car Driving Identification System

The safety behavior analysis model is an important part of safety analysis technology. Based on deep learning theory, a deep learning classification model is established and simplified; the method is simple, fast and highly accurate in classification [4]. Its core algorithm is as follows. Let a safety behavior be the sample A = (a1, a2, ..., an), an n-dimensional Boolean vector. Divide the events of the car driving automatic recognition system into the classes C ∈ (C1, C2, ..., Cm, f), giving an m-class classification problem, where f is the mapping function. The training samples X1, X2, ..., XN are obtained according to the mapping function, where X = (x1, x2, ..., xt) is a t-dimensional Boolean vector. The calculation steps are:
(1) Calculate the prior probability of the training class cj, denoted P(cj):

$$P(c_j) = \frac{\sum_{i} N_i(c_j)}{\sum_{j=1}^{c} T_j} \tag{1}$$

where Ni(cj) denotes the training samples of class cj, and Tj the total training sample set.

(2) Analyze the feature ai in the training samples, and the conditional probability P(ai | cj) of the feature value within an event class of the automatic identification system for car driving:

$$P(A = a_i \mid c_j) = \frac{S(a_i \in \forall c_j)}{\sum_{j=1}^{c} N_j(c_j)} \tag{2}$$

where S(ai ∈ ∀cj) denotes the sample set in which the feature appears, and Nj(cj) the total number of feature samples.

(3) From the above formulas, the overall probability of the feature value in the training samples is obtained:

$$P(a_i) = \sum_{i=1,\, j=1}^{A \cup C} P(a_i \mid c_j)\, P(c_j) \tag{3}$$

(4) Using the independence assumption of the deep learning automatic recognition system for car driving, the posterior probability of a safety behavior is analyzed:

$$P(c_j \mid a_i) = \frac{P(c_j)\, P(a_i \mid c_j)}{P(a_i)} \tag{4}$$

To obtain a topology of the directed acyclic automatic recognition system for car driving that truly reflects the relationships between samples, the structure of the system is studied. The car driving automatic recognition system in this paper can display the latent conditional-independence relationships and probability distribution functions in the data. According to the characteristics of the parameter learning method, the safety behavior analysis model of the car driving identification system divides parameter estimation into two categories: classical statistical estimation and deep learning statistical estimation. Moment estimation and maximum likelihood estimation are usually used for statistical parameter estimation, and maximum likelihood estimation is a common method in learning conditional probability tables [5, 6].
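The classification rule of Eqs. (1)-(4) is, in effect, a naive-Bayes-style computation over Boolean feature vectors; a minimal sketch is given below. The add-one smoothing is an assumption introduced to avoid zero probabilities.

```python
import numpy as np
from collections import defaultdict

def train(samples, labels):
    """Estimate class priors (Eq. 1) and per-feature conditionals (Eq. 2)."""
    labels = list(labels)
    classes = sorted(set(labels))
    prior = {c: labels.count(c) / len(labels) for c in classes}   # Eq. (1)
    X, y = np.asarray(samples), np.asarray(labels)
    cond = defaultdict(dict)
    for c in classes:
        Xc = X[y == c]
        for i in range(X.shape[1]):
            # Eq. (2), with add-one smoothing (an assumption)
            cond[c][i] = (Xc[:, i].sum() + 1) / (len(Xc) + 2)
    return prior, cond

def posterior(a, prior, cond):
    """Score a behavior sample a with Eqs. (3)-(4)."""
    scores = {}
    for c, p in prior.items():
        like = np.prod([cond[c][i] if a[i] else 1 - cond[c][i]
                        for i in range(len(a))])
        scores[c] = p * like                       # numerator of Eq. (4)
    total = sum(scores.values())                   # Eq. (3): P(a)
    return {c: s / total for c, s in scores.items()}   # Eq. (4)
```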
Deep learning here is divided into two stages, namely structure learning and parameter learning. Structure learning realizes information analysis through the topology of the car driving identification system and explores the conditional probabilities of the node variables inside it. The deep learning car driving automatic recognition system can train the sample data well and use the analyzed data together with prior knowledge to obtain the best topology. Its reasoning methods include causal analysis, diagnostic analysis and support analysis. Causal reasoning is bottom-up: after analyzing the cause a conclusion is drawn, and the inference that different phenomena appear under different circumstances is verified against the known evidence. Diagnostic reasoning is the opposite: the conclusion is used to analyze the cause, and the probability of the cause is determined after the reasoning result is fixed. Supporting reasoning analyzes the data by verifying the interactions between different causes. Through statistical research on knowledge classification, different attribute values are judged over a large database and the accuracy of the method is improved. The workflow of the deep-learning-based safety behavior analysis model of the car driving identification system is shown in Fig. 1.
Fig. 1. Workflow of the system safety behavior analysis model
It can be seen from Fig. 1 that in the first stage, the flow data of the automatic identification system for car driving is analyzed. After different analysis types are identified, the mapping set is obtained as UTC = (Tk , Ck |∀Tk ∈ Ck ), 1 ≤ k ≤ n. Data discretization processing and feature selection are completed through training, that is, data preprocessing is realized. In the preprocessing, the effective data is filtered out, the prior probability P(Tk |Ck ), k = 1, 2, · · · , n is obtained according to the statistical results, and the centralized data set is determined through the mapping relationship, so
that the internal safety behavior of the entire car driving identification system can be analyzed. In the second stage, the data in the whole framework are extracted and visualized following the ideas of discretization and feature selection, and the internal redundant data and unimportant feature data are pruned. By reducing the time and space complexity of the safety analysis of the car driving identification system, its accuracy can be improved [7].
3 Car Driving Identification System Communication Abnormality Safety Data Strength Identification

To better handle heterogeneous data, the strength of the abnormal communication safety data of the automatic identification system is identified. Security data strength identification is divided into three parts: data collection, security analysis and response processing. The structure of system communication anomaly security data strength identification is shown in Fig. 2.
Fig. 2. System communication abnormal security data strength identification structure
According to Fig. 2, data collection gathers the heterogeneous data and is the basis of safety analysis. Security analysis is the core step of the deep learning analysis method: it processes the collected data and compares them with the original data to determine whether the data are normal and safe and whether they affect the overall operating state. If the data are abnormal, an alarm is issued through response processing, and the on-duty personnel extract the data from the original data stream and compare them with the stored data for correction. The deep learning analysis integrates the information of the main car driving identification system with that of several diversity-engine car driving identification systems. According to
the trajectory of the heterogeneous data, its safety behavior is found; the normal samples of the host are compared with the analyzed data samples, and the heterogeneous data are corrected to ensure the accuracy of system resources [8]. In the identification of data strength, the data can therefore be classified according to the analysis object and the security method: two categories by analysis object (host data and car driving identification system data) and two categories by security method (data exceptions and input errors). The classification of safety data is shown in Table 1.

Table 1. Classification of safety data

| Classification basis | Classification result |
|---|---|
| Analysis object | Host data; car driving identification system data |
| Security way | Data exception; input error |
In order to minimize data security risks, errors should be avoided when inputting computer programs, reducing system errors and improving system accuracy. The security methods for data anomalies are classified so that data anomalies and input errors are discovered in time and their impact on later data is avoided. According to the log and display data, known abnormal data are turned into attack code patterns and stored in the security simulation database; real-time data are then matched against the abnormal patterns in the security schema to identify the security data, as sketched below. The data encoding method is shown in Fig. 3.

Fig. 3. Data encoding method
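The encoding-and-matching step can be pictured with a short sketch: known abnormal data are stored as attack-code patterns, and each real-time record is compared against them. The patterns and the record format below are hypothetical, chosen only to make the idea concrete.

```python
import re

# Hypothetical attack-code patterns distilled from logged abnormal data.
ATTACK_PATTERNS = [
    re.compile(rb"\x90{8,}"),                 # long NOP sled
    re.compile(rb"SELECT\s.+\sFROM", re.I),   # injected query fragment
]

def classify(record: bytes) -> str:
    """Match a real-time record against the stored abnormal patterns."""
    for pat in ATTACK_PATTERNS:
        if pat.search(record):
            return "abnormal"   # raise alarm, route to response processing
    return "normal"

print(classify(b"GET /index HTTP/1.1"))          # normal
print(classify(b"id=1; SELECT pwd FROM users"))  # abnormal
```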
4 Build a Heterogeneous Data Processing Model

Because the abnormal data samples and the normal data samples are inconsistent in the Cartesian coordinate system, this paper completes the data processing by establishing a Cartesian coordinate system and constructing a heterogeneous data processing model. Randomly select n data as the basic sample, draw the ROC curve through Matlab software, determine the exact value of the data, and repeat this operation. Other samples are sampled to calculate the offset between the standard data and the measured data, the coincident data are stored, and the differing data are rechecked and integrated until the ROC curve is a coincident line [9]. The established Cartesian coordinate system is shown in Fig. 4.

Fig. 4. Data processing Cartesian coordinate system

The vertical axis of the ROC curve represents the analysis probability, and the horizontal axis represents the false alarm rate. The false alarm rate is analyzed according to the change of the curve: the higher the threshold, the higher the accuracy of the system and the stronger its self-identification ability. If the ROC is not a smooth curve, it needs to be divided into several segments to form several small trapezoids, and the area of each trapezoid is calculated as the exact value of the data. By adding the trapezoidal areas and subtracting the number of normal data from the number of data, the false alarm rate of abnormal data is obtained. The optimal working point of the analysis is found on the ROC curve, and the positive likelihood ratio and Youden index are used to discriminate between the error detection rate and the false alarm rate. Looking at Fig. 4, it can be seen that some abnormal sample points may be generated by operational errors in system communication; in general, isolated points that do not coincide with the ROC curve are caused by such errors. Therefore, in order to ensure the accuracy of the data, we must first ensure the accuracy of the data in the host system, correct the irregular data, and mark the heterogeneous data [10].

The networking of the car driving system is periodic, and each program is arranged in the order of abnormal data attack. Programs are called according to different programs and different environments. In the car driving identification system, each node corresponds to a different program, and each path inside a program represents one data transmission process. Different data transmission channels are established to form a data network, deep learning analysis and alarm systems are implanted, and security deep learning analysis is conducted under certain security strategies. Since each path in the car driving identification system has a starting point and an ending point, and nodes can store data, abnormal data often attack each node through loopholes in the program, changing the data transmission path of the source program so that the overall sequence differs from the normal execution sequence. Therefore, in the process of data processing, this paper builds a heterogeneous data processing model:

E(n) = \frac{1}{2}\sum_{j=1}^{J} e_j^2(n)    (5)

Among them, e is a constant value, and n is the number of abnormal data detected.
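The ROC computation described above (splitting a non-smooth curve into small trapezoids, summing their areas, and using the Youden index to pick a working point) can be sketched as follows. The scores, labels and threshold grid are toy values; this is an illustration of the general technique, not the authors' MATLAB implementation.

```python
import numpy as np

def roc_points(scores, labels, thresholds):
    """True-positive and false-positive rates at each decision threshold."""
    scores, labels = np.asarray(scores), np.asarray(labels)
    tpr, fpr = [], []
    for th in thresholds:
        pred = scores >= th
        tpr.append((pred & (labels == 1)).sum() / max((labels == 1).sum(), 1))
        fpr.append((pred & (labels == 0)).sum() / max((labels == 0).sum(), 1))
    order = np.argsort(fpr)
    return np.array(fpr)[order], np.array(tpr)[order]

scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.1]   # toy analysis probabilities
labels = [1, 1, 0, 1, 0, 0]               # 1 = abnormal, 0 = normal
fpr, tpr = roc_points(scores, labels, thresholds=np.linspace(0, 1, 21))

auc = np.trapz(tpr, fpr)        # sum of the small trapezoid areas under the curve
youden = (tpr - fpr).max()      # Youden index J = TPR - FPR at the best working point
print(f"AUC = {auc:.3f}, Youden index = {youden:.3f}")
```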
5 Security Data Deep Learning Analysis

Deep learning analysis methods are mainly divided into five stages: database cleaning and integration, database storage, selection and transformation of specific data sets, data mining to form patterns, and evaluation and representation. The security data deep learning analysis process is shown in Fig. 5. Observing Fig. 5, it can be seen that the role of database cleaning and integration is to find abnormal data and to integrate and clean them. The role of database storage is to store the cleaned data and combine the data in the data source. The function of selecting and converting a specific data set is to filter and integrate the integrated data into a data package for specific storage. The data-mining pattern formation stage decomposes the specifically stored data in a regular way. The function of evaluating and representing knowledge is to filter out meaningful pattern knowledge from the decomposed data and display it to the user through data visualization.

The data is transmitted through the transmission network, and the small central processing unit at the end node reads, audits and records the data, analyzes it with the execve/inetd system, and generates new data, while the host system identifies the data itself. If the data is abnormal, the process returns to the beginning, and the data is rechecked and retransmitted. If the data is consistent with the original data, it continues to be transmitted downward, the data value of the path is obtained, and it is judged whether the program is correct; if not, it is reprogrammed. If the data is weighted, the corresponding service privilege vector is obtained, and it is judged whether the PID exists in the vector. If it is not present in the vector, the system needs to be reprogrammed; if it exists in the vector and transmission continues downward, it is judged whether it is a fork.
Fig. 5. Security data deep learning analysis process
If the data continues to be transmitted to the background, the output data is stored in the corresponding data repository; if not, the data needs to be checked, corrected and retransmitted.

In this paper, the original data are mapped into three-dimensional space by constructing a linear function, and the space is divided. The optimal solution is obtained by using the matrix to find heterogeneous data, and the minimum error average criterion E = 1 is adopted. The overall data set is divided into non-overlapping data blocks, so that the overall data forms a compact independent body. The segmentation process is shown in Fig. 6, and the data independent body is obtained according to the data segmentation principle in Fig. 6. After the discrete attribute data are obtained, the original data are mapped into three-dimensional space by constructing a new linear function; the space is divided and the matrix is used in the search for heterogeneous data to obtain the optimal solution.

The deep learning analysis method relies mainly on the simulated database: if there is no normal data in the simulated database, the attack data cannot be analyzed, and if the real-time encoding of the abnormal data and the normal data exceeds a certain threshold, the normal data will also be attacked. The overall data is analyzed and processed to show the nonlinear relationship between normal data and abnormal data. The data is compared by random sampling, an HMM model is established, and the analysis function of the HMM model is used to identify abnormal data through the deep learning test, realizing distributed processing of information and increasing the adaptability of the processing process. The HMM model construction is shown in Fig. 7.
Fig. 6. Principle of data segmentation
Fig. 7. HMM model construction
Observing Fig. 7, it can be seen that the data transmitted on each channel can be regarded as a data stream, which is continuous, large in capacity and fast in response. When deep learning analysis of heterogeneous data security is performed in the data stream, data that belongs to the event data is security data. There are three main types of anomalies: point anomalies, sample anomalies, and sequence anomalies. Transmitting the sensing data stream enhances the stability of the data stream algorithm, improves the analysis performance, and increases the stability of the analysis process.
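To make the HMM-based identification of abnormal data streams concrete, here is a minimal sketch that scores observation sequences with the scaled forward algorithm and flags low-likelihood sequences as anomalies. The model parameters, symbol alphabet and threshold are assumptions for the example, not values from the paper.

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM
    (scaled forward algorithm to avoid numerical underflow)."""
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        loglik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return loglik

# Hypothetical 2-state model trained on normal channel traffic.
pi = np.array([0.6, 0.4])                            # initial state distribution
A  = np.array([[0.9, 0.1], [0.2, 0.8]])              # state transitions
B  = np.array([[0.7, 0.2, 0.1], [0.1, 0.3, 0.6]])    # 3 observation symbols

normal_seq  = [0, 0, 1, 0, 0]
strange_seq = [2, 2, 2, 2, 2]
threshold = -8.0   # assumed per-sequence cutoff
for seq in (normal_seq, strange_seq):
    ll = forward_loglik(seq, pi, A, B)
    print(ll, "anomaly" if ll < threshold else "normal")
```

Sequences that the trained model assigns unusually low likelihood are the candidates for point, sample or sequence anomalies in the stream.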
6 Car Driving Identification System Security Data Defense Control

The core concept of deep learning here is to combine a variety of big data technologies to realize the functions of collecting, processing, analyzing, storing and retrieving massive data. In order to realize precise defense control of the safety data of the car driving identification system, this paper, based on deep learning, performs stream processing and data mining on the safety data by accessing system operation data, so as to achieve high-efficiency, low-latency and high-accuracy defense control of the system's safety data. The workflow of the safety data defense control of the car driving identification system based on deep learning is shown in Fig. 8.
Fig. 8. Security data defense control process of car driving identification system
(1) The host detector collects the operating data of the host operating system and the logs of the application programs, while the data packets of the car driving identification system are obtained by the network detector. The security data are identified according to the corresponding security rules, and simple processing is performed on the data. When safety data are detected, the alarm mechanism is activated to send the safety information to the central processing unit.

(2) After the central processor receives the alarm signals and safety data from the different detectors, it divides the safety data into R data sets X based on the principle of deep learning, and performs classification training on the generated data sets. Meta-classifiers are used to mark the security data. Assuming that a certain piece of data is a, the labeling result of the R meta-classifiers for data a can be expressed as:

G(a) = \sum_{r=1}^{R} \operatorname{sign}(g_r(a) = b)    (6)
where G(a) is the labeling result, sign is the indicator function, g_r(a) is the label assigned to the unlabeled data a by the r-th meta-classifier, and b is the labeling coefficient. When g_r(a) = b, the sign value is 1; otherwise it is zero. The confidence of the marked security data a is calculated as follows:

con(a) = \frac{1}{R}\sum_{r=1}^{R} \operatorname{sign}(g_r(a) = b)    (7)
Considering the noise pollution carried by the security data, a generalization error algorithm is introduced to estimate the generalization error of the confidence. The calculation method is as follows:

u = \frac{1}{n}\sum_{(a,b)} \operatorname{sign}(con(a))    (8)
Among them, n represents the number of data items contained in the dataset X, and u represents the generalization error. According to this error estimate, if the error value is greater than 1, the confidence calculation needs to be redone; if it is smaller, the error is not enough to affect the final calculation result, can be ignored, and the next step is carried out.

(3) Select the confidence threshold. According to the above confidence calculation results, the optimal threshold for high confidence and the optimal threshold for low confidence are selected, denoted by \varepsilon_{1r} and \varepsilon_{2r} respectively. The selection of the confidence threshold affects the efficiency of data analysis; the two optimal thresholds should satisfy the following relationship:

\varepsilon_{1r} = \frac{1}{\varepsilon_{2r}}    (9)
(4) Divide the data according to their confidence thresholds: security data whose confidence is higher than the optimal high-confidence threshold are placed in the training sample set and trained. The correlation between data is analyzed and security rules are extracted from the rule base, so as to clarify the types of security data. Safety data whose confidence is lower than the optimal low-confidence threshold, or lies between the two thresholds, are put back into the data set X for the next round of calculation.

(5) Match the appropriate defense mechanism from the database according to the identification result of the security data. The control unit of the central processing unit issues control instructions to operate the corresponding defense mechanism, and the data results and information are stored in memory, which facilitates the next safety signal analysis. In summary, the confidence of the safety data is calculated by marking them, the generalization error algorithm is used to obtain the optimal thresholds, the appropriate defense mechanism is matched, and the defense control of the safety data of the car driving identification system is completed.
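A compact sketch of the confidence marking and threshold-based division in steps (2)-(4) follows. It reads formula (7) as a vote fraction over the R meta-classifier labels, which is an interpretive assumption; the votes and thresholds are invented toy values.

```python
import numpy as np

def confidence(votes, b):
    """con(a) in the spirit of formula (7): fraction of the R meta-classifier
    labels g_r(a) that equal the marking coefficient b."""
    return (np.asarray(votes) == b).mean()

def route(con_a, eps_high):
    """Division rule of step (4): high-confidence data enter the training
    sample set; low or intermediate confidence goes back into data set X."""
    return "training set" if con_a > eps_high else "back to data set X"

# Hypothetical votes of R = 5 meta-classifiers for two samples, label b = 1.
for votes in ([1, 1, 1, 0, 1], [1, 0, 0, 1, 0]):
    c = confidence(votes, b=1)
    print(votes, f"con = {c:.2f} ->", route(c, eps_high=0.7))
```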
7 Experimental Studies

In order to verify the effectiveness of the safety behavior analysis method of the car driving identification system based on deep learning proposed in this paper, a comparative experiment was set up against the method of reference [2] and the method of reference [3]. The experimental data come from the KDD CUP security analysis data set, whose internal data sources mainly include two parts: (1) seven weeks of training data, about 5,000,000 connection records of the automatic identification system for car driving; (2) abnormal attack types. There are 22 types of attacks, which can be divided into four main categories. The description of the exception types is shown in Table 2.
Table 2. Description of abnormal types

Type of attack | Description of attack
Probe | Attacks realized by monitoring or scanning ports
R2U | Attacks on the process host, counting all unauthorized accesses
U2R | Analysis of unauthorized local user access methods
DOS | All denial of service attacks
The experimental process is shown in Fig. 9.
Fig. 9. Analysis of the experimental process
The set experimental parameters are shown in Table 3.
Table 3. Experimental parameters

Parameter | Value
Operating system | Windows 10
CPU frequency | 5.2 GHz
Memory | 5 GB
Hard disk storage space | 200 GB
Programming tools | MATLAB 8.0
Test analysis method | Trs/Tes
Fig. 10. Build experimental environment
The experimental environment is set up as shown in Fig. 10. Experiments were carried out according to the above experimental parameters and experimental environment, and the safety analysis capabilities of different methods were compared. The analysis results are shown in Fig. 11. According to Fig. 11, compared with the methods of references [2] and [3], the method in this paper has strong information matching ability: it can match all safety behaviors and accurately analyze all safety types, so as to realize behavior analysis. Because the training sample set inside the method is continuously expanded, the error of the conditional probability is gradually reduced, the continuous learning and expansion abilities of the method are improved, and the security analysis ability is enhanced. After determining the safety analysis capability, the safety analysis matching ranges of different methods are shown in Table 4.
(a) This paper method
(b) The reference [2] method
(c) The reference [3] method

Fig. 11. Security analysis capabilities of different methods
It can be seen from Table 4 that the security analysis range of the method in this paper is much larger than those of the reference [2] and reference [3] methods, realizing the analysis and matching of data. Its matching range is large and it can match all security behaviors, because the method uses the generalization error algorithm to calculate the generalization error and thus obtain the optimal threshold, which matches the appropriate defense mechanism and enlarges the range of security analysis. The safety analysis error rate results of different methods are shown in Table 5. It can be seen from Table 5 that the security analysis error rate of the method in this paper is significantly lower than those of the reference [2] and reference [3] methods. Therefore, the method in this paper adopts deep learning analysis to better analyze the data and explore the internal strength of the data, thereby reducing the error rate of security analysis.
Table 4. Safety analysis matching range of different methods

Scope | Reference [2] method | Reference [3] method | This paper method
Decision tree | Cannot match | Can match | Can match
Support vector machines | Cannot match | Cannot match | Can match
Manual car driving identification system | Cannot match | Can match | Can match
Neural car driving automatic recognition system | Can match | Cannot match | Can match
Genetic algorithm | Cannot match | Can match | Can match
Data information | Can match | Cannot match | Can match
Car driving identification system information | Cannot match | Can match | Can match
Car driving identification system attack | Can match | Can match | Can match
Security | Cannot match | Can match | Can match
Table 5. Safety analysis error rate results of different methods

Different methods | Security analysis error rate
The reference [2] method | 3.6%
The reference [3] method | 2.5%
This paper method | 0.8%
8 Conclusion

In this paper, a deep learning-based safety analysis method for the automatic identification of car driving is proposed. A safety analysis model of the automatic identification system is established based on deep learning, and a heterogeneous data processing model is constructed through the identification of the abnormal communication safety data intensity. Based on the deep learning analysis process for security data, system operation data are accessed, the security data undergo stream processing and data mining, and system security data defense control is completed to realize system security analysis. The experimental results show that the method has strong security analysis ability, a large matching range and a low security analysis error rate. However, this study has only made a relatively rough analysis of the automatic identification system for automobile driving, and the processing of automatic identification has not been realized through programming. In the
future research, the research results need to be applied to practice and constantly improved, so as to improve the safety performance of the automatic identification system for automobile driving.

Acknowledgement. 1. Science and Technology Youth Program of Chongqing Education Commission in 2020: Design of A-pillar blind zone vision system for passenger cars (Project No.: KJQN202004602). 2. 2021 Chongqing Vocational Institute of Tourism Mass Innovation Space Project: Universal Vision Given by Science and Technology -- Development of Visual System for Blind Zone of Passenger Car A-Pillar (Project No.: 2021DC01).
References

1. Sun, X., Shi, W., Cheng, Q., et al.: An LED detection and recognition method based on deep learning in vehicle optical camera communication. IEEE Access 9, 80897–80905 (2021)
2. Chu, N., Zhang, S., Gao, Y., et al.: Safety analysis method of aero-engine systems based on Simscape model. J. Aeros. Power 36(04), 885–896 (2021)
3. Yang, H., Sun, Y., Ruan, H.: Research on safety assessment method for IMA system based on AADL and HiP-HOPS. Aeron. Comput. Techn. 49(06), 85–88 (2019)
4. Alzubi, O.A.: A deep learning-based Frechet and Dirichlet model for intrusion detection in IWSN. J. Intell. Fuzzy Syst. 42(2), 873–883 (2022)
5. Wang, Z., Liu, Y., He, D., et al.: Intrusion detection methods based on integrated deep learning model. Comput. Secur. 103, 102177 (2021)
6. Lu, J., Liu, X., Zhang, S., et al.: Research and analysis of electromagnetic trojan detection based on deep learning. Secur. Commun. Netw. 2020(4), 1–13 (2020)
7. Riyaz, B., Ganapathy, S.: A deep learning approach for effective intrusion detection in wireless networks using CNN. Soft. Comput. 24(22), 17265–17278 (2020)
8. Mallick, M., Chang, K.C., Arulampalam, S., et al.: Heterogeneous track-to-track fusion in 2D using sonar and radar sensors. In: 2019 22nd International Conference on Information Fusion (FUSION), pp. 1–8. IEEE (2019)
9. Singla, J.: Comparing ROC curve based thresholding methods in online transactions fraud detection system using deep learning. In: 2021 International Conference on Computing, Communication, and Intelligent Systems (ICCCIS), pp. 9–12. IEEE (2021)
10. Sufen, L., Xueli, Y.: Mathematical modeling and simulation of data buffer replacement algorithm in heterogeneous network. Comput. Simul. 38(12), 286–290 (2021)
Construction of Intelligent Evaluation Model for Electric Power Marketing Inspection Status Based on Cloud Measurement

Yanli Zhang1(B), Xinlei Zhang1, Lisai Wang1, Renkai Niu1, Wei Guo1, and Chuangji Zhang2

1 Metering Center of Jibei Power Grid Co., Ltd., Beijing 100045, China
[email protected]
2 Guangzhou Huali Science and Technology Vocational College, Guangzhou 511325, China
Abstract. The reform of the electricity market and the acceleration of smart grid construction impel electric power enterprises to change the traditional marketing mode, realize low-loss conversion between complex information, and reduce the loss in the process of information cognition. An intelligent evaluation model of electric power marketing inspection status based on cloud measurement is proposed. Starting from an overview of the cloud measure, this paper analyzes the audit risk of electric power marketing, determines the index weights of the electric power marketing audit state by constructing an evaluation index system for the audit state, and realizes the intelligent evaluation of the audit state by constructing an early warning model for it. The results of the example show that the model has certain feasibility and practicability in evaluating the electric power marketing inspection status. Through the cloud measurement analysis based on the cloud center of gravity, an objective expression of the electric power marketing status evaluation and early warning conclusion is obtained, which provides a new idea for electric power marketing management decision-making.

Keywords: Cloud Measure · Power Marketing · Audit Status · Evaluation Model · Risk Analysis
1 Introduction

With the deepening of power system reform, power marketing has changed from production-oriented to consumer-oriented, and the service mode of power enterprises is bound to change accordingly [1]. Power supply enterprises are basic, public-welfare enterprises involving the national economy and people's livelihood. Under the new situation, with the deepening of power enterprise reform, power supply enterprises should not only improve the quality of power products but also provide high-quality, universal and differentiated services. Service is the basic attribute of power supply enterprises, and providing quality services to customers is the key to their current and future development. The natural monopoly background of power supply enterprises and the “seller’s
market” in which the power market has long been in short supply have led power supply enterprises to form a style of not paying attention to users’ needs, which results in a production-oriented power marketing concept within enterprises [2]. In the future competitive power market environment, in order to achieve rapid development and establish a good corporate image, power supply enterprises must improve their service level. Under this background, the marketing business of power supply enterprises shoulders a heavy responsibility for the future development of enterprises; perfecting business processes, improving service quality and strengthening risk management are the future development directions of power marketing.

Li Dongsheng et al. [3] proposed an offensive-and-defensive CNN network intrusion detection model to address the network security problem caused by the increase of network traffic at power marketing terminals under the mobile Internet. Aiming at the low accuracy caused by traditional intrusion detection models having only a single classifier, the concept of a base classifier is introduced; to find the optimal weight of each base classifier, a Stackelberg game model is established and a genetic algorithm is used to find the optimal solution. In the experimental simulation, the NSL-KDD dataset is used to verify the effectiveness of the model. Zheng Qian et al. [4] put forward a comprehensive evaluation method combining the AHP-entropy method with fuzzy comprehensive evaluation to solve the low efficiency of investment analysis of power grid marketing service outlets, which had been limited to subjective experience or purely objective data evaluation. Aiming at the comprehensive evaluation of the rationality of investment in power grid marketing service outlets, they designed a complete evaluation index system, used the AHP-entropy method, which combines subjective and objective weighting, to calculate the index weights, and established a multi-objective fuzzy mathematical model suitable for evaluating investment rationality. The model can provide effective support for evaluating the investment rationality of marketing service outlets and effective auxiliary support for their merger and withdrawal. Wang P. et al. [5] segmented the point cloud data in flight space using ArcGIS software and a grid method according to the control accuracy of the UAV; by extracting the grid coordinate information and mapping it into a three-dimensional matrix, the model can be built accurately. Taking minimum energy consumption as the objective function, a path planning model based on UAV performance and natural wind constraints is established, and an improved ant colony algorithm is combined with the A* algorithm to obtain faster solutions: the improved ant colony algorithm quickly finds a near-optimal trajectory covering all viewpoints with minimum energy consumption, while the improved A* algorithm performs local planning of adjacent orbits passing through obstacles. Under the designed simulation environment, the simulation results show that the improved algorithm can save 62.88% energy compared with manual shooting of aerial transmission-line UAV inspection images while covering the same components.
In addition, it can save 9.33% energy compared with the shortest track, and the ACO-A* algorithm saves 96.6% time compared with the A* algorithm.

In today's booming market economy, competition among enterprises is becoming more and more fierce, and no enterprise can sit idly by and ignore the
development of the market and the changes in the competitive environment. Marketing is very important for every modern enterprise: without good marketing work, it is very difficult for enterprises to improve market share, sales growth rate and sustainable development ability. Although most enterprises realize the importance of marketing and vigorously carry out marketing work, few enterprises identify and manage the risks in marketing accordingly, thus exposing themselves to great risks. The power marketing work of power supply enterprises differs greatly from that of other enterprises: because of the particularity of the industry, its operation and development are related to the sound economic operation of the country and the security and stability of society, so the quality of its marketing work and the response to marketing risks have extraordinary economic, social and political significance. Based on the above research background, this paper applies cloud measurement to the construction of an intelligent evaluation model of power marketing inspection status, so as to adapt to China's power grid construction and power system reform. The innovation of the research content is to put forward an overview of cloud measurement, which reveals the dynamic development law of the electric power marketing audit state. Based on this law, the state evaluation and early warning analysis of all factors with non-zero aggregation degree can be realized, which provides a convenient means for internal marketing management and peer benchmarking of electric power enterprises and has certain practical value.
2 Design of Intelligent Evaluation Model for Electric Power Marketing Inspection Status

2.1 Overview of Cloud Measurement

First, suppose that x is a random realization, represented by a precise numerical value, of the qualitative concept D on the quantitative universe V. The certainty of x for D is v(x) ∈ [0, 1]; the distribution of x on V is the cloud model, and each individual x is a cloud droplet. A cloud quantitatively characterizes a qualitative concept through the expectation Ex, the entropy En and the hyper-entropy He. The expectation Ex is the domain value corresponding to the cloud center of gravity Z; the entropy En is determined jointly by the fuzziness and randomness of the concept and reflects the numerical range in which the qualitative concept is accepted on the universe; the hyper-entropy He is the uncertainty measure of the entropy En. According to the theory, cloud models take many forms, of which the most representative is the normal cloud, established on the basis of the normal distribution; this form is highly practical for natural and social phenomena [6]. On this basis, this paper mainly studies the normal cloud model. Figure 1 shows a quantitative example of the qualitative concept of a normal cloud.
Fig. 1. Quantification of qualitative concepts by normal clouds
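A normal cloud like the one in Fig. 1 can be produced with the standard forward normal cloud generator from the cloud-model literature: each droplet draws its own entropy from N(En, He^2) and its certainty from the corresponding Gaussian membership. The parameter values below merely echo the shape of Fig. 1 and are not taken from the paper.

```python
import numpy as np

def normal_cloud(Ex, En, He, n=2000, rng=None):
    """Forward normal cloud generator: n cloud droplets (x_i, mu_i) for a
    qualitative concept with expectation Ex, entropy En, hyper-entropy He."""
    if rng is None:
        rng = np.random.default_rng(0)
    En_i = rng.normal(En, He, n)                     # per-droplet entropy
    x = rng.normal(Ex, np.abs(En_i))                 # droplet positions
    mu = np.exp(-(x - Ex) ** 2 / (2 * En_i ** 2))    # certainty degrees
    return x, mu

x, mu = normal_cloud(Ex=25.0, En=3.0, He=0.3)
print(x[:3], mu[:3])
```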
2.2 Analyze the Risk of Electric Power Marketing Inspection

Power marketing means that, in the changing power market and focusing on the requirements of power customers, through the relationship between power supply and consumption, power users can use safe, reliable, qualified and economical power commodities, get thoughtful and satisfactory services, and return as much profit as possible to meet the needs of enterprise survival and development. Among them, the changing power market, the contradiction between power supply and demand, the process of establishing the supply-consumption relationship, the quality of power commodities, the verification and collection links [7, 8], and the level of service will all bring certain risks to power marketing. Therefore, it is necessary to conduct an in-depth and detailed analysis of power marketing risks. The marketing management of power supply enterprises is sorted out and analyzed using the analytic hierarchy process, and the evaluation index system of electric power marketing inspection status is established as shown in Table 1.

2.3 Determine the Weight of Power Marketing Inspection Status Evaluation Index

According to the established evaluation index system of electric power marketing inspection status, a probability quantification method for the inspection status is proposed, and the system is processed to obtain the importance of each node of the inspection status [9]. The weight of the power marketing inspection status is calculated from the obtained importance values. The specific operations are as follows. According to the evaluation index system, the weight information of the electric power marketing inspection status is obtained:

p_n = u_1 \times V_{os} + u_2 \times V_{sor} + u_3 \times V_{vul}    (1)
In Formula (1), u_1, u_2 and u_3 respectively represent the components of the weight vector of the power marketing inspection status, and V_{os}, V_{sor} and V_{vul} respectively represent the inspection status of power marketing under different levels of importance.
Table 1. Evaluation Index System of Power Marketing Inspection Status

Target layer | Main criteria layer | Subcriteria layer | Factor layer
Power marketing inspection status evaluation | Competitive power | Market share A11 | Market share A111; Market share change A112; Price competitiveness A113; Competitive concentration A114; Industry entry difficulty A115; Market demand intensity A116
 | | Sales A12 | Cost performance ratio of alternative energy A121; Growth rate of electricity sales A122; Growth rate of electricity sales revenue A123; Sales profit margin A124; Average price compliance rate A125; Electricity selling expense rate A126; Charge recovery rate A127; Growth rate of green energy sales A128; Growth rate of green energy sales revenue A129; Average green energy price compliance rate A1210; Proportion of green energy in total electricity sales A1211
 | | Product quality A13 | Line loss compliance rate A131; Frequency qualification rate A132; Voltage qualification rate A133
According to the calculation result of the above formula, the predicted value of the importance of the electric power marketing inspection status is:

r_{i,j} = b_n(i,d) \times b_n(d,l) \cdots b_n(k,j)    (2)

g(i,d), g(d,l), \cdots, g(k,j) = g_{min}(i,j)    (3)

wherein b_n is the predictive value of the importance of the electric power marketing inspection status. The calculated value of the importance of the inspection status is obtained by using formula (4):

u_{i,j} = \begin{cases} p_i \times k \times r_{i,j}, & i \neq j \\ p_i \times k, & i = j \end{cases}    (4)

According to the above calculation results, the importance information value of the electric power marketing inspection status is obtained, and the corresponding results are obtained by weighting the relevant indexes. By comparing the influence of the importance on the inspection status, the equation for predicting the electric power marketing inspection status is:

e_j = \sum_{i=g}^{m} u_{i,j}    (5)
On the basis of formula (5), the following formula is used to calculate the inspection status of power marketing:

Sa_j = r_j \times a_j    (6)

In the formula, r_j is the physical data in power marketing. The actual status of the inspection in power marketing is calculated as:

T = \{t_1, t_2, \ldots, t_n\}    (7)

Through calculation, the inspection status of each power marketing stage can be obtained:

Q_{EYI} = (q_{eyi,1}, q_{eyi,2}, \ldots, q_{eyi,3})    (8)

After obtaining the inspection status of electric power marketing, the importance of the inspection status can be calculated:

p_n = u_1 \times V_{os} + u_2 \times V_{SoT} + u_3 \times V_{CON}    (9)

In Formula (9), V_{os}, V_{SoT} and V_{CON} respectively represent the correlation between the power marketing inspection states. According to the calculation result of formula (9), the weight of the power marketing inspection status is obtained:

\omega = \operatorname{lb}\left(\frac{u_1 \times 2^c + u_2 \times 2^i + u_3 \times 2^a}{3}\right)    (10)
In Formula (10), c represents the level of the electric power marketing inspection status, i represents the integrity of the inspection status, and \omega and W = (u_1, u_2, u_3) describe the weight of the inspection status in different situations. Using the result of formula (10), the power marketing inspection status weight is calculated. In summary, according to the established evaluation index system, the weight information of the inspection status is obtained and the importance of the inspection status is predicted; by calculating the actual inspection status of electric power marketing, the importance of the inspection status is computed and the weight of each evaluation index is determined.

2.4 Building the Early Warning Model of Electric Power Marketing Inspection State Evaluation

In this paper, the electric power marketing status is first graded into seven levels, ranging from extremely inferior to superexcellent and corresponding, for convenience, to the numbers 1 to 7. On the basis of the evaluation set of the cloud model, the evaluation factors of the electric power marketing state are unified, so that the duality of the random and fuzzy factors is unified as a whole; in this way, a soft differentiation of the different levels is realized that matches the actual distribution of the data. Under the 1-7 electric power marketing state system, mathematical statistical analysis is used to divide the factors of each grade (mathematical statistics, a branch of mathematics based on probability theory, uses statistical methods to analyze data and derive their regularity), and the qualitative factors and the comprehensive state are defined as Z. After the grade division is completed, the evaluation sets of the sub-intervals with bilateral constraints are described by cloud models, based on the randomness and fuzziness of the boundaries and moderately expanded [10, 11]; for a partition interval with a single boundary, a half cloud is used for description, so that the constraint values at the left and right ends are taken as the respective expected values Ex, and 1/2 of the corresponding symmetric cloud entropy is taken as the respective entropy En (see the sketch below). The comprehensive evaluation cloud generator obtained from the normal cloud model is shown in Fig. 2.

In this paper, the qualitative factor is directly used as the input of the comprehensive evaluation cloud, and the cloud digital eigenvalues corresponding to the evaluation result are obtained; the quantitative factor directly takes its quantitative value as the input of its own evaluation cloud. On the basis of the maximum correlation theory, the qualitative evaluation is obtained, and then the comprehensive evaluation cloud digital eigenvalues of the quantitative factor are obtained through the qualitative factor measurement method. The intelligent evaluation steps of the power inspection status are shown in Fig. 3.
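A sketch of the interval-to-cloud conversion mentioned above: each bilaterally constrained grade interval [c1, c2] is mapped to cloud digital characteristics, using the common cloud-model convention Ex = (c1 + c2)/2 and En = (c2 - c1)/6. The 1-7 grade edges and the hyper-entropy ratio below are assumptions for illustration, not the paper's values.

```python
def grade_cloud(c1, c2):
    """Convert a bilaterally constrained grade interval [c1, c2] into
    cloud digital characteristics; (c2 - c1)/6 is the common '3En' rule."""
    Ex = (c1 + c2) / 2
    En = (c2 - c1) / 6
    He = En / 10          # assumed small hyper-entropy
    return Ex, En, He

# Hypothetical 1-7 grade intervals on a [0, 1] comment universe.
edges = [i / 7 for i in range(8)]
clouds = [grade_cloud(edges[k], edges[k + 1]) for k in range(7)]
for level, (Ex, En, He) in enumerate(clouds, start=1):
    print(f"level {level}: Ex={Ex:.3f}, En={En:.4f}, He={He:.5f}")
```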
Fig. 2. Cloud Generator for Power Marketing Comprehensive Evaluation
Fig. 3. Flow chart of intelligent evaluation of electric power inspection status
Assume that x is the number of power marketing inspection statuses to be evaluated and y is the number of importance indicators. Based on the comprehensive evaluation cloud digital eigenvalues, the steps to evaluate the power marketing inspection statuses are as follows:

Step 1: Take the quantitative factor as the input of its own evaluation cloud, and build the power marketing inspection status weight matrix. Let i_{ab} represent the evaluation value of the a-th (a = 1, 2, \cdots, x) power marketing inspection status under the b-th (b = 1, 2, \cdots, y) importance indicator. The weight matrix is:

I = \begin{bmatrix} i_{11} & i_{12} & \cdots & i_{1y} \\ i_{21} & i_{22} & \cdots & i_{2y} \\ \cdots & \cdots & \cdots & \cdots \\ i_{x1} & i_{x2} & \cdots & i_{xy} \end{bmatrix}    (11)
Step 2: Use cloud measurement theory to normalize all indicators:

j_{ab} = i_{ab}    (12)

In electric power marketing, each indicator of the importance of the inspection status evaluation is normalized through the qualitative factor calculation method, with the expression:

S_{ab} = \frac{j_{ab}}{\sqrt{\sum_{a=1}^{x} j_{ab}^2}}    (13)
Step 3: Quantify the importance of the inspection status in power marketing. In order to reduce the impact of individual indicators on the inspection status, the cloud measure is used to reveal the dynamic development law of the inspection status, and the status is quantified on this basis.

Step 4: Construct the decision matrix for the importance of the power marketing inspection status. Let u_b represent the importance weight obtained by the entropy method; the importance decision matrix is:

Q_{ab} = u_b S_{ab}    (14)
Step 5: Calculate the actual status and gain status of power marketing inspection. Let v^+ = (v_1^+, v_2^+, \cdots, v_y^+) represent the set of actual state coefficients of power marketing inspection. The actual state is calculated as:

v_b^+ = \max\{v_{ab}\}, \quad (b = 1, 2, \cdots, y)    (15)

Let v^- = (v_1^-, v_2^-, \cdots, v_y^-) represent the set of gain state coefficients of electric power marketing inspection. The gain state is calculated as:

v_b^- = \min\{v_{ab}\}, \quad (b = 1, 2, \cdots, y)    (16)
Step 6: Calculate the importance of the actual status of the electric power marketing inspection:

D_a^+ = \sqrt{\sum_{b=1}^{y} u_a \left(v_{ab} - v_b^+\right)^2}, \quad (a = 1, 2, \cdots, x)    (17)
Step 7: Calculate the importance of the electric power marketing inspection gain state:

D_a^- = \sqrt{\sum_{b=1}^{y} u_a \left(v_{ab} - v_b^-\right)^2}, \quad (a = 1, 2, \cdots, x)    (18)
Step 8: Calculate the closeness corresponding to the importance of the electric power marketing inspection status. The higher the importance of the inspection status, the better the inspection status and the better the comprehensive evaluation result. The closeness is calculated as:

W_b = \frac{D_b^-}{D_b^+ + D_b^-}    (19)
To sum up, the intelligent evaluation of power marketing inspection status has been completed, and the evaluation of power marketing inspection status is realized according to this principle.
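Steps 2-8 amount to a TOPSIS-style computation, which can be sketched as below. The sketch follows the standard TOPSIS reading of formulas (13)-(19), with a closeness value per evaluated status and the weights folded into the decision matrix; the matrix and the weights are toy values invented for the example.

```python
import numpy as np

def closeness(I, u):
    """Evaluate x statuses against y indicators with entropy weights u;
    returns one closeness value per status row."""
    S = I / np.sqrt((I ** 2).sum(axis=0))            # formula (13)
    Q = u * S                                        # formula (14)
    v_pos, v_neg = Q.max(axis=0), Q.min(axis=0)      # formulas (15)-(16)
    d_pos = np.sqrt(((Q - v_pos) ** 2).sum(axis=1))  # in the spirit of (17)
    d_neg = np.sqrt(((Q - v_neg) ** 2).sum(axis=1))  # in the spirit of (18)
    return d_neg / (d_pos + d_neg)                   # formula (19)

I = np.array([[0.80, 0.62, 0.91],
              [0.45, 0.70, 0.50],
              [0.66, 0.51, 0.73]])
u = np.array([0.5, 0.3, 0.2])
print(closeness(I, u))   # higher = closer to the ideal inspection status
```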
3 Example Analysis

This paper takes an electric power enterprise as the research object and uses the original data shown in Table 2 to analyze its power marketing competitiveness over half a year. The raw data come from the OpenCorporates data set, the largest open enterprise database in the world. It contains records of more than 170 million companies and covers diversified enterprise information, including basic registration information and announcements of important enterprise information such as financial situation, executive changes, and investment and financing; all announcement documents can be found in the details. The database supports web-based search and, through its application programming interface, provides many tools that help data investigators find, obtain and connect various enterprises in automated workflows.

According to the original data in Table 2, the correlation function and each factor evaluation cloud are used to obtain the status values of each factor, as shown in Table 3. The improved AHP method is used to determine the weight value of each factor of power marketing competitiveness; to facilitate flexible combined analysis of marketing competitiveness from multiple levels, the calculated weight values of the evaluation factors are shown in Table 4.

For any target object, the value of each evaluation factor in the ideal state has been determined, so the integrated cloud gravity center vector W^0 = (W_1^0, W_2^0, \cdots, W_m^0) in the ideal state can be calculated. The integrated cloud gravity center vector W^t = (W_1^t, W_2^t, \cdots, W_m^t) in the actual state of the target object at time t can also be calculated and normalized to obtain:

W_i^T = \begin{cases} (W_i^0 - W_i^t)/W_i^0, & W_i^t < W_i^0 \\ (W_i^t - W_i^0)/W_i^0, & W_i^t > W_i^0 \end{cases}, \quad i = 1, 2, \cdots, m    (20)
Table 2. Raw Data

Evaluation factors | January | February | March | April | May | June
A111 | 78.51 | 70.12 | 53.24 | 49.93 | 55.12 | 66.26
A112 | 4.25 | 3.02 | -1.32 | -2.21 | -0.98 | 3.43
A113 | 1.83 | 1.27 | 1.54 | 1.12 | 1.28 | 1.12
A114 | General | General | Lower | Lower | Low | Low
A115 | Difficulty | Difficulty | Difficult | General | General | General
A116 | Strong | Strong | General | General | General | General
A121 | General | General | Lower | Low | Low | Lower
A122 | 19.87 | 16.46 | 11.87 | 3.24 | 2.21 | -1.08
A123 | 28.98 | 27.16 | 19.33 | 3.23 | 0.98 | -4.13
A124 | High | Low | Low | General | General | Higher
A125 | 117.21 | 122.01 | 101.32 | 96.04 | 89.09 | 93.17
A126 | 6.17 | 8.43 | 2.74 | 1.57 | 5.45 | 3.21
A127 | 96.97 | 87.98 | 86.86 | 84.21 | 80.98 | 82.16
A128 | -2.98 | -3.05 | 3.22 | 3.21 | 4.79 | 4.76
A129 | -3.48 | -4.59 | 1.09 | 0.98 | 1.16 | 1.08
A1210 | 812.34 | 809.12 | 802.27 | 805.14 | 796.13 | 799.27
A1211 | 0.54 | 1.01 | 2.21 | 4.22 | 3.98 | 4.76
A131 | 102.4 | 98.6 | 102.2 | 106.4 | 104.6 | 99.8
A132 | 98.21 | 99.18 | 96.24 | 97.51 | 98.17 | 99.22
A133 | 99.02 | 97.89 | 98.99 | 99.87 | 98.65 | 98.24
The difference between the sum of the weighted values of the normalized integrated cloud barycenter vector and the ideal integrated cloud barycenter is the integrated cloud measure of the target object. The larger the comprehensive cloud measurement value, the less the object deviates from the ideal state and the more the actual state tends to the ideal state. The cloud measure is calculated as:

\vartheta = 1 - \sum_{i=1}^{m} \omega_i \times W_i^T    (21)
Among them, \omega_i is the weight value of the i-th evaluation factor, and W_i^T is the normalized cloud gravity center value of the i-th dimension. According to the above calculation, the comprehensive cloud measurement results of power marketing competitiveness are shown in Table 5.
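Formulas (20)-(21) can be computed directly, since both branches of (20) equal the relative absolute deviation from the ideal gravity center. The three-factor vectors below are invented for illustration and are not the Table 5 values.

```python
import numpy as np

def cloud_measure(W_ideal, W_actual, weights):
    """Integrated cloud measure of formulas (20)-(21): normalize the
    deviation of each cloud gravity center from its ideal value and
    subtract the weighted sum from 1."""
    W_ideal, W_actual = np.asarray(W_ideal), np.asarray(W_actual)
    W_T = np.abs(W_actual - W_ideal) / W_ideal   # both branches of (20)
    return 1.0 - float(np.dot(weights, W_T))     # formula (21)

theta = cloud_measure(W_ideal=[1.0, 1.0, 1.0],
                      W_actual=[0.86, 0.69, 0.46],
                      weights=[0.4, 0.35, 0.25])
print(f"integrated cloud measure = {theta:.4f}")
```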
Table 3. Status Values of Evaluation Factors of Power Marketing Competitiveness

Evaluation factors | January | February | March | April | May | June
A111 | Excellent | Excellent | Excellent | Better | Excellent | Excellent
A112 | Superexcellent | Excellent | Inferior | Extremely inferior | Inferior | Excellent
A113 | Superexcellent | Good | Excellent | Good | Good | Good
A114 | Good | Good | Better | Better | Excellent | Excellent
A115 | Excellent | Excellent | Better | Good | Good | Good
A116 | Better | Better | Good | Good | Good | Good
A121 | Good | Good | Better | Excellent | Excellent | Better
A122 | Superexcellent | Superexcellent | Superexcellent | Inferior | Inferior | Better
A123 | Superexcellent | Superexcellent | Excellent | Superexcellent | Inferior | Better
A124 | Excellent | Inferior | Inferior | Good | Good | Better
A125 | Superexcellent | Excellent | Good | Inferior | Inferior | Inferior
A126 | Better | Good | Better | Excellent | Inferior | Inferior
A127 | Superexcellent | Superexcellent | Excellent | Good | Good | Good
A128 | Inferior | Inferior | Better | Better | Good | Better
A129 | Inferior | Inferior | Good | Good | Good | Good
A1210 | Excellent | Excellent | Good | Good | Good | Extremely inferior
A1211 | Extremely inferior | Inferior | Inferior | Superexcellent | Better | Superexcellent
A131 | Good | Inferior | Good | Good | Good | Inferior
A132 | Better | Excellent | Good | Better | Better | Excellent
A133 | Better | Better | Better | Excellent | Good | Better
According to Table 5, the comprehensive cloud measure value of power marketing competitiveness is 0.610216192. Inputting this value (\vartheta = 0.610216192) into the power marketing comprehensive evaluation cloud generator activates the two cloud objects of levels 4 and 5 (i.e., the "good" and "better" statuses). According to the principle of maximum relevance, the comprehensive status of the power marketing competitiveness of this example is finally determined to be "better" (i.e., level 5), and its early warning status is indicated by a "green light". The model can realize the status evaluation and early warning analysis of all factors with non-zero aggregation degree, and achieves good results.
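The final grade activation can be sketched as follows: the measure \vartheta is fed to the activated grade clouds, the certainty under each grade is computed with the Gaussian membership of the normal cloud, and the grade with maximum relevance is selected. The digital characteristics assumed below for levels 4 and 5 are hypothetical; with these assumptions the sketch reproduces the "better" (level 5) conclusion.

```python
import math

def activation(theta, Ex, En):
    """Certainty degree of the measure theta under one grade cloud."""
    return math.exp(-(theta - Ex) ** 2 / (2 * En ** 2))

# Hypothetical digital characteristics of levels 4 ("good") and 5 ("better").
grades = {4: (0.50, 0.052), 5: (0.643, 0.052)}
theta = 0.610216192
for g, (Ex, En) in grades.items():
    print(f"level {g}: mu = {activation(theta, Ex, En):.3f}")
best = max(grades, key=lambda g: activation(theta, *grades[g]))
print("comprehensive status -> level", best)
```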
Table 4. Power Marketing Competitiveness Evaluation Factor Weight Value

Criteria level evaluation factor | Weight value | Factor level evaluation factor | Weight value
A11 | 0.1219 | A111 | 0.2169
 | | A112 | 0.3715
 | | A113 | 0.1872
 | | A114 | 0.0745
 | | A115 | 0.043
 | | A116 | 0.1078
A12 | 0.5584 | A121 | 0.0195
 | | A122 | 0.1641
 | | A123 | 0.1531
 | | A124 | 0.2023
 | | A125 | 0.1225
 | | A126 | 0.0586
 | | A127 | 0.0954
 | | A128 | 0.0548
 | | A129 | 0.0436
 | | A1210 | 0.0388
 | | A1211 | 0.0474
A13 | 0.3197 | A131 | 0.1428
 | | A132 | 0.4286
 | | A133 | 0.4286
Table 5. Comprehensive Cloud Measurement Analysis Results of Power Marketing Competitiveness

Evaluation factors | Ex | En | Normalization of cloud barycenter
A111 | 0.86356 | 0.1836 | 0.003607
A112 | 0.69101 | 0.1836 | 0.013993
A113 | 0.46227 | 0.22 | 0.012271
A114 | 0.5 | 0.3068 | 0.004541
A115 | 0.66799 | 0.2802 | 0.00174
A116 | 0.63488 | 0.3068 | 0.004798
A121 | 0.553059 | 0.3334 | 0.002786
A122 | 0.834904 | 0.2822 | 0.008661
A123 | 0.713101 | 0.1248 | 0.014043
A124 | 0.5 | 0.3068 | 0.322338
A125 | 0.487188 | 0.2767 | 0.020083
A126 | 0.428014 | 0.3334 | 0.010716
A127 | 0.649524 | 0.2417 | 0.010689
A128 | 0.552365 | 0.3467 | 0.007842
A129 | 0.431223 | 0.3467 | 0.007928
A1210 | 0.521157 | 0.255 | 0.00594
A1211 | 0.475969 | 0.2116 | 0.007941
A131 | 0.48314 | 0.3467 | 0.041243
A132 | 0.65962 | 0.2718 | 0.081463
A133 | 0.5942 | 0.3201 | 0.097119
Integrated cloud measure | 0.610216192

(Ex and En are the cloud digital characteristics of each factor.)
4 Conclusion

This paper studies the application of cloud measurement to the construction of an intelligent evaluation model of power marketing inspection status. The power marketing status evaluation and early warning model based on cloud measurement can derive the quantitative results and qualitative conclusions of status evaluation and early warning according to the principle of bottom-up, level-by-level evaluation, and can realize the status evaluation and early warning analysis of all factors with a non-zero degree of aggregation. It provides a convenient means for electric power enterprises to carry out internal marketing management and peer benchmarking, and has certain practical value. The example verifies the feasibility and rationality of the evaluation and early warning model proposed in this paper and provides a new idea for the comprehensive state analysis of power marketing. However, there are still many deficiencies in this study; in future research, we hope that this model can be applied to customer risk and supply risk, expanding its scope of application.
References

1. Tiernan, T.: Danly dissents on penalty, enforcement action against NRG power marketing. Foster Nat. Gas Rep. 3332, 5–7 (2021)
2. Loda, A., Romano, D.: Digital technology in the retail electric power market: state of the art and further challenges. Micro Macro Mark. 1(1), 221–250 (2021)
3. Li, D., He, Y., Peng, X., et al.: Application of CNN intrusion detection algorithm in power marketing system. Comput. Eng. Des. 42(6), 1585–1591 (2021)
4. Zheng, Q., Luo, Y., Liu, K., et al.: Research on fuzzy assessment of investment rationality of electric supply network marketing service outlets based on AHP-entropy method. Power Syst. Clean Energy 36(1), 41–45 (2020)
5. Wang, P., Zhang, C., Jiao, Y., et al.: Research and application of improving the field service ability of electric power marketing measuring mobile operating based on cloud computing technology. Int. J. High Perform. Syst. Archit. 9(2/3), 117–124 (2020)
6. Fang, Z., Yang, J., Yang, H., et al.: Transaction behavior analysis for power marketing customer based on time series modeling. J. Shenyang Univ. Technol. 42(2), 127–131 (2020)
7. Liu, C., Hu, C., Wang, P., et al.: Research on multi-dimensional credit evaluation model of electric power customers based on big data of marketing. J. Southwest Univ. (Nat. Sci. Ed.) (06), 198–208 (2022)
8. Yang, D., Ji, M., Lv, Y., et al.: Research on zoning, optimization, stability, and nonlinear control of wireless network in power grid communication. J. Interconnect. Netw. 22(6), 1–10 (2022)
9. Wang, L., Fu, H., Yang, Y., et al.: Storage mechanism of electricity marketing data based on blockchain. J. Chongqing Univ. (08), 156–164 (2021)
10. Shahid, M.A., Khan, T.M., Zafar, T., et al.: Health diagnosis scheme for in-service low voltage Aerial Bundled Cables using super-heterodyned airborne ultrasonic testing. Electr. Power Syst. Res. 180, 106162.1–106162.11 (2020)
11. Gao, C., Huang, Y., Ma, H.: Simulation of spatial load density distribution in medium voltage distribution network. Comput. Simul. 37(3), 56–60 (2020)
Author Index

A
An, Haoran 20

C
Cao, Zhiying 31, 362
Challagundla, Yagnesh 344
Chen, Liang 354
Chen, Xinlei 41, 370
Chen, Zhongwei 308, 438
Chintalapati, Lohitha Rani 344

D
Demirelli, Hasan 3
Ding, Jinshun 31

F
Fang, Shu 158
Fang, Weiqing 354

G
Gu, Yi 31
Guo, Dandan 31
Guo, Wei 485

H
Han, Lin 408
Huang, Zhengfeng 377

I
İşler, Yalçın 3

K
Kang, Yingjian 206

L
Li, Xiaofeng 308, 438
Li, Ying 277
Lian, Xiaoyu 31
Liang, Yaping 114, 128
Liao, Yi 408
Liu, Hui 190
Liu, Jianxin 277
Liu, Jing 277
Liu, Lu 144
Liu, Qiangjun 394
Liu, Siyu 20
Liu, Wenjing 174
Liu, Wenyu 144
Liu, Xinran 144
Liu, Yudan 144
Liu, Zhijun 220, 235
Lu, Taowen 31

M
Ma, Lei 206
Ma, Xingfei 248, 262
Maihebubai, Xiaokaiti 50
Mohanty, Sachi Nandan 344

N
Niu, Renkai 485

P
Pan, Xiafu 82
Pei, Pei 100
Peng, Zheng 277

Q
Qi, Di 114, 128
Qin, Liushi 377
Qu, Mingfei 220, 235

R
Ren, Jun 100

S
Song, Chao 424
Sun, Jiayi 424
Sun, Yanmei 65

V
Vanami, Naga Venkata Jashwanth 344

W
Wang, Lisai 485
Wang, Mengke 325
Wang, Ningning 394
Wang, Wuguang 248, 262
Wang, Yixin 354
Wang, Yue 424
Wei, Xiaogang 294, 468
Wu, Chujun 362
Wu, Yingjie 20

X
Xie, Liyi 190
Xu, Ya 65
Xu, Yanfa 144

Y
Yang, Jianxing 206
Yang, Xiaomei 453
Ye, Gang 41
Ye, Jiufeng 41, 370
Yi, Mancheng 277
Yong, Feng 20
Yu, Sifan 277
Yuan, Haidi 174
Yüce, Yılmaz Kemal 3

Z
Zhang, Chuangji 485
Zhang, Rong 294, 468
Zhang, Xin 220, 235
Zhang, Xinlei 485
Zhang, Yanli 485
Zhao, Dongming 41, 370
Zheng, Chun 82
Zhong, Wei 41, 370
Zhou, Haishan 144
Zhu, Fanghui 158
Zhu, Wei 354
Zhuo, Shufeng 206