Multimedia Technology and Enhanced Learning: 5th EAI International Conference, ICMTEL 2023, Leicester, UK, April 28-29, 2023, Proceedings [3] 303150576X, 9783031505768



Table of contents:
Preface
Organization
Contents – Part III
Data Mining and Machine Learning
Research on Pain Information Management System Based on Deep Learning
1 Introduction
2 Research Status
3 Research Content
4 Research Steps
4.1 Data Collection
4.2 Data Fusion
4.3 Knowledge Base Construction
4.4 Algorithm Model Construction
4.5 Algorithm Design
4.6 System Development
5 Functional Modules
6 Summary
References
LS-SVM Assisted Multi-rate INS UWB Integrated Indoor Quadrotor Localization Using Kalman Filter
1 Introduction
2 EKF Algorithm
3 Proposed Fusion Strategy
3.1 LWLR Algorithm
3.2 LS-SVM Algorithm
3.3 Fusion Algorithm
4 Experiment and Analysis
5 Conclusion
References
Design of Digital Image Information Security Encryption Method Based on Deep Learning
1 Introduction
2 Digital Image Analysis Based on Deep Learning
2.1 Deep Learning Network Fundamentals
2.2 Learning Function Selection
2.3 Image Information Cracking
3 Security Encryption of Digital Image Information
3.1 Scrambling
3.2 Loop Index Table
3.3 Security Parameters
4 Experimental Analysis
4.1 Experiment Preparation
4.2 Steps and Processes
4.3 Analysis of Experimental Results
5 Conclusion
References
Design of IoT Data Acquisition System Based on Neural Network Combined with STM32 Microcontroller
1 Introduction
2 IoT Data Acquisition System Design
2.1 Control Module Design
2.2 Design of Data Acquisition Terminal
2.3 Sensor Module Design
2.4 Data Classification Module
3 System Test
3.1 Indoor Environment Data Collection
3.2 Data Acquisition Performance Test
3.3 System Communication Rate Test
3.4 Power Consumption Test of Data Acquisition Terminal
4 Conclusion
References
Research on Network Equilibrium Scheduling Method of Water Conservancy Green Product Supply Chain Based on Compound Ant Colony Algorithm
1 Introduction
2 Design of Network Equilibrium Scheduling Method for Water Conservancy Green Products Supply Chain
2.1 Build the Logistics Distribution Model of Water Conservancy Green Product Supply Chain
2.2 Obtain the Competitive Relationship Between Supply Chains
2.3 Prioritization Weight of Supply Chain Network Scheduling
2.4 Optimize the Supply Chain Network of Green Water Products
2.5 Construct the Network Equilibrium Scheduling Model of Water Conservancy Green Product Supply Chain
3 Comparative Analysis of Experiments
3.1 Build Supply Chain Logistics Network
3.2 Setting Evaluation Indicators
3.3 Result Analysis
4 Conclusion
References
Weak Association Mining Algorithm for Long Distance Wireless Hybrid Transmission Data in Cloud Computing
1 Introduction
2 Data Preprocessing of Long-Distance Wireless Hybrid Transmission Under Cloud Computing
2.1 Noise Data Elimination
2.2 Data Conversion
2.3 Data Truth Screening
3 Weak Association Mining of Mixed Transmission Data in Cloud Computing
3.1 Principle of Relationship Matching Between Data
3.2 Mining Model Construction
3.3 Mining Process of Transmission Data Weak Association
4 Experimental Analysis
4.1 Experimental Platform
4.2 Experimental Data Set
4.3 Standardized Processing of Experimental Data
4.4 Determination of Experimental Indicators
4.5 Analysis of Data Integration Results
4.6 Analysis of Data Mining Results
5 Conclusion
References
Detection Method of Large Industrial CT Data Transmission Information Anomaly Based on Association Rules
1 Introduction
2 Abnormality Detection of Large Industrial CT Data Transmission Information
2.1 Correlation Analysis
2.2 Denoising of Association Rules
2.3 Information Anomaly Detection
3 Experimental Test
3.1 Experimental Environment
3.2 Test Method
3.3 Test Result
4 Conclusion
References
Design of Intelligent Integration System for Multi-source Industrial Field Data Based on Machine Learning
1 Introduction
2 The Overall Structure Design of the System
3 System Hardware Structure Design
3.1 Multi-Source Industrial Field Data Synchronization Device
3.2 Field Data Intelligent Retrieval Engine
3.3 Integrated Channel Circuit Design
3.4 Real-Time Database Construction
4 Research on Key Technologies of the System
4.1 Integrated Task Scheduling Based on Machine Learning
4.2 Data Topology Adjustment
4.3 Integration Process Design
4.4 Integrated Data Integrity Detection Based on Machine Learning
5 System Test
5.1 Experimental Platform
5.2 Experimental Indicators
5.3 Analysis of Data Integration Effect
5.4 Data Integration Integrity Check
6 Conclusion
References
Link Transmission Stability Detection Based on Deep Learning in Opportunistic Networks
1 Introduction
2 Opportunistic Network Link Transmission Stability Detection
2.1 Link Node Model
2.2 Network Transmission Sending Barcode Status Detection
2.3 Link Backpack Box Simulation
2.4 Building a Multi-Layer Deep Learning Detection Model
3 Analysis of Experimental Results
4 Conclusion
References
Intelligent Mining Method of New Media Art Image Features Based on Multi-scale Rule Set
1 Introduction
2 Intelligent Mining of New Media Art Image Features
2.1 New Media Art Image Denoising
2.2 New Media Art Image Segmentation
2.3 Feature Mining
3 Experimental Test
3.1 Experimental Data Set
3.2 Experimental Environment
3.3 Evaluation Criteria for Experimental Results
3.4 Experimental Results
4 Conclusion
References
Data Security Sharing Method of Opportunistic Network Routing Nodes Based on Knowledge Graph and Big Data
1 Introduction
2 Research on Data Security Sharing Method of Routing Nodes
2.1 Analysis of Opportunistic Network Routing Protocols
2.2 Routing Node Affects the Propagation Model Construction
2.3 Routing Node Data Publishing/subscription Rules Formulation
2.4 Construction and Implementation of Data Security Sharing Architecture
3 Experiment and Result Analysis
3.1 Experiment Preparation Stage
3.2 Analysis of Experimental Results
4 Conclusion
References
Security Awareness Method of Opportunistic Network Routing Protocol Based on Deep Learning and Knowledge Graph
1 Introduction
2 Opportunistic Network Routing Protocol Security Awareness
2.1 Opportunistic Network Routing Protocol Data Collection
2.2 Security Perception
3 Experimental Tests
3.1 Experimental Method Design
3.2 Safety Perception Accuracy Test
4 Conclusion
References
Research on Pedestrian Intrusion Detection Method in Coal Mine Based on Deep Learning
1 Introduction
2 Pedestrian Intrusion Detection Model in Coal Mine
2.1 Preprocessing of Pedestrian Images in Coal Mines
2.2 Image Feature Extraction
2.3 Intrusion Detection Based on RBM
3 Experimental Tests
3.1 Experimental Dataset
3.2 Detection Indicators
3.3 Experimental Results and Analysis
4 Conclusion
References
Personalized Recommendation Method of College Art Education Resources Based on Deep Learning
1 Introduction
2 Definition of Deep Learning Framework of Art Education in Colleges and Universities
2.1 Basic Learning Architecture
2.2 Learning Architecture Model Upgrade
2.3 Improvement of Learning Architecture Model
3 Personalized Recommendation of Art Education Resources
3.1 Distribution of Education Weight
3.2 Resource Similarity
3.3 Recommended List
3.4 Analysis of Personalized Recommendation Characteristics
4 Example Analysis
4.1 Experimental Preparation
4.2 Test Steps
4.3 Data Processing and Experimental Results
5 Conclusion
References
Global Planning Method of Village Public Space Based on Deep Neural Network
1 Introduction
2 Feature Extraction of Village Public Space Global Planning
3 Overall Planning of Village Public Space Land
4 Experimental Study
5 Conclusion
References
A Multi Stage Data Attack Traceability Method Based on Convolutional Neural Network for Industrial Internet
1 Introduction
2 Multi Stage Consensus Mechanism of Industrial Internet
2.1 Convolutional Neural Network
2.2 Classified Training
2.3 Encryption Algorithm
3 Data Attack Traceability Scheme
3.1 Data Traceability and Tracking Framework
3.2 Workflow Metadata
3.3 Traceability Automatic Capture Mechanism
4 Example Analysis
4.1 Variable Description and Experimental Process
4.2 Experimental Results
5 Conclusion
References
Machine Learning Based Method for Mining Anomaly Features of Network Multi Source Data
1 Introduction
2 Network Multi-source Data Learning Abnormal Feature Mining
2.1 Construction of Multi-source Data Feature Recognition Model
2.2 Multi Source Data Feature Anomaly Recognition Algorithm
2.3 Implementation of Network Multi-source Data Mining
3 Analysis of Experimental Results
4 Conclusion
References
Data Anti-jamming Method for Ad Hoc Networks Based on Machine Learning Algorithm
1 Introduction
2 Anti-interference Method of Ad Hoc Network Data
2.1 Classification of Data Interference Types
2.2 Design of Anti-jamming Algorithm for Ad Hoc Network Data
2.3 Realization of Anti-interference of Ad Hoc Network Data
3 Analysis of Experimental Results
4 Conclusion
References
A Data Fusion Method of Information Teaching Feedback Based on Heuristic Firefly Algorithm
1 Introduction
2 Establish Data Entity Network to Be Integrated
2.1 Calculation of Fusion Data Correlation
2.2 Building an Entity Framework for Data Fusion
3 Data Fusion Method Design
3.1 Parameter Dynamic Control of Firefly Algorithm
3.2 Design Data Fusion Algorithm
4 Experimental Research
4.1 Experimental Environment and Experimental Data Set
4.2 Data Processing to Be Merged
4.3 Selection of Experimental Parameters
4.4 Time Efficiency of Data Fusion
5 Conclusion
References
Research on Database Language Query Method Based on Cloud Computing Platform
1 Introduction
2 Extracting the Semantic Web Hierarchy
3 Identify Potential Associated Features of Database Words and Sentences
4 Construction of Ontology Annotation Model Based on Cloud Computing Platform
5 Design Language Query Methods
6 Simulation Test
6.1 Test Preparation
6.2 Test Results
7 Conclusion
References
Reliability Evaluation Method of Intelligent Transportation System Based on Deep Learning
1 Introduction
2 Research on Reliability of Intelligent Transportation System
2.1 Construction of Evaluation Index System
2.2 Evaluation Index Weight Calculation
2.3 Evaluation Model Based on Deep Learning
3 Evaluation Method Testing and Analysis
3.1 Research Object and Background
3.2 Evaluation Index System
3.3 Comprehensive Weight of Evaluation Indicators
3.4 Reliability Analysis
4 Conclusion
References
Forecasting Method of Power Consumption Information for Power Users Based on Cloud Computing
1 Introduction
2 Prediction Model of Power Consumption Information for Power Users
2.1 Cloud Computing Technology Design Prediction Model Framework
2.2 Power Consumption Data Preprocessing
2.3 Selection of Influencing Factors for Power Consumption of Power Users
2.4 Prediction Model Construction
3 Method Validation
3.1 Power Consumption Data
3.2 Model Evaluation Criteria
3.3 Prediction Results
3.4 Prediction Accuracy Comparison
4 Conclusion
References
Power Consumption Behavior Analysis Method Based on Improved Clustering Algorithm of Big Data Technology
1 Introduction
2 Research on Analysis Method of User’s Electricity Consumption Behavior
2.1 Analysis on Influencing Factors of Users’ Electricity Consumption Behavior
2.2 Research on Characteristics of User Power Load
2.3 Clustering of Users’ Electricity Consumption Behavior
2.4 Analysis and Realization of Users’ Electricity Consumption Behavior
3 Experiment and Result Analysis
3.1 Experiment Preparation Stage
3.2 Analysis of Experimental Results
4 Conclusion
References
Anomaly Detection of Big Data Based on Improved Fast Density Peak Clustering Algorithm
1 Introduction
2 Big Data Anomaly Clustering Framework
3 Abnormal Feature Extraction of Big Data
3.1 Feature Extraction
3.2 Abnormal Threshold Detection
4 Optimal Scheduling Based on Improved Fast Density Peak Clustering Algorithm
4.1 Optimizing Clustering
4.2 Scheduling Model
5 Design of Multi Source Big Data Cross Source Scheduler
6 Experimental Study
7 Conclusion
References
Data Privacy Access Control Method Based on Ciphertext Policy Attribute-Based Encryption Algorithm
1 Introduction
2 Literature Review
3 Data Privacy Access Control
3.1 Formulation of Privacy Policy
3.2 Trusted Implementation of Privacy Policy
3.3 Privacy Access Control
4 Experimental Tests
4.1 Experimental Environment
4.2 Scalability, Fault Tolerance, No Third Party and Obtaining Authorization Test
4.3 Anti-attack Rate Test
4.4 Privacy Leakage Risk Rate Test
5 Conclusion
References
Evaluation Method of Enterprise Circular Economy Development Level Based on AHP Fuzzy Inference
1 Introduction
2 Research on the Evaluation Method of Enterprise Circular Economy Development Level
2.1 In-depth Exploration of the Circular Economy of Enterprises
2.2 Primary Selection of Evaluation Indicators for the Development Level of Circular Economy
2.3 Screening and Processing of Evaluation Indicators
2.4 Realization of Circular Economy Development Level Evaluation
3 Experiment and Result Analysis
3.1 Experiment Preparation Stage
3.2 Analysis of Experimental Results
4 Conclusion
References
Research on Performance Evaluation of Industrial Economic Management Based on Improved Machine Learning
1 Introduction
2 Related Deployment of Industrial Economic Management Performance Evaluation System
2.1 Performance Evaluation Indicator Model
2.2 Storage Architecture Design and Deployment
2.3 Network Environment Design and Deployment
3 Building an Industrial Economic Management Performance Evaluation System that Improves Machine Learning
3.1 Selection of Evaluation Indicators and System Construction
3.2 Evaluation Algorithm Implementation
4 Performance Test
4.1 Test Platform and Constraints
4.2 Fluency Test Results and Analysis
4.3 Sensitivity Test Results and Analysis
5 Conclusion
References
Application of Big Data Processing Technology in Power Consumption Information Acquisition
1 Introduction
2 Design of Power Consumption Information Acquisition Method
2.1 Construction of Electricity Network Model
2.2 Installation of Power Information Collector and Processor
2.3 Analog Digital Conversion of Electric Power Information Acquisition Signal
2.4 Synchronous Control Electricity Information Acquisition Procedure
2.5 Applying Big Data Processing Technology to Statistics of Electricity Information Parameters
2.6 Pretreatment of Initial Acquisition Power Information
2.7 Realize Power Consumption Information Acquisition
3 Application Experiment Analysis
3.1 Setting Up the Operation Environment of Power Consumption Information Acquisition Method
3.2 Selection of Power Consumption Information Acquisition Object
3.3 Preparing Power Consumption Information Collection Samples
3.4 Describe the Application Experiment Process
3.5 Setting Quantitative Test Indicators for Application Experiments
3.6 Application Experiment Results and Analysis
4 Conclusion
References
Task Scheduling Method of Wireless Sensor Multimedia Big Data Parallel Computing Based on Bee Colony Algorithm
1 Introduction
2 Wireless Sensing Multimedia Big Data Parallel Computing Task Scheduling
2.1 Wireless Signal Processing
2.2 Parallel Programming
2.3 Data Scheduling of Parallel Computing Tasks
3 Experimental Test
3.1 Construction of the Experimental Platform
3.2 Experimental Results and Analysis
4 Conclusion
References
Author Index


Bing Wang Zuojin Hu Xianwei Jiang Yu-Dong Zhang (Eds.)

534

Multimedia Technology and Enhanced Learning 5th EAI International Conference, ICMTEL 2023 Leicester, UK, April 28–29, 2023 Proceedings, Part III


Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering

Editorial Board Members
Ozgur Akan, Middle East Technical University, Ankara, Türkiye
Paolo Bellavista, University of Bologna, Bologna, Italy
Jiannong Cao, Hong Kong Polytechnic University, Hong Kong, China
Geoffrey Coulson, Lancaster University, Lancaster, UK
Falko Dressler, University of Erlangen, Erlangen, Germany
Domenico Ferrari, Università Cattolica Piacenza, Piacenza, Italy
Mario Gerla, UCLA, Los Angeles, USA
Hisashi Kobayashi, Princeton University, Princeton, USA
Sergio Palazzo, University of Catania, Catania, Italy
Sartaj Sahni, University of Florida, Gainesville, USA
Xuemin Shen, University of Waterloo, Waterloo, Canada
Mircea Stan, University of Virginia, Charlottesville, USA
Xiaohua Jia, City University of Hong Kong, Kowloon, Hong Kong
Albert Y. Zomaya, University of Sydney, Sydney, Australia


The LNICST series publishes ICST’s conferences, symposia and workshops. LNICST reports state-of-the-art results in areas related to the scope of the Institute. The types of material published include:

• Proceedings (published in time for the respective event)
• Other edited monographs (such as project reports or invited volumes)

LNICST topics span the following areas:

• General Computer Science
• E-Economy
• E-Medicine
• Knowledge Management
• Multimedia
• Operations, Management and Policy
• Social Informatics
• Systems

Bing Wang · Zuojin Hu · Xianwei Jiang · Yu-Dong Zhang Editors

Multimedia Technology and Enhanced Learning 5th EAI International Conference, ICMTEL 2023 Leicester, UK, April 28–29, 2023 Proceedings, Part III

Editors

Bing Wang, Nanjing Normal University of Special Education, Nanjing, China
Zuojin Hu, Nanjing Normal University of Special Education, Nanjing, China
Xianwei Jiang, Nanjing Normal University of Special Education, Nanjing, China
Yu-Dong Zhang, University of Leicester, Leicester, UK

ISSN 1867-8211    ISSN 1867-822X (electronic)
Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering
ISBN 978-3-031-50576-8    ISBN 978-3-031-50577-5 (eBook)
https://doi.org/10.1007/978-3-031-50577-5

© ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2024

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.

Paper in this product is recyclable.

Preface

We are delighted to introduce the proceedings of the fifth edition of the 2023 European Alliance for Innovation (EAI) International Conference on Multimedia Technology and Enhanced Learning (ICMTEL). This conference brought together researchers, developers and practitioners from around the world who are leveraging and developing multimedia technologies and enhanced learning. The theme of ICMTEL 2023 was “Human Education-Related Learning and Machine Learning-Related Technologies”.

The technical program of ICMTEL 2023 consisted of 119 full papers, including 2 invited papers, in oral presentation sessions at the main conference tracks. The conference tracks were: Track 1, AI-Based Education and Learning Systems; Track 2, Medical and Healthcare; Track 3, Computer Vision and Image Processing; and Track 4, Data Mining and Machine Learning. Aside from the high-quality technical paper presentations, the technical program also featured three keynote speeches and three technical workshops. The three keynote speeches were by Steven Li from Swansea University, UK, “Using Artificial Intelligence as a Tool to Empower Mechatronic Systems”; Suresh Chandra Satapathy from Kalinga Institute of Industrial Technology, India, “Social Group Optimization: Analysis, Modifications and Applications”; and Shuihua Wang from the University of Leicester, UK, “Multimodal Medical Data Analysis”. The workshops were organized by Xiaoyan Jiang and Xue Han from Nanjing Normal University of Special Education, China, and Yuan Xu and Bin Sun from the University of Jinan, China.

Coordination with the Steering Committee Chairs, Imrich Chlamtac, De-Shuang Huang, and Chunming Li, was essential for the success of the conference. We sincerely appreciate their constant support and guidance. It was also a great pleasure to work with such an excellent organizing committee, and we appreciate their hard work in organizing and supporting the conference. In particular, we thank the Technical Program Committee, led by our TPC Co-chairs, Shuihua Wang and Xin Qi, who completed the peer-review process of technical papers and put together a high-quality technical program. We are also grateful to the Conference Manager, Ivana Bujdakova, for her support, and to all the authors who submitted their papers to the ICMTEL 2023 conference and workshops.

We strongly believe that the ICMTEL conference provides a good forum for researchers, developers and practitioners to discuss all science and technology aspects that are relevant to multimedia technology and enhanced learning. We also expect that future events will be as successful and stimulating as indicated by the contributions presented in this volume.

October 2023

Bing Wang Zuojin Hu Xianwei Jiang Yu-Dong Zhang

Organization

Steering Committee

Imrich Chlamtac, Bruno Kessler Professor, University of Trento, Italy
De-Shuang Huang, Tongji University, China
Chunming Li, University of Electronic Science and Technology of China (UESTC), China
Lu Liu, University of Leicester, UK
M. Tanveer, Indian Institute of Technology, Indore, India
Huansheng Ning, University of Science and Technology Beijing, China
Wenbin Dai, Shanghai Jiaotong University, China
Wei Liu, University of Sheffield, UK
Zhibo Pang, ABB Corporate Research, Sweden
Suresh Chandra Satapathy, KIIT, India
Yu-Dong Zhang, University of Leicester, UK

General Chair

Yudong Zhang, University of Leicester, UK

General Co-chairs

Ruidan Su, Shanghai Advanced Research Institute, China
Zuojin Hu, Nanjing Normal University of Special Education, China

Technical Program Committee Chairs

Shuihua Wang, University of Leicester, UK
Xin Qi, Hunan Normal University


Technical Program Committee Co-chairs

Bing Wang, Nanjing Normal University of Special Education, China
Yuan Xu, University of Jinan, China
Juan Manuel Górriz, Universidad de Granada, Spain
M. Tanveer, Indian Institute of Technology, Indore, India
Xianwei Jiang, Nanjing Normal University of Special Education, China

Local Chairs

Ziquan Zhu, University of Leicester, UK
Shiting Sun, University of Leicester, UK

Workshops Chair

Yuan Xu, Jinan University, China

Publicity and Social Media Chair

Wei Wang, University of Leicester, UK

Publications Chairs

Xianwei Jiang, Nanjing Normal University of Special Education, China
Dimas Lima, Federal University of Santa Catarina, Brazil

Web Chair

Lijia Deng, University of Leicester, UK


Technical Program Committee

Abdon Atangana, University of the Free State, South Africa
Amin Taheri-Garavand, Lorestan University, Iran
Arifur Nayeem, Saidpur Government Technical School and College, Bangladesh
Arun Kumar Sangaiah, Vellore Institute of Technology, India
Carlo Cattani, University of Tuscia, Italy
Dang Thanh, Hue College of Industry, Vietnam
David Guttery, University of Leicester, UK
Debesh Jha, Chosun University, Korea
Dimas Lima, Federal University of Santa Catarina, Brazil
Frank Vanhoenshoven, University of Hasselt, Belgium
Gautam Srivastava, Brandon University, Canada
Gonzalo Napoles Ruiz, University of Hasselt, Belgium
Hari Mohan Pandey, Edge Hill University, UK
Hong Cheng, First Affiliated Hospital of Nanjing Medical University, China
Jerry Chun-Wei Lin, Western Norway University of Applied Sciences, Bergen, Norway
Juan Manuel Górriz, University of Granada, Spain
Liangxiu Han, Manchester Metropolitan University, UK
Mackenzie Brown, Perdana University, Malaysia
Mingwei Shen, Hohai University, China
Nianyin Zeng, Xiamen University, China

Contents – Part III

Data Mining and Machine Learning

Research on Pain Information Management System Based on Deep Learning . . . 3
Qi Shen, Yixin Wang, Weiqing Fang, Liqiang Gong, Zhijun Chen, and Jianping Li

LS-SVM Assisted Multi-rate INS UWB Integrated Indoor Quadrotor Localization Using Kalman Filter . . . 11
Dong Wan, Yuan Xu, Chenxi Li, and Yide Zhang

Design of Digital Image Information Security Encryption Method Based on Deep Learning . . . 19
Licheng Sha, Peng Duan, Xinchen Zhao, Kai Xu, and Shaoqing Xi

Design of IoT Data Acquisition System Based on Neural Network Combined with STM32 Microcontroller . . . 33
Xinyue Chen

Research on Network Equilibrium Scheduling Method of Water Conservancy Green Product Supply Chain Based on Compound Ant Colony Algorithm . . . 46
Weijia Jin and Shao Gong

Weak Association Mining Algorithm for Long Distance Wireless Hybrid Transmission Data in Cloud Computing . . . 62
Simayi Xuelati, Junqiang Jia, Shibai Jiang, Xiaokaiti Maihebubai, and Tao Wang

Detection Method of Large Industrial CT Data Transmission Information Anomaly Based on Association Rules . . . 79
Xiafu Pan and Chun Zheng

Design of Intelligent Integration System for Multi-source Industrial Field Data Based on Machine Learning . . . 92
Shufeng Zhuo and Yingjian Kang

Link Transmission Stability Detection Based on Deep Learning in Opportunistic Networks . . . 110
Jun Ren, Ruidong Wang, Huichen Jia, Yingchen Li, and Pei Pei


Intelligent Mining Method of New Media Art Image Features Based on Multi-scale Rule Set . . . 127
Ya Xu and Yanmei Sun

Data Security Sharing Method of Opportunistic Network Routing Nodes Based on Knowledge Graph and Big Data . . . 139
Xucheng Wan and Yan Zhao

Security Awareness Method of Opportunistic Network Routing Protocol Based on Deep Learning and Knowledge Graph . . . 155
Yan Zhao and Xucheng Wan

Research on Pedestrian Intrusion Detection Method in Coal Mine Based on Deep Learning . . . 169
Haidi Yuan and Wenjing Liu

Personalized Recommendation Method of College Art Education Resources Based on Deep Learning . . . 184
Sen Li and Xiaoli Duan

Global Planning Method of Village Public Space Based on Deep Neural Network . . . 200
Xiaoli Duan and Sen Li

A Multi Stage Data Attack Traceability Method Based on Convolutional Neural Network for Industrial Internet . . . 215
Yanfa Xu and Xinran Liu

Machine Learning Based Method for Mining Anomaly Features of Network Multi Source Data . . . 228
Lei Ma, Jianxing Yang, and Jingyu Li

Data Anti-jamming Method for Ad Hoc Networks Based on Machine Learning Algorithm . . . 243
Yanning Zhang and Lei Ma

A Data Fusion Method of Information Teaching Feedback Based on Heuristic Firefly Algorithm . . . 259
Yuliang Zhang and Ye Wang

Research on Database Language Query Method Based on Cloud Computing Platform . . . 273
Shao Gong, Caofang Long, and Weijia Jin


Reliability Evaluation Method of Intelligent Transportation System Based on Deep Learning . . . 287
Xiaomei Yang

Forecasting Method of Power Consumption Information for Power Users Based on Cloud Computing . . . 304
Chen Dai, Yukun Xu, Chao Jiang, Jingrui Yan, and Xiaowei Dong

Power Consumption Behavior Analysis Method Based on Improved Clustering Algorithm of Big Data Technology . . . 318
Zheng Zhu, Haibin Chen, Shuang Xiao, Jingrui Yan, and Lei Wu

Anomaly Detection of Big Data Based on Improved Fast Density Peak Clustering Algorithm . . . 332
Fulong Zhong and Tongxi Lin

Data Privacy Access Control Method Based on Ciphertext Policy Attribute-Based Encryption Algorithm . . . 349
Chuangji Zhang, Weixuan Lin, and Yanli Zhang

Evaluation Method of Enterprise Circular Economy Development Level Based on AHP Fuzzy Inference . . . 364
Aiqing Wang and Jie Gao

Research on Performance Evaluation of Industrial Economic Management Based on Improved Machine Learning . . . 381
Jie Gao and Aiqing Wang

Application of Big Data Processing Technology in Power Consumption Information Acquisition . . . 393
Jin Wang, Yukun Xu, Chao Jiang, Jingrui Yan, Bo Ding, and Qiusheng Lin

Task Scheduling Method of Wireless Sensor Multimedia Big Data Parallel Computing Based on Bee Colony Algorithm . . . 411
Tongxi Lin and Fulong Zhong

Author Index . . . 427

Data Mining and Machine Learning

Research on Pain Information Management System Based on Deep Learning

Qi Shen, Yixin Wang(B), Weiqing Fang, Liqiang Gong, Zhijun Chen, and Jianping Li

Changshu No.2 People’s Hospital, Changshu 215500, Jiangsu, China
[email protected]

Abstract. The hospital is accelerating the construction of the “trinity” of electronic medical records, smart services, and smart management, and the standardization of hospital information systems is an important part of hospital construction and development, because information technology can improve overall work efficiency, standardize technical processes, reduce management costs, enhance the image of the hospital, and bring other benefits. The pain management information system is a postoperative analgesia management system based on deep learning for clinical information processing and on wireless networks, providing wireless comprehensive pain assessment, interactive patient pain education, wireless comprehensive pain follow-up, individualized patient self-controlled analgesia, wireless real-time PCA monitoring, analgesic equipment maintenance, wireless analgesic sign monitoring, simple airway management and first aid, pain information management and analysis, branch-hospital quality control management, and other functions.

Keywords: Deep learning · Internet of Things · Pain Management

1 Introduction

Promoting the high-quality development of hospitals is an inherent requirement for the reform and development of public hospitals in the new era. In 2021, the “Opinions on Promoting the High-Quality Development of Public Hospitals” issued by the General Office of the State Council (Guobanfa [2021] No. 18) proposed to lead the new trend of high-quality development of public hospitals and to promote the construction of “trinity” smart hospitals integrating electronic medical records, smart services, and smart management [1]. At present, all regions are actively exploring smart hospital construction plans in order to better improve the quality and efficiency of medical services. However, during the “14th Five-Year Plan” period, how to promote the construction of smart hospitals to achieve the high-quality development goals of public hospitals has become a hot topic in hospital management circles. This paper analyzes the connotation and construction ideas of smart hospitals, and explores the key issues in their construction in combination with actual needs, so as to provide a decision-making basis for the planning, construction and development of smart hospitals under high-quality development [2].


At present, most tertiary hospitals in China have completed the construction of clinical diagnosis and treatment, patient service, clinical management, operation management and other systems based on the hospital information platform, and have started a new round of smart hospital construction, focusing on building an online and offline integrated, convenient medical service system covering the pre-hospital, in-hospital, and post-hospital stages. Internet-based medical care has developed vigorously, and the smart hospital service network has begun to take shape. This paper focuses on the subject of pain, introduces computer artificial intelligence, and builds a deep learning model through the mining and processing of clinical data to carry out the informatization of pain information management [3–5].

2 Research Status

Theoretically, almost 95% of acute pain, 80%–85% of cancer pain and 50%–60% of chronic pain can be effectively controlled by existing drug treatment, not counting other available therapies; in practice, however, this is not the case. For acute pain, especially postoperative pain, although highly effective analgesic drugs and advanced analgesic techniques have been developed and applied clinically, the foreign literature reports that 50% to 70% of postoperative pain is not relieved effectively [6]. According to survey results, people suffering from chronic pain account for 30% of the total population in developed countries, and more than half of patients with moderate to severe chronic pain in the United States have not received adequate analgesic measures. Zhao Jijun conducted a pain survey of more than 5,000 outpatients and found that 40% of the patients had pain symptoms, and in more than 50% of these pain was the main symptom. Although a considerable number of patients had taken some analgesic measures, more than 50% still suffered from the physical and psychological distress caused by pain, which affects their quality of life [7–9].

Artificial intelligence can use computer systems to simulate the characteristics of human intelligence processes, and can even achieve self-correction through supervised or unsupervised algorithms. These characteristics enable it to accelerate the realization of personalized and predictive health care. With the diversification of data sources and the accelerated accumulation of data volume, health care data are multi-dimensional, complex and heterogeneous [10, 11]. Such data include imaging data, data acquired by wearable devices, electronic health records, proteomic data, and genomic data. In the face of high-dimensional medical data, many classic machine learning algorithms such as random forest and support vector machine can no longer perform optimally. Deep learning can process data using multi-layer structures composed of multiple nonlinear transformations, and can extract abstract features at different levels from complex sample data, which enables it to mine the underlying regularities and pattern characteristics from medical data. At this stage, deep learning has been widely used in the field of health care [12–15]. Following the great success of deep learning models in image classification tasks in 2012, deep learning has achieved many remarkable results in medical imaging diagnosis, including the segmentation and classification of different tissues, organs or lesions. In some radiology diagnostic


tasks, the performance achieved by deep learning has reached or even exceeded the level of human experts. In recent years, thanks to advances in natural language processing technology, deep learning models have also shown excellent performance in processing non-image data such as signals and texts. Wearable devices and electronic health records contain a large amount of valuable non-image data that can be used for patient health monitoring, disease risk prediction or disease diagnosis, and deep learning research in these areas is gradually increasing. Deep learning can not only diagnose diseases from conventional imaging data, but also discover new diagnostic directions or provide new insights. For example, retinal images are routinely used to diagnose eye diseases such as diabetic retinopathy, but some studies have found that deep learning models using retinal images can also accurately diagnose Alzheimer’s disease, assess cardiovascular disease risk, predict gender, and so on; this type of work likely represents a new class of scientific discovery methods [16]. In addition, preoperative CT imaging is also helpful for noninvasive EGFR mutation prediction. Recently, deep learning has also achieved great success in micro-level research such as protein structure prediction, RNA three-dimensional structure prediction, and exploration of internal cell structure, driving research progress in diagnosis, drug development and related areas [17–20].

3 Research Content

The pain management information system is an analgesia management system that uses IoT sensors for data collection, extracts patient clinical data from the hospital information system, processes clinical information and vital-sign data through deep learning models, and transmits data over a wireless network. It comprises functional modules and hardware for wireless comprehensive pain assessment, interactive patient pain education, wireless comprehensive pain follow-up, individualized patient-controlled analgesia, wireless real-time PCA monitoring, analgesic equipment maintenance, wireless analgesic sign monitoring, simple airway management and first aid, pain information management and analysis, and branch-hospital quality control management. For the integration of postoperative analgesia management and painless delivery, wireless network technology is used to send patient-controlled analgesia (PCA) data, pain evaluation, sign monitoring, pain education, and equipment information to a dedicated server; the integration platform saves, optimizes and aggregates the data, and the patient’s analgesia data and the physician’s pain management work are monitored in real time on monitoring platforms in the doctor’s office, at the nurses’ station, in the operating room and elsewhere.

4 Research Steps

4.1 Data Collection

Information collection mainly includes the medical history (location, nature and course of the pain, inducing factors, aggravating or relieving factors, accompanying symptoms, past history, etc.), physical examination (general examination, cranial nerve, motor nerve, sensory nerve and joint examination, etc.), laboratory tests, imaging tests, and other auxiliary tests.
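A minimal sketch of how one collected record might be structured in code is shown below. The field names and example values are illustrative assumptions only, not the system’s actual data schema.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class PainRecord:
    """Illustrative container for one patient's collected pain information."""
    patient_id: str
    # Medical history
    pain_location: str
    pain_nature: str                                  # e.g. "burning", "stabbing"
    inducing_factors: List[str] = field(default_factory=list)
    relieving_factors: List[str] = field(default_factory=list)
    accompanying_symptoms: List[str] = field(default_factory=list)
    past_history: Optional[str] = None
    # Examinations
    physical_exam: dict = field(default_factory=dict)  # e.g. {"cranial_nerve": "normal"}
    lab_tests: dict = field(default_factory=dict)
    imaging_tests: dict = field(default_factory=dict)
    pain_score: Optional[float] = None                 # e.g. 0-10 numeric rating scale

# Example record with fabricated values
record = PainRecord(
    patient_id="P0001",
    pain_location="lower back",
    pain_nature="dull",
    inducing_factors=["prolonged sitting"],
    pain_score=6.0,
)
```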


4.2 Data Fusion

Through the collection and filing of the above data, a large amount of data can be obtained. Big data technology transmits and stores it in the cloud to form a pain knowledge database, providing direction and a basis for the diagnosis and treatment of individual, targeted pain pathological factors.

4.3 Knowledge Base Construction

The pain knowledge base stores the knowledge used in the diagnosis of pain pathological factors, including basic information, rules and other relevant information. The diagnosis of pain pathological factors is carried out by simulating the way of thinking of clinical experts through the knowledge in the pain knowledge base; the pain knowledge base therefore determines the quality of the diagnosis. In the process of diagnosis and syndrome differentiation, production rules are mostly used. Production rules generally take a forward IF–THEN form: when the preconditions are met, the system derives the corresponding diagnosis and treatment prescriptions. Deep learning simulates the process of diagnosis by clinical experts based on symptoms, signs, laboratory tests, imaging tests, etc., and the content of the pain knowledge base is used to build the relevant relationships between medical history, symptoms, signs, laboratory tests, imaging tests and other findings. The specific rules are combined with clinical practice and designed using the project combination method after discussions with external experts and members of the research team; the system is then continuously updated and improved according to the deep learning algorithm. The position of each item is determined according to the experts’ average importance score: an average score above 4.5 points marks a main item, 4–4.5 points a secondary item, and below 4 points an auxiliary item.

4.4 Algorithm Model Construction

As a mathematical model based on probabilistic reasoning, deep learning is used here as a new method for the expression of, and reasoning about, uncertain knowledge. At present, deep learning has been widely used in intelligent systems dealing with uncertain information, including medical diagnosis, expert systems and other fields. Therefore, this system uses deep learning as the algorithm model in the diagnosis of pain pathological factors. The model mainly studies methods for expressing and reasoning about uncertain knowledge. It is a directed acyclic graph with probabilistic annotations, which can predict unknown events by probabilistic methods based on knowledge base knowledge and existing data. The structure is composed of variable nodes and arrows connecting these nodes. Each node represents a random event variable, and the arrows represent the interdependence between nodes. The appearance of a cause node may lead to a certain result, which is expressed as a conditional probability rather than an inevitable outcome. The conditional probability table is the collection of conditional probabilities used to describe each variable node. The model can quickly obtain the probability combination of each basic event according to the network structure and the conditional probability tables describing the relationships between nodes.
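As a toy illustration of how a probability combination is obtained from a network structure and its conditional probability tables, consider the following sketch. The two-node network and all numerical values are invented for illustration and are not taken from the system described here.

```python
# Toy two-node network: Cause -> Finding.
# P(Cause) and P(Finding | Cause) are invented values for illustration only.
p_cause = {True: 0.3, False: 0.7}
p_finding_given_cause = {True: 0.9, False: 0.1}    # P(Finding=True | Cause)

# Joint probability of the basic event (Cause=True, Finding=True):
p_joint = p_cause[True] * p_finding_given_cause[True]          # 0.27

# Posterior P(Cause=True | Finding=True) via Bayes' rule:
p_finding = sum(p_cause[c] * p_finding_given_cause[c] for c in (True, False))  # 0.34
p_posterior = p_joint / p_finding                              # approx. 0.794

print(p_joint, p_posterior)
```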


4.5 Algorithm Design

The essence of diagnosing pain pathological factors is to obtain the patient’s specific “pain pathological factor diagnosis” from various symptoms, signs, laboratory tests, imaging tests and other auxiliary examinations. There may or may not be a relationship between different symptoms, signs, laboratory tests, and imaging tests, and deep learning can describe the relationship between these findings and the “pain pathological factor diagnosis” well. This system therefore uses deep learning to study the relationship between symptoms, signs, laboratory tests, imaging and other tests and the “pain pathological factor diagnosis”. Pain symptoms, signs, laboratory examinations, imaging examinations and other medical record information sorted out from previous literature and clinical records, together with the “pain pathological factor diagnosis”, serve as the nodes in the deep learning model. According to the rules in the pain knowledge base, the system repeatedly matches the input symptoms, signs, laboratory tests, imaging tests, etc., and from these rules the corresponding pain pathological factor diagnosis is obtained. The system records and stores medical history information, diagnosis and syndrome differentiation results to form a new database. The computer uses the deep learning algorithm to learn automatically, analyzes the relationship and data characteristics between the medical history information and the “pain pathological factor diagnosis”, determines the probability distribution from the statistics of the data, and forms the pain pathological factor diagnosis structure and its prior probability table. As data accumulate, the computer continually updates and improves this structure and its prior probability table, which improves the accuracy of its application. See Fig. 1.

Fig. 1. Architecture Diagram of Pain Information Management System Based on Deep Learning

4.6 System Development

The system is developed in the Java programming language, based on the B/S architecture of Java EE, and supports multi-platform data docking and browser access. The backend adopts mainstream Java frameworks: Spring is used for transaction management, which facilitates decoupling, simplifies development, supports AOP programming and declarative transactions, eases program testing, simplifies the integration of other frameworks, and reduces the difficulty of using the Java EE API; MyBatis offers relatively high flexibility for low-level data operations; Maven manages the jar packages. The architecture uses Spring MVC. Spring Web MVC is an implementation of the service-to-worker pattern: the front-end controller is the DispatcherServlet, the application controller is split into a handler mapper (HandlerMapping) for processor management and a view resolver (ViewResolver) for view management, and the page controller/action/processor is an implementation of the Controller interface (it can also be any POJO class). It supports localization parsing, theme parsing, file uploading and so on, provides a very flexible data verification, formatting, and data binding mechanism, and offers strong convention-over-configuration support (the principle of convention priority).
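To make the rule-matching step of Sects. 4.3 and 4.5 concrete, the following is a small, hypothetical sketch (in Python rather than the Java stack described above) of forward IF–THEN rule matching over input findings. The rules and findings are invented for illustration; the real system derives its rules from the pain knowledge base.

```python
# Hypothetical forward IF-THEN rule matching; rules and findings are invented.
rules = [
    # (IF: all preconditions are present, THEN: candidate pain pathological factor)
    ({"radiating_leg_pain", "positive_straight_leg_raise"}, "lumbar disc herniation"),
    ({"joint_swelling", "morning_stiffness"}, "inflammatory arthritis"),
    ({"burning_pain", "allodynia"}, "neuropathic pain"),
]

def match_rules(findings: set) -> list:
    """Repeatedly match input findings against the rule base and collect diagnoses."""
    return [conclusion for preconditions, conclusion in rules
            if preconditions <= findings]          # subset test: all preconditions met

findings = {"radiating_leg_pain", "positive_straight_leg_raise", "night_pain"}
print(match_rules(findings))                       # ['lumbar disc herniation']
```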

5 Functional Modules

Infusion monitoring: visualized and centralized monitoring of infusion status, with real-time data collection and timely alarms. It can improve the safety of infusion and save medical management costs.

Pain assessment: uses deep learning algorithms and mobile terminal assessment tools to transmit pain assessment values remotely. It can reduce report registration, form electronic medical records, and improve work efficiency.

Quality control management: provides statistics on the analgesic effect and workload of postoperative analgesia. It can assist the department in optimizing the quality management of postoperative analgesia.

Patient-side application: prompts patients to self-control analgesia and guides patients to self-evaluate their pain. It can improve the quality of analgesia and reduce insufficient analgesia.


6 Summary

Effective management of pain information includes research on the interaction between equipment and the Internet of Things and the realization and optimization of postoperative analgesia and pain management processes; based on the above components, analgesia is achieved and effective services and assistance are provided to patients. By using artificial intelligence, deep learning, Internet of Things technology and other informatization methods, hospitals can improve overall work efficiency, standardize technical processes, reduce management costs, enhance their image, and serve patients better.

References

1. Gupta, J., Pathak, S., Kumar, G.: Deep learning (CNN) and transfer learning: a review. J. Phys. Conf. Ser. 2273(1), 012029 (2022)
2. Kum, S., Oh, S., Yeom, J., Moon, J.: On designing interfaces to access deep learning inference services. In: International Conference on Ubiquitous and Future Networks (ICUFN 2022), vol. 2022-July, pp. 89–91 (2022)
3. Goh, H.-A., Ho, C.-K., Abas, F.S.: Front-end deep learning web apps development and deployment: a review. Appl. Intell. 53(12), 15923–15945 (2022)
4. Boulila, W., Driss, M., Alshanqiti, E., Al-sarem, M., Saeed, F., Krichen, M.: Weight initialization techniques for deep learning algorithms in remote sensing: recent trends and future perspectives. In: ICACIn 2021. Advances in Intelligent Systems and Computing, vol. 1399, pp. 477–484 (2022)
5. Portillo, R., Aizel, A., Sanz, D., Díaz, A.: System for reminiscence therapy based on Telegram and deep learning. In: CISTI, vol. 2022-June (2022)
6. Marchenko, R., Borremans, A.: Smart hospital medical equipment: integration into the enterprise architecture. In: Digitalization of Society, Economics and Management: A Digital Strategy Based on Post-pandemic Developments. Lecture Notes in Information Systems and Organisation, vol. 53, pp. 69–84 (2022)
7. Nguyen, V., Ngo, T.D.: Single-image crowd counting: a comparative survey on deep learning-based approaches. Int. J. Multimedia Inf. Retrieval 9(2), 63–80 (2020)
8. Kavitha, P.M., Muruganantham, B.: A study on deep learning approaches over malware detection. In: Proceedings of the 2020 IEEE International Conference on Advances and Developments in Electrical and Electronics Engineering (ICADEE), p. 5 (2020)
9. Al-Eidan, R.M., Al-Khalifa, H., Al-Salman, A.: Deep-learning-based models for pain recognition: a systematic review. Appl. Sci. 10(17), 5984 (2020)
10. Ozturk, M.M.: On tuning deep learning models: a data mining perspective. arXiv, November 19 (2020)
11. Hu, C., Hu, Y.-H.: Data poisoning on deep learning models. In: 2020 International Conference on Computational Science and Computational Intelligence (CSCI), pp. 628–632 (2020)
12. Santra, D., Sadhukhan, S., Basu, S.K., Das, S., Sinha, S., Goswami, S.: Scheme for unstructured knowledge representation in medical expert system for low back pain management. In: Smart Intelligent Computing and Applications. Smart Innovation, Systems and Technologies, vol. 105, pp. 33–41 (2019)
13. Santra, D., Basu, S.K., Mandal, J.K., Goswami, S.: Rough set based lattice structure for knowledge representation in medical expert systems: low back pain management case study. arXiv, October 2 (2018)
14. Wangzhou, A., et al.: A pharmacological interactome platform for discovery of pain mechanisms and targets. SSRN, May 22 (2020)


15. Yu, M., et al.: EEG-based tonic cold pain assessment using extreme learning machine. Intell. Data Anal. 24(1), 163–182 (2020)
16. Moore, R.J., Smith, R., Qi, L.: Using computational ethnography to enhance the curation of real-world data (RWD) for chronic pain and invisible disability use cases. ACM SIGACCESS Accessibility Comput. (127), 4 (2020)
17. Hina, S., Dominic, P., Dhanapal, D.: Information security policies’ compliance: a perspective for higher education institutions. J. Comput. Inf. Syst. 60(3), 201–211 (2020)
18. Yaosheng, W.: Network information security risk assessment based on artificial intelligence. J. Phys. Conf. Ser. 1648, 042109 (2020)
19. Chao, W., Xiangyu, J.: The researches on public service information security in the context of big data. In: ISBDAI 2020, pp. 86–92, 28 April (2020)
20. Kang, M., Anat, H.: Benchmarking methodology for information security policy (BMISP): artifact development and evaluation. Inf. Syst. Front. 22(1), 221–242 (2020)

LS-SVM Assisted Multi-rate INS UWB Integrated Indoor Quadrotor Localization Using Kalman Filter

Dong Wan, Yuan Xu(B), Chenxi Li, and Yide Zhang

School of Electrical Engineering, University of Jinan, Jinan 250022, China
xy [email protected]

Abstract. This paper focuses on the problem of positioning accuracy degradation caused by the inconsistent sampling frequencies of the INS/UWB navigation system. In order to give the INS and UWB the same effective sampling frequency, this paper proposes a data fusion algorithm combining the extended Kalman filter (EKF), locally weighted linear regression (LWLR), and the least squares support vector machine (LS-SVM). First, during the UWB data sampling interval, the UWB data are fitted by LWLR, and the fitted UWB data and INS data are fused by the EKF. Then, the estimation error of the EKF is optimized by the LS-SVM. Finally, the simulation results indicate that the data fusion algorithm restrains the divergence problem within the UWB sampling interval, and that the positioning accuracy of the indoor quadrotor INS/UWB navigation system is increased by the algorithm.

Keywords: INS/UWB · least squares support vector machine · extended Kalman filter

1 Introduction

As a commonly used positioning system, the inertial navigation system (INS) does not rely on the transmission and reception of signals. The linear motion, angular motion, and magnetic field strength and direction of the carrier are measured by the inertial measurement unit (IMU), and the position, speed and attitude are calculated to realize the positioning of the carrier. It has the advantages of strong anti-interference, short-term high precision and good concealment; however, its positioning accuracy degrades as time goes on [3]. Owing to its obvious advantages in indoor positioning scenarios, ultra wide band (UWB) is widely used in indoor positioning. However, the positioning accuracy of UWB is affected by non-line-of-sight (NLOS) error [1]; besides, its environmental adaptability is not strong and its positioning performance is unstable. Because of the complementary advantages and disadvantages of INS and UWB, the most common practice in the field of indoor positioning is to combine UWB and INS to build an INS/UWB navigation system [5].


In order to improve the positioning accuracy of integrated navigation systems, navigation strategies based on an improved extreme learning machine and an improved deep belief network have been proposed [6]. A particle-based Kalman filter [10], a complementary Kalman filter [4], and a cubature Kalman filter [2] have been proposed to improve robustness. Further, a data fusion method combining machine learning algorithms has been proposed to improve the performance of UWB positioning systems in NLOS situations [9]. To tune the hyperparameters of a model faster and more effectively, a PSO-guided self-tuning convolutional neural network has been proposed [7]. To improve the speed and accuracy of distinguishing infected patients from healthy populations, a deep learning model (WE-SAJ) using wavelet entropy, two-layer FNNs and the adaptive Jaya algorithm has been proposed [8].

The above papers focus only on how to improve the performance of machine learning algorithms. In fact, there exists a difference in sampling frequency between INS and UWB, which leads to a decrease in the positioning accuracy of the INS/UWB system within the UWB sampling interval. To solve this problem, this paper presents a data fusion algorithm comprising the extended Kalman filter (EKF), locally weighted linear regression (LWLR), and the least squares support vector machine (LS-SVM). First, the UWB data within the UWB data sampling interval are predicted by LWLR. Then, the fitted UWB data and the INS data are fused by the EKF. Finally, the bias of the EKF is decreased by the LS-SVM. The main contribution of this paper is a new strategy for sampling frequency synchronization and data fusion error compensation using LWLR and LS-SVM. The remainder of this paper is organized as follows: the EKF algorithm is introduced in Sect. 2; Sect. 3 gives a detailed introduction of the fusion algorithm; Sect. 4 shows the simulation results; and the conclusion is given in Sect. 5.

2 EKF Algorithm

A discrete nonlinear system is usually modelled as

$$\begin{cases} x_{k+1} = f(x_k, w_k, k) \\ z_k = g(x_k, k) + h_k \end{cases} \tag{1}$$

In Eq. (1), $w_k$ and $h_k$ denote uncorrelated white noises whose variances are $Q_k$ and $R_k$, $x_k$ denotes an $n \times 1$ state vector, $z_k$ denotes an $m \times 1$ measurement vector, $f$ denotes the nonlinear state function, and $g$ denotes the measurement function. Like the traditional Kalman filter, the EKF completes one filtering cycle in two steps, a one-step prediction and a measurement update. The one-step prediction is as follows: if the state estimate after the measurement update at time $k$ is $\hat{x}_k(+)$, let

$$F_k = \left.\frac{\partial f}{\partial x}\right|_{x_k = \hat{x}_k(+)} \tag{2}$$

Then the one-step prediction equations are

$$\begin{cases} \hat{x}_{k+1}(-) = f[\hat{x}_k(+), k] \\ P_{k+1}(-) = F_k P_k(+) F_k^T + Q_k \end{cases} \tag{3}$$


where $\hat{x}_{k+1}(-)$ represents the one-step state prediction at time $k+1$, $P_{k+1}(-)$ denotes the estimation error covariance matrix of the one-step prediction, and $P_k(+)$ represents the estimation error covariance matrix after the measurement update at time $k$. After the measurement is obtained, the measurement update is carried out as follows:

$$G_{k+1} = \frac{\partial g}{\partial x} \tag{4}$$

The measurement update equations are

$$\begin{cases} K_{k+1} = P_{k+1}(-) G_{k+1}^T \left[ G_{k+1} P_{k+1}(-) G_{k+1}^T + R_{k+1} \right]^{-1} \\ \hat{x}_{k+1}(+) = \hat{x}_{k+1}(-) + K_{k+1}\left\{ z_{k+1} - g[\hat{x}_{k+1}(-), k+1] \right\} \end{cases} \tag{5}$$

In Eq. (5), $K$ represents the gain matrix.
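As a concrete illustration of the two-step cycle above, the following is a minimal sketch of one EKF prediction/measurement-update pass, assuming the nonlinear functions f and g and their Jacobians are supplied by the caller; the covariance update after the gain computation uses the standard form, which the excerpt above does not spell out.

```python
import numpy as np

def ekf_step(x_est, P, z, f, g, F_jac, G_jac, Q, R):
    """One EKF cycle following Eqs. (2)-(5): one-step prediction, then measurement update."""
    # One-step prediction, Eqs. (2)-(3)
    F = F_jac(x_est)                      # Jacobian of f at the updated estimate
    x_pred = f(x_est)
    P_pred = F @ P @ F.T + Q
    # Measurement update, Eqs. (4)-(5)
    G = G_jac(x_pred)                     # Jacobian of g at the predicted state
    K = P_pred @ G.T @ np.linalg.inv(G @ P_pred @ G.T + R)
    x_upd = x_pred + K @ (z - g(x_pred))
    P_upd = (np.eye(len(x_upd)) - K @ G) @ P_pred   # standard covariance update (not shown above)
    return x_upd, P_upd, K
```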

3 Proposed Fusion Strategy

3.1 LWLR Algorithm

A non-negligible problem of linear regression is that the model may underfit, because it seeks the unbiased estimate with the least mean square error. Obviously, an under-fitting model will not achieve the best prediction effect, so some methods introduce a small amount of bias into the estimate in order to reduce the mean squared prediction error; LWLR is the most common of these. The algorithm assigns weights based on the distance between the point to be predicted and each training point: the closer the distance, the greater the weight. A weighted mean-square-error minimization model is then solved. For most regression algorithms the parameters are fixed once learning is complete; in LWLR, however, every prediction updates the regression coefficients. LWLR is therefore a real-time algorithm that approximates a global fit through local fitting, achieving better generalization performance and prediction accuracy. LWLR uses a kernel function to assign the weights; the type can be freely chosen and is usually Gaussian. As shown in Eq. (6), the kernel used in this paper is the Gaussian kernel:

$$w(i, i) = \exp\!\left( -\frac{|x(i) - x|}{2k^2} \right) \tag{6}$$

where $x$ is the point to be predicted, $x(i)$ is a sample point, and $k$ is a user-specified parameter. The selection of $k$ should be appropriate; otherwise underfitting or overfitting will occur.
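The following is a minimal sketch of LWLR prediction under the description above: the Gaussian kernel of Eq. (6) weights the training points, a weighted least-squares problem is solved, and the regression coefficients are recomputed for every query point. The function and parameter names are illustrative and not taken from the paper.

```python
import numpy as np

def lwlr_predict(x_query, X, y, k=1.0):
    """Locally weighted linear regression: weight training points with the
    Gaussian kernel of Eq. (6), solve the weighted least-squares problem,
    and predict at x_query. The regression is re-solved for every query."""
    m = X.shape[0]
    Xb = np.hstack([np.ones((m, 1)), X])              # design matrix with bias term
    xq = np.hstack([1.0, np.ravel(x_query)])
    d = np.linalg.norm(X - x_query, axis=1)           # distance to each sample point
    W = np.diag(np.exp(-d / (2.0 * k ** 2)))          # Eq. (6): closer points, larger weights
    theta = np.linalg.pinv(Xb.T @ W @ Xb) @ (Xb.T @ W @ y)
    return xq @ theta
```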

3.2 LS-SVM Algorithm

The LS-SVM algorithm is a useful tool for obtaining input–output relations in classification, regression analysis and nonlinear estimation. Its mathematical model can be described by the following steps. First, a training dataset $\{d_i, e_i\}$, $i = 1, 2, \ldots, m$, is given. Then the objective function and constraints are determined as follows:

$$\min_{V,a,c} J_p(V, c) = \frac{1}{2} V^T V + \frac{r}{2} \sum_{i=1}^{m} c_i \tag{7}$$

$$\text{s.t.} \quad e_i = V^T \varphi(d_i) + a + c_i \tag{8}$$

where $V$ denotes the normal vector of the hyperplane, $c$ denotes the square of the prediction error, $r$ represents the penalty factor of the prediction error, $a$ is a constant, and $\varphi(d_i)$ denotes the kernel mapping. The cost function of LS-SVM, founded on the theory of structural risk minimization (SRM), is then the following Lagrangian function:

$$L(V, a, c, b) = J_p(V, c) - \sum_{i=1}^{m} b_i \left[ V^T \varphi(d_i) + a + c_i - e_i \right] \tag{9}$$

where $b_i$ is a Lagrange multiplier. The partial derivatives of Eq. (9) with respect to $V$, $a$, $c_i$ and $b_i$ are calculated and set to zero,

$$\frac{\partial L}{\partial V} = 0, \quad \frac{\partial L}{\partial a} = 0, \quad \frac{\partial L}{\partial c_i} = 0, \quad \frac{\partial L}{\partial b_i} = 0 \tag{10}$$

and a set of linear equations is obtained, whose solution gives

$$e = \sum_{i=1}^{m} K(d, d_i) + a \tag{11}$$

where $K(d, d_i)$ represents the kernel function. As with the LWLR algorithm, the kernel function must be chosen carefully; the kernel used in this paper is the Gaussian kernel

$$K(d, d_i) = \exp\!\left( -\frac{(d - d_i)^2}{2\sigma^2} \right) \tag{12}$$

where $\sigma$ represents the kernel parameter. The Gaussian kernel is very sensitive to $\sigma$, and the choice of $\sigma$ affects the accuracy and generalization performance of the model; an inappropriate $\sigma$ therefore causes underfitting or overfitting.
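As an illustration of how Eqs. (7)–(12) are used in practice, the sketch below solves the linear system that the stationarity conditions of Eq. (10) lead to — the standard LS-SVM dual, in which the kernel terms of the prediction are weighted by the Lagrange multipliers $b_i$ — with the Gaussian kernel of Eq. (12). The penalty factor r and kernel parameter sigma are illustrative values, not the paper's settings.

```python
import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    """Gaussian kernel of Eq. (12) between the rows of A and B."""
    d2 = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvm_fit(D, e, r=10.0, sigma=1.0):
    """Solve the linear system arising from Eq. (10) (the LS-SVM dual),
    returning the bias a and the Lagrange multipliers b."""
    m = D.shape[0]
    K = gaussian_kernel(D, D, sigma)
    A = np.zeros((m + 1, m + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(m) / r          # r plays the role of the penalty factor
    sol = np.linalg.solve(A, np.concatenate(([0.0], e)))
    return sol[0], sol[1:]                  # bias a, multipliers b

def lssvm_predict(D, a, b, x, sigma=1.0):
    """Predict via the kernel expansion, weighting each K(x, d_i) by b_i."""
    k = gaussian_kernel(np.atleast_2d(x), D, sigma).ravel()
    return k @ b + a
```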

3.3 Fusion Algorithm

To make the INS and UWB sampling frequencies consistent, we present a new data fusion algorithm in this section. First, the UWB data within the UWB sampling interval are predicted by LWLR. Then, the INS data and the fitted UWB data are fused by the EKF. Finally, the bias of the EKF is compensated by the LS-SVM. The data fusion algorithm consists of a training phase and a prediction phase, shown in Fig. 1 and Fig. 2, and is introduced in detail as follows. 1) Because the sampling frequencies of UWB and INS differ, there are moments when only INS data are available and UWB data are not.

Fig. 1. The training phase

Therefore, the INS data and UWB data can be represented as $X_{INS}(k)$ and $X_{UWB}(k')$ respectively, where $k'$ denotes the UWB epochs, $k' = 1, 1+n, \cdots$ ($n = 6$ in this paper), and $k = 1, 2, \cdots, k'$. EKF-1 and EKF-2 both denote the EKF; they share the same parameters and structure, including the state covariance and the process and measurement noise covariances. The core of the training phase consists of two parts, LWLR training and LS-SVM training. In the LWLR training phase, LWLR is used to fit the UWB data and determine the linear regression parameters. In the LS-SVM training phase, the relationship between $K(k')$ and $\delta \tilde{X}(k')$ is established through the training samples determined by Eqs. (13)–(15), where $K$ denotes the gain matrix of EKF-2 and $\delta \tilde{X}(k')$ represents the estimation error of EKF-2 caused by using the UWB data predicted by LWLR.

Fig. 2. The prediction phase

$$\text{Training dataset: } \left[ K(k'), \ \delta \tilde{X}(k') \right] \tag{13}$$

$$\delta \tilde{X}(k') = \delta X(k') - \delta X'(k') \tag{14}$$

$$\begin{cases} \text{EKF-1} \rightarrow \delta X(k') \\ \text{EKF-2} \rightarrow \delta X'(k') \end{cases} \tag{15}$$

2) After the training phase of the data fusion algorithm, the prediction phase follows. In the prediction phase, the UWB data are predicted by the trained LWLR model so that the EKF can fuse the INS data and the fitted UWB data. For the LS-SVM, the EKF-2 gain matrix $K$ is used as the input of the trained LS-SVM model to predict the bias of EKF-2 within the UWB sampling interval, and the INS data are then corrected by the estimation error. These steps are determined by Eqs. (16)–(18).

$$\text{In: } [K(k)], \quad \text{Out: } [\delta \tilde{X}(k)] \tag{16}$$

$$\delta \tilde{X}(k) = \delta X(k) - \delta X'(k) \tag{17}$$

$$X_{INS}(k) = X_{INS}(k) - \delta X(k) \tag{18}$$

Notably, in the LS-SVM used in this paper, the kernel parameter and the regularization parameter are both set to 0.001.
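A minimal sketch of the prediction-phase correction of Eqs. (16)–(18) is given below; `lssvm_error_model` stands in for the trained LS-SVM predictor from Sect. 3.2, and `delta_x_ekf2` for the EKF-2 error estimate available within the UWB sampling interval. Both names are hypothetical and only illustrate the data flow.

```python
import numpy as np

def compensate_ins(x_ins, K_gain, lssvm_error_model, delta_x_ekf2):
    """Prediction phase, Eqs. (16)-(18): the trained LS-SVM maps the EKF-2 gain
    to the estimation error, which is combined with the EKF-2 error and
    subtracted from the INS-derived state."""
    delta_tilde = lssvm_error_model(np.ravel(K_gain))   # Eq. (16): input K(k), output error estimate
    delta_x = delta_tilde + delta_x_ekf2                 # Eq. (17) rearranged for delta X(k)
    return np.asarray(x_ins) - delta_x                   # Eq. (18): corrected INS output
```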

4 Experiment and Analysis

This section verifies and analyzes the effect of the algorithm through simulation experiments. The experimental results are shown in Fig. 3. From Fig. 3 it can be seen that the position error of the INS data modified by the algorithm is significantly reduced. However, the bias suddenly becomes larger at some epochs. This is because the position error of the UWB data suddenly becomes larger at those moments, which leads to large deviations in the UWB data predicted by LWLR.

Fig. 3. Position error (LS-SVM+LWLR+EKF positioning errors in the x, y and z directions (m) versus time (0.02 s))

5 Conclusion

Combining LWLR, EKF and LS-SVM, this paper presents a data fusion algorithm for the indoor quadrotor INS/UWB navigation system. The LWLR algorithm predicts the UWB data within the sampling interval well, which solves the problem of inconsistent sampling frequencies and gives INS and UWB a common sampling frequency. The EKF algorithm suppresses the accumulative errors of the INS data and improves the positioning accuracy. In addition, the positioning error is further reduced by predicting the EKF error within the sampling interval with the LS-SVM algorithm. The simulation results show that the method eliminates the accumulative errors of the INS data and further increases the positioning accuracy of the indoor quadrotor INS/UWB navigation system.

References

1. Djosic, S., Stojanovic, I., Jovanovic, M., Djordjevic, G.L.: Multi-algorithm UWB-based localization method for mixed LOS/NLOS environments. Comput. Commun. 181, 365–373 (2022)
2. Gao, B., Hu, G., Zhong, Y., Zhu, X.: Cubature rule-based distributed optimal fusion with identification and prediction of kinematic model error for integrated UAV navigation. Aerosp. Sci. Technol. 109, 106447 (2021)
3. Li, K., Li, W.: The error model based on the special Euclidean group SE(3) of the INS: comparison and extension. Digital Sig. Process. 132, 103820 (2022)
4. Liu, F., Li, X., Wang, J., Zhang, J.: An adaptive UWB/MEMS-IMU complementary Kalman filter for indoor location in NLOS environment. Remote Sens. 11(22), 2628 (2019)
5. Liu, J., Pu, J., Sun, L., He, Z.: An approach to robust INS/UWB integrated positioning for autonomous indoor mobile robots. Sensors 19(4), 950 (2019)
6. Nan, J., Xie, H., Gao, M., Song, Y., Yang, W.: Design of UWB antenna based on improved deep belief network and extreme learning machine surrogate models. IEEE Access 9, 126541–126549 (2021)
7. Wang, W., Pei, Y., Wang, S.H., Gorriz, J.M., Zhang, Y.D.: PSTCNN: explainable COVID-19 diagnosis using PSO-guided self-tuning CNN
8. Wang, W., Zhang, X., Wang, S.H., Zhang, Y.D.: Covid-19 diagnosis by WE-SAJ. Syst. Sci. Control Eng. 10(1), 325–335 (2022)
9. Wei, J., Wang, H., Su, S., Tang, Y., Guo, X., Sun, X.: NLOS identification using parallel deep learning model and time-frequency information in UWB-based positioning system. Measurement 195, 111191 (2022)
10. Zhao, X., Li, J., Yan, X., Ji, S.: Robust adaptive cubature Kalman filter and its application to ultra-tightly coupled SINS/GPS navigation system. Sensors 18(7), 2352 (2018)

Design of Digital Image Information Security Encryption Method Based on Deep Learning

Licheng Sha(B), Peng Duan, Xinchen Zhao, Kai Xu, and Shaoqing Xi
State Grid Beijing Electric Power Company, Beijing 100031, China
[email protected]

Abstract. Because digital image information overlaps to a high degree, the security encryption ability for digital image information is weak and the encryption accuracy is low. A security encryption method for digital image information based on deep learning is therefore designed. The connection form of the deep learning network is determined and, on this basis, the key learning function is selected and the digital image information is deciphered, completing the deep-learning-based digital image analysis. According to the scrambling result of the image information, a circular index table structure is established, and the secure encryption of digital image information is then realized by solving the value range of the security parameter. The experimental results show that the maximum information overlap index of this method is only 1.36, indicating a strong digital image information security encryption ability that can effectively improve the security encryption accuracy of digital image information. Keywords: Deep Learning · Digital Image · Information Security Encryption · Learning Function · Scrambling Processing · Circular Index Table · Security Parameters

1 Introduction

Digital image encryption is a branch of cryptography and the combination of digital image processing technology and cryptography. Understanding the basic system composition of cryptography is the key to proposing image encryption techniques. The most fundamental problem of cryptography is how to transmit information securely over open channels, and the development of the Internet has provided a shortcut for the rapid transmission and acquisition of information. However, since every participating individual can obtain information on the Internet, encrypting information is particularly important, and preventing illegal individuals from obtaining encrypted information is exactly the subject faced by cryptography. At present, scholars in related fields have carried out research on image encryption. Reference [1] proposes an image information encryption and compression algorithm based on the transport layer in the Internet of Things. On the basis of the original two-dimensional encryption method for square images, combined with the principle of


image supplementary encryption algorithm of two-dimensional chaotic map, a chaotic encryption algorithm suitable for rectangular image encryption is proposed and simulated. This method has good pixel distribution but low encryption accuracy. Reference [2] proposed an image encryption algorithm based on chaotic set. The encryption algorithm can select different chaotic system combinations according to the requirements of encryption strength, use the pixel mean value and pixel coordinate value of image pixels to control the generation of chaotic keys, and enhance the connection between chaotic keys and plaintext data. On the basis of encryption, the ciphertext is cut into 3 pieces of data bit by bit and then disguised and hidden in a processed public image, which changes the appearance characteristics of the ciphertext. Through the image histogram analysis, adjacent pixel correlation analysis and image information entropy analysis of the encrypted image, the effectiveness of the encryption algorithm is shown, but the encryption ability of this method needs to be improved. Aiming at the above problems, a security encryption method for digital image information based on deep learning is designed. According to the deep learning network connection form, the learning function is selected, the image information is deciphered, and the digital image is analyzed. The digital image information security encryption is realized through scrambling processing, cyclic index table construction and security parameter calculation. This method has stronger encryption ability and higher encryption accuracy.

2 Digital Image Analysis Based on Deep Learning

According to the deep learning network connection form, the learning function is selected to decipher the image information, and the input analysis of the digital image information can be realized. This chapter studies the above content.

2.1 Deep Learning Network Fundamentals

The deep learning network architecture contains a large number of data sample parameters, most of which are located in the fully connected layer, and the risk of overfitting is serious. Therefore, two main measures are adopted in the training process to reduce the risk of overfitting. The first is data augmentation. The most direct measure to prevent overfitting is to expand the training data set, allowing the network to learn more features of different images under the same category. Data augmentation is a simple way to expand the data set, such as flipping, cropping and shifting the image without changing its core elements. If all images classified under the same label in the original training set belong to the same set space, misclassification can be avoided during testing. In addition, the RGB pixel values of the original image can be changed by adding specific data to the three components, simulating the original image under different light intensities and colors and changing its brightness, contrast and saturation. The second measure to reduce the risk of overfitting is the dropout operation of the fully connected layers. Ensemble learning emphasizes that combining the outputs of multiple models can achieve better results than a single model. The second method is based on the same principle,


randomly selecting some neurons to deactivate so as to avoid mutual adaptation between different neurons. Each neuron is encouraged to combine with other randomly selected neurons during learning and to learn different features of the image, helping to obtain a better output in the end. The derivation of the deep learning network connectivity criteria involves the joint solution of the neuron coefficient $a$ and its supplementary description condition $a'$. The specific calculation is given in formula (1):

$$a = \lim_{\delta \to \infty} \left[ \delta^{\beta} \left( \hat{S} - \bar{S} \right) \right]^2, \quad \beta \geq \delta \tag{1}$$

In the formula, $\delta$ represents the RGB pixel definition coefficient and $\beta$ represents the data information discarding coefficient in the original image space; the inequality condition $\beta \geq \delta$ always holds. $\hat{S}$ represents the transmission characteristics of data information in digital images, and $\bar{S}$ represents the average value of data information transmission per unit time. The supplementary explanation condition of the neuron coefficient is given in formula (2):

$$a' = \frac{a \chi (\alpha_{\max} - \alpha_{\min})}{d' \cdot |\bar{S}|} \tag{2}$$

In the formula, $\alpha_{\max}$ represents the maximum value of the neuron learning vector, $\alpha_{\min}$ the minimum value of the neuron learning vector, $\chi$ the learning behavior standard of digital image information, $\bar{S}$ the unit accumulation of image data information in the deep learning network, and $d'$ the orientation index. On the basis of formulas (1) and (2), let $s_1, s_2, \ldots, s_n$ represent $n$ unequal sample information parameters satisfying $s_1 \neq 0, s_2 \neq 0, \ldots, s_n \neq 0$; combining the above physical quantities, the standard expression of the deep learning network connection is deduced as formula (3):

$$A_a = (n! - 1)^2 \cdot a' \sqrt{s_1 \cdot s_2 \cdots s_n} \tag{3}$$

Since the value of the index $n$ belongs to the range $(0, +\infty)$, the deep learning network framework has a very strong bearing capacity for data information samples. The layout of the complete deep learning network architecture is shown in Fig. 1. As Fig. 1 shows, the pooling layer extracts the features in the image, compresses the feature map to reduce parameters, and alleviates overfitting. There are two types of pooling layers, the maximum pooling layer and the average pooling layer: the former is sensitive to image texture, while the latter better preserves the image background. The maximum pooling layer is the more commonly used. The specific roles of the pooling layer are: (1) Scale invariance: the pooling operation is performed on local regions of the image. When the image is translated horizontally, vertically or diagonally, the largest feature remains in the local area, and the pooling layer focuses on the details of the image that best represent the feature.


Fig. 1. Deep learning network basic connection architecture (data sample layer → data transmission → pooling layer → learning behavior summary → convolutional network layer → data launch → function expression)

(2) Remove redundant information: the pooling layer compresses the image information most directly. An image contains many pixels and many image features, but the network does not need to fully remember all of them when learning. The pooling layer removes most of the redundant features, allowing the convolutional network to learn the essential details of the image and alleviating overfitting.

2.2 Learning Function Selection

The learning function is built on the deep learning network architecture and uses the security parameters configured by the handshake protocol to support functions such as compression, encryption and encapsulation of the upper-layer data. The complete learning function includes the handshake protocol, the cipher specification change protocol and the warning protocol. The handshake protocol mainly negotiates the session parameters: the protocol version the encrypting parties agree to use, optional authentication of the other party, exchange of session ID information, selection of the encryption and compression algorithms, and the shared secret used to generate the key. The cipher specification change protocol primarily signals changes to the encryption policy [3]. This protocol consists of a single message sent by the client or server, which informs the other party that the subsequent transmission must be performed with a new key. The warning protocol primarily reports error conditions or changes in session state to the counterparty. When the deep-learning-based function expression encrypts digital image information, the client first sends a Client Hello message to the server, which informs the server of the list of cipher suites and compression algorithms it supports. After receiving the message, the server replies with a Server Hello message that specifies which algorithms the two parties will use from the lists provided by the client. At the same time, both parties also need to verify each other's identities, that is, the server provides its own certificate and may ask the client to provide one; this step is not mandatory, because the two parties may have had a normal session before. When the client receives the certificate


of the other party, it authenticates it with the certification authority. If the authentication passes, the client encrypts the shared key used to encrypt the transmitted data and sends it to the other party. After this process is completed, the two parties change the cipher specification, that is, they use the negotiated key to conduct the conversation. At this point the handshake is complete and data transmission begins. When selecting a learning function, it is first necessary to determine the mutual data transmission relationship between the two parties of information encryption, calculated as formula (4):

$$D = \sum_{\varepsilon=1}^{+\infty} \sqrt{\gamma^2 \dot{g} + A_a} \tag{4}$$

In the formula, $\varepsilon$ represents the encoding coefficient of the information to be encrypted, $\gamma$ represents the real-time encoding vector, and $\dot{g}$ represents the encryption feature of the information text in the digital image. The value range of the encrypted information can then be determined on the basis of the data transmission relation, and finally the complete learning function expression is obtained as formula (5):

$$H = \sqrt{1 - \frac{(\log_D h)^2}{f \times (\varphi - 1)^2}} \tag{5}$$

Among them, $h$ represents the data sample that is executing the encryption instruction in the deep learning architecture, $\varphi$ represents the transcoding coefficient of the digital image information, and $f$ represents the encryption template definition item. The key must also be very sensitive: the original image can only be decrypted correctly with the correct key. When encrypting the image information in the deep learning network, the processing host can find that it matches a rule in the library, which indicates that the application traffic is successfully identified. In other words, this method requires a large feature rule database to achieve a good encryption effect.

2.3 Image Information Cracking

Image information cracking identifies applications based on the port numbers in the headers of deep learning network packets, that is, it classifies traffic by mapping port numbers to specific applications. The principle of this classification method is very simple: only the first packet in the network data stream needs to be read for successful identification, so the identification efficiency is very high and the implementation is extremely simple. However, with the development of network technology, this method faces many problems: some application ports may not be registered; some applications use dynamic ports, which may change during data transfer; and some applications use the ports of other common protocols for data transmission in order to avoid system restrictions and conceal their ports. Moreover, since the header port information is hidden


after the traffic is encrypted, it is difficult to identify the encrypted traffic by port number. So-called deep learning uses labeled data to continuously train a model so that the model can predict the results for any data in a given range. In order to judge the quality of model training, the input sample data are usually labeled, and when training ends, the parameter variables of the model are loaded into the classifier that recognizes unknown types [4]. The recognition accuracy of this approach is usually good, so it is also used by researchers in the field of encrypted traffic recognition and classification. Let $k_1$ and $k_2$ represent two unequal digital image information training parameters whose values satisfy formula (6):

$$k_1, k_2 \in (0, 1) \tag{6}$$

By combining formulas (5) and (6), the digital image information cracking expression can be defined as formula (7):

$$J = \frac{\displaystyle\sum_{\iota=1}^{+\infty} \frac{\varphi \cdot \tilde{L}}{|k_1 - k_2|^2}}{\displaystyle\sum_{\iota=1}^{+\infty} q \times j} \times H \tag{7}$$

In the formula, ι represents the initial value of the encryption node labeling coefficient, and L˜ represents the value feature of the digital image information sample in the deep learning network, ϕ represents the sequential encryption parameter of digital image information, q represents the learning vector of digital image information, and j represents the encryption vector of digital image information. The so-called image information cracking is to use a small number of labeled and a large number of unlabeled sample data for model training, and to associate the unlabeled data with the labeled sample data in some way, so as to introduce the information contained in the unlabeled data. The main reason is that the deep learning framework is expensive to label data, so when deciphering image information, it first tries to model the unlabeled sample data, and then predicts the labeled data [5]. The main premise of considering deep learning algorithms is that the distribution of data must not be completely random. Therefore, better classification results can be achieved through the local features of a few labeled samples and the overall distribution of a large number of unlabeled samples.

3 Security Encryption of Digital Image Information

With the support of the deep learning network architecture, according to the execution process of scrambling processing, cyclic index table construction and security parameter calculation, the secure encryption of digital image information is completed, and the smooth application of the deep-learning-based digital image information security encryption method is realized.


3.1 Scrambling

The scrambling of digital image information needs to be organized by a binary tree. A binary tree is a tree structure in which each node has at most two subtrees, called the left subtree and the right subtree. For a complete binary tree, the capacity of each layer of node organization for data samples is exactly the same. Its depth and number of nodes are $w_0$ and $e_0$ respectively, and the numerical relationship between the two can be described as formula (8):

$$\begin{cases} w_0 \geq 1 \\ e_0 \geq 1 \\ w_0 = e_0 \end{cases} \tag{8}$$

A binary tree is a common way to implement a binary search tree and a binary heap. The basic operation of a binary tree is traversal. Since the structure of a binary tree is nonlinear, traversal essentially transforms each node of the binary tree into a linear sequence according to certain rules and order. Generally speaking, the reproduction process of digital image information encryption behavior by the deep learning network can be understood as computer simulation reproduction or optical reproduction. The computer simulation reproduction process can be realized using only a computer. In the optical reproduction method, a corresponding experimental framework needs to be built: when the beam emitted by the laser is collimated and expanded, the spatial light modulator loads the hologram; the expanded beam is irradiated onto the spatial light modulator and reflected by it, and the reflected beam passes through a lens to adjust the size and position of the reproduced image; finally, it is received by the basic learning node and transmitted to the computer for display. The deep learning network can divide the collected information into real object information and virtual object information. The real object information can be obtained through related image acquisition instruments [6], and the virtual object information can be simulated and obtained by computer-related software. In the encryption process, the random phase plate is regarded as the key to restore the original image, and only by obtaining the correct random phase distribution can it be decrypted correctly. Digital image information can be viewed as a collection of many point clouds of data samples, or it can be decomposed into a collection of many levels. The encryption method is to divide the digital image information into plane layers with a certain spacing to obtain the cross-section of each layer, and then project the cross-section of each layer to reconstruct the object [7]. The digital image information can be divided at equal intervals along the direction of the given coordinate axis, then the complex amplitude distribution of the light field of each layer section at the holographic surface is calculated, and the complex amplitudes of all layer sections are superimposed to obtain the total light field distribution of the holographic surface.
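To make the traversal remark above concrete, the short sketch below — a plain textbook in-order traversal, not code from the paper — shows how a binary tree is flattened into a linear sequence according to one common ordering rule.

```python
from dataclasses import dataclass
from typing import Optional, List

@dataclass
class Node:
    value: int
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def inorder(node: Optional[Node], out: List[int]) -> List[int]:
    """In-order traversal: linearize the tree as (left subtree, node, right subtree)."""
    if node is not None:
        inorder(node.left, out)
        out.append(node.value)
        inorder(node.right, out)
    return out

# Example: a three-node complete binary tree is flattened to [2, 1, 3].
root = Node(1, Node(2), Node(3))
print(inorder(root, []))
```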


The solution expression for the scrambling processing of digital image information is given in formula (9):

$$R = \sum_{\kappa=1}^{+\infty} \frac{|w_0 \times e_0|^2 J}{\tilde{y} \cdot |T|} \tag{9}$$

Among them, $\kappa$ represents the digital image information division coefficient, $T$ represents the unit execution time of the digital image information encryption instruction, and $\tilde{y}$ represents the projection length of the digital image information to be encrypted in the deep learning network architecture. In each connection layer of the deep learning network, each neuron sums the locally output digital image information of the previous layer according to certain weights and then feeds the weighted result into the activation function. The activation function is a nonlinear transformation, which prevents the deep learning network from learning only simple linear combinations of its inputs.

3.2 Loop Index Table

The index matrix can be used to scramble image data. The specific steps of indexing the matrix are: first, establish an index matrix; second, find the position of each specific number from 1 to n in each row of the index matrix and generate a permuted index matrix; then rotate the data in these positions to the next position in turn to obtain the scrambled, encrypted result. However, the index matrix is fixed once it is determined, so if the encryption key is obtained, the encrypted image can easily be cracked. To avoid this problem effectively, a scrambling structure based on a dynamic circular scrambling index table is proposed. The corresponding operation steps are as follows: first, use the plaintext and the standard sequence conditions generated by the deep learning network framework; then use the chaotic sequence to control the cyclic shift of the reference sequence to generate a matrix table [8]; finally, transpose the matrix to obtain the circular index table. The circular index table is dynamically generated and strongly correlated with the plaintext, which can effectively resist chosen-plaintext attacks. Let $\lambda_1, \lambda_2, \ldots, \lambda_n$ represent $n$ different digital image plaintext information value indicators and $i_1, i_2, \ldots, i_n$ represent $n$ different digital image plaintext information sample value results; combining formula (9), the cycle index coefficients $Y_1, Y_2, \ldots, Y_n$ can be expressed as formula (10):

$$\begin{cases} Y_1 = 1 - \dfrac{i_1}{\lambda_1 R} \\ Y_2 = 1 - \dfrac{i_2}{\lambda_2 R} \\ \ \vdots \\ Y_n = 1 - \dfrac{i_n}{\lambda_n R} \end{cases} \tag{10}$$
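A minimal sketch of the dynamic circular index table described above is shown below. It assumes a logistic map as the chaotic sequence controlling the cyclic shifts (the paper derives its sequence from the plaintext and the deep learning framework, which is not reproduced here); the parameter values and function names are illustrative.

```python
import numpy as np

def circular_index_table(n, x0=0.3731, r=3.99):
    """Build a dynamic circular index table: a logistic-map chaotic sequence
    (standing in for the plaintext-derived sequence) controls the cyclic shift
    of a reference row; the shifted rows are stacked and the matrix transposed."""
    base = np.arange(n)
    rows = []
    x = x0
    for _ in range(n):
        x = r * x * (1.0 - x)               # logistic-map iterate in (0, 1)
        rows.append(np.roll(base, int(x * n) % n))
    return np.array(rows).T                 # transpose to obtain the circular index table

def scramble_rows(img, table):
    """Permute the pixels of each image row according to the matching table row."""
    idx = table[: img.shape[0], : img.shape[1]]
    return np.take_along_axis(img, idx, axis=1)
```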


Using the circular index table to calculate the encrypted expression of any digital image information, the calculation time can be reduced by algorithm optimization. In practical applications, however, the computing time and efficiency for fully encrypted information parameters vary with the tilt angle of the image: when the inclination angle is too large, computation becomes inefficient and the display quality of the reproduced image degrades. On the basis of formula (10), let $\mu$ represent the tilt angle of the digital image; in the deep learning architecture the value of $\mu$ always belongs to the range $\left(0, \frac{\pi}{2}\right)$. Let $\tilde{u}$ represent the encryption efficiency and $p$ the encryption calculation vector based on deep learning. Combining the above physical quantities, the calculation expression of the loop index table is formula (11):

$$P = \frac{(\cos\mu - 1)(Y_1 + Y_2 + \cdots + Y_n)\,\tilde{u}}{p \cdot \sin\mu} \tag{11}$$

For the image information diffusion part, a parameter fine-tuning method is proposed. The method effectively combines the plaintext image and the key while improving the diffusion effect of the diffusion algorithm, and the parameter fine-tuning method also improves the diffusion effect of existing encryption algorithms. Gaussian noise signals exist in digital images. Gaussian noise is noise whose probability density function obeys a Gaussian distribution and is uniformly distributed in the image information density. For Gaussian noise, the second moment is uncorrelated and the first moment is constant, which refers to the temporal correlation of continuous information. In the process of encrypting digital image information, Gaussian noise is used as additive noise, which improves the encryption security of the information.

3.3 Security Parameters

The number of occurrences of each pixel value in a digital image as a fraction of the total number of plaintext pixels can be regarded as a probability value. Therefore, an advantage of using deep learning coding to encrypt plaintext images is that the encryption result changes with the plaintext, which solves the problem that some algorithms are not strongly related to the plaintext. At the same time, deep learning coding also has a certain compression effect, which can maximize the lossless compression of the image information; it is combined with the circular index table and the scrambling processing mechanism. Finally, the obtained secret image is embedded into a carrier image for hiding, yielding a secure encryption algorithm. Chaos is sensitive to initial conditions: for a system without inherent randomness, as long as two initial values are close enough, the two trajectories starting from them remain close throughout the evolution of the system; but for a chaotic system with inherent randomness, two trajectories starting from very close initial values may drift far apart after a long period of evolution, showing extreme sensitivity to the initial values. The steps for solving the security parameters in the process of digital image information encryption are shown in Fig. 2. As Fig. 2 shows, a significant quality evaluation index for the encrypted image is the histogram analysis of the image. The histogram of the original image is

Fig. 2. Security parameter solution steps (digital image information extraction → template fusion processing → deep learning network framework → data exchange → encryption mechanism, with encryption and decryption templates linked by information feedback)

based on the theme that the image needs to express: some pixel values are very high and some are very low. However, every encrypted image must satisfy histogram equalization, that is, the histogram of the encrypted image is equalized and the pixel values are evenly distributed over the predetermined value interval [9]. A uniform histogram masks the information of the encrypted image and prevents an attacker from using a frequency-statistics attack. According to the characteristics of an image, the values of adjacent pixels are roughly similar and the colors transition smoothly, forming different color blocks that express different content themes. The pixel values of an encrypted image, however, are approximately randomly distributed, and the correlation coefficient can be used to analyze the adjacent-pixel correlation of the original image and the encrypted image. Let $\tilde{z}$ represent the compression coefficient of digital image information, $\xi$ the data information encryption authority based on the deep learning network, $X$ the unit accumulation of encryption operation instructions, and $C$ the directional encryption coefficient. With the support of the above physical quantities and combining formula (11), the security parameter for digital image information encryption is derived as formula (12):

$$M = \frac{(1 - \tilde{z})^{\frac{1}{2}}\, \xi \times P}{C \times |X|} \tag{12}$$

A deep learning network is used to extract the feature maps of fixed hidden layers of the style image and the content image; the hidden layer selected for the content image is generally close to the output layer, while the style image requires multiple, approximately uniformly spaced hidden layers, so that the style-transferred image does not deviate from the original image in content and is close to the target style [10]. A differential attack is a chosen-plaintext attack. The attacker often makes some


changes to the original image and then uses the proposed encryption algorithm to encrypt the original image and the changed image, and analyzes the relationship between the encrypted images before and after the change, hoping to find clues about the encryption key. In order to resist differential attacks, the encrypted image must be completely inconsistent even if the original image is only one pixel different. Two evaluation criteria are often used in analysis to test the degree of differentiation of images after encryption. One is the pixel change rate, and the other is the normalized pixel value change intensity.
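To make the two evaluation criteria concrete, the sketch below computes the pixel change rate (NPCR) and the normalized pixel value change intensity (UACI) under their standard definitions for 8-bit images, together with the adjacent-pixel correlation coefficient mentioned earlier; the sampling size and seed are illustrative choices, not values from the paper.

```python
import numpy as np

def npcr_uaci(c1, c2):
    """Differential-attack metrics for two cipher images whose plaintexts differ
    by one pixel: pixel change rate (NPCR) and normalized pixel value change
    intensity (UACI), using their standard definitions for 8-bit images."""
    c1 = np.asarray(c1, dtype=np.float64)
    c2 = np.asarray(c2, dtype=np.float64)
    npcr = np.mean(c1 != c2) * 100.0                   # percentage of differing pixels
    uaci = np.mean(np.abs(c1 - c2) / 255.0) * 100.0    # mean normalized intensity change
    return npcr, uaci

def adjacent_correlation(img, samples=3000, seed=0):
    """Correlation coefficient of randomly sampled horizontally adjacent pixel
    pairs; near 1 for a natural image, near 0 for a well-encrypted one."""
    rng = np.random.default_rng(seed)
    h, w = img.shape
    ys = rng.integers(0, h, samples)
    xs = rng.integers(0, w - 1, samples)
    a = img[ys, xs].astype(np.float64)
    b = img[ys, xs + 1].astype(np.float64)
    return np.corrcoef(a, b)[0, 1]
```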

4 Experimental Analysis

In order to highlight the practical value of the digital image information security encryption method based on deep learning, the method of reference [1] (an image information encryption and compression method based on the transport layer in the Internet of Things) and the method of reference [2] (an image encryption method based on chaotic sets) are used as comparison methods, and the following comparative experiments are designed.

4.1 Experiment Preparation

In order to test the encryption effect of the proposed method, a network system is built as the experimental environment, as shown in Fig. 3.

Fig. 3. Experimental environment (digital image information input → Windows host → data processing equipment; monitored indicators: image pixel level, information overlap index, image information stripping)

According to Fig. 3, input the selected digital image into the Windows host, close the control switch, and judge the processing capability of the selected encryption method for digital image information according to the indication level of the relevant equipment components. In order to ensure the authenticity of the experimental results, when changing the encryption algorithm, the model of the selected experimental component remains unchanged.


4.2 Steps and Processes

Step 1: Control the Windows host using the deep-learning-based digital image information security encryption method, and record the specific numerical changes of the information nesting index, the degree of information stripping, and the image pixel level;
Step 2: Clear the existing experimental results and debug the Windows host back to its initial state;
Step 3: Use the neural-network-based digital image information security encryption method to control the Windows host, and record the specific numerical changes of the information nesting index, the degree of information stripping, and the image pixel level;
Step 4: Clear the existing experimental results again, and debug the Windows host back to its initial state;
Step 5: Use the digital image information security encryption method based on the kernel function to control the Windows host, and record the specific numerical changes of the information nesting index, the degree of information stripping, and the image pixel level;
Step 6: Compare the obtained experimental results and summarize the specific experimental rules.

4.3 Analysis of Experimental Results

The information overlap index can affect the security encryption ability of the network host for digital images. Without considering other interference conditions, the lower the numerical level of the information overlap index, the stronger the security encryption ability of the network host for digital images. The specific experimental results of the information overlap index for the three experimental methods are shown in Fig. 4. According to Fig. 4, the information overlap index of the proposed method first increases and then decreases, and its maximum value during the whole experiment is only 1.36. The information overlap index of the method of reference [1] also first increases and then decreases; however, its rise in the early part of the increasing stage is significantly larger than in the later part, and its maximum value during the whole experiment reaches 2.00, much higher than the level of the proposed method. The information overlap index of the method of reference [2] first increases and then stabilizes; its maximum value reaches 2.40, the extreme level lasts for 15 min, and its average level is also much higher than that of the proposed method. It can be seen that the proposed method has a strong security encryption ability for digital image information. It is specified that $\psi'$ represents the security vector, and its value is always equal to 1 during this experiment. The encryption accuracy of digital image information is calculated as formula (13):

$$B = \frac{\psi'}{\hat{\omega}_{\max}} \times 100\% \tag{13}$$


Fig. 4. Experimental value of information overlap index (information overlap index versus time in minutes for the reference [1] method, the reference [2] method and the proposed method)

Among them, $\hat{\omega}_{\max}$ represents the maximum value of the information overlap index. Formula (13) is used to calculate the encryption accuracy of digital image information for the three methods, and the specific calculation results are shown in Table 1.

Table 1. Encryption accuracy of digital image information by different methods

Different methods | Encryption accuracy
The reference [1] method | 85.6%
The reference [2] method | 89.2%
The proposed method | 97.4%

According to Table 1, the encryption accuracy of the method of reference [1] and the method of reference [2] are 85.6% and 89.2% respectively, while the encryption accuracy of the proposed method is 97.4%, which proves that the digital image information encryption accuracy of the proposed method is higher. To sum up, the proposed method can effectively control the value result of the information overlap index. Compared with the method of reference [1] and the method of reference [2], the proposed method has stronger security encryption ability of digital image information. It can effectively improve the security encryption accuracy of digital image information, and is more in line with the application requirements of security encryption of digital image information.


5 Conclusion

This paper designs a security encryption method for digital image information based on deep learning. The connection form of the deep learning network is determined, the key learning function is selected, and the digital image information is deciphered. The image information is scrambled, the structure of the circular index table is established, and the security encryption of digital image information is realized by solving the value range of the security parameters. The experimental results show that the maximum value of the information overlap index of this method is only 1.36, indicating a strong digital image information security encryption ability that can effectively improve the accuracy of digital image information security encryption. The method addresses the weak security encryption ability and low encryption accuracy caused by the high degree of overlap of digital image information. However, building a network system as the experimental environment limits the practical application of this study. Therefore, in future research, VC software can be considered for programming, and the generated files can be applied in the Windows system to expand the application scope of image encryption.

References

1. Feng, N.: Algorithm of image information encryption and compression based on transmission layer in Internet of Things. J. Jixi Univ. 20(04), 85–88 (2020)
2. Li, F., Liu, J., Wang, G., et al.: An image encryption algorithm based on chaos set. J. Electron. Inf. Technol. 42(04), 981–987 (2020)
3. Gan, T., Liao, Y., Liang, Y., et al.: Partial policy hiding attribute-based encryption in vehicular fog computing. Soft Comput. 25(16), 10543–10559 (2021)
4. Shuai, H., Xu, X., Liu, Q.: Backward attentive fusing network with local aggregation classifier for 3D point cloud semantic segmentation. IEEE Trans. Image Process. 30, 4973–4984 (2021)
5. Ao, J., Wu, T., Ma, C.: A deep learning reconstruction approach for underwater distortion image. Comput. Simul. 37(08), 214–218 (2020)
6. Abdulwahed, M.N., Ahmed, A.K.: Improved anti-noise attack ability of image encryption algorithm using de-noising technique. TELKOMNIKA (Telecommun. Comput. Electron. Control) 18(6), 3080–3087 (2020)
7. Mohammed, S.J., Basheer, D.: From cloud computing security towards homomorphic encryption: a comprehensive review. TELKOMNIKA (Telecommun. Comput. Electron. Control) 19(4), 1152–1161 (2021)
8. Varkuti, K.S., Manideep, G.: A novel architectural design of light weight modified advanced encryption standard for low power and high speed applications. High Technol. Lett. 27(2), 360–370 (2021)
9. Abdalla, M., Benhamouda, F., Pointcheval, D.: Corrigendum: public-key encryption indistinguishable under plaintext-checkable attacks. IET Inf. Secur. 14(3), 365–366 (2020)
10. López-Santos, F., May-Pat, A., Ledesma-Orozco, E.R., et al.: Measurement of in-plane and out-of-plane elastic properties of woven fabric composites using digital image correlation. J. Compos. Mater. 55(9), 1231–1246 (2021)

Design of Iot Data Acquisition System Based on Neural Network Combined with STM32 Microcontroller

Xinyue Chen(B)
The Hong Kong University of Science and Technology, Hong Kong 999077, China
[email protected]

Abstract. Aiming at the low efficiency and high power consumption of data acquisition in the Internet of Things, a data acquisition system based on the combination of a neural network and an STM32 microcontroller is designed. In the control module, according to the system's functional design and performance requirements, the STM32F103C8T6 minimum system module is used as the control core of the system. The data acquisition terminal adopts a baseplate + core board structure and is composed of a high-speed data acquisition module, the AT91SAM9X25 platform and various expansion function modules. Eight types of environmental data sensor modules are used to build the sensor module. A k-means convolutional neural network is used to classify the Internet of Things data, completing the design of the data classification module. The performance of the system is tested, and the test results show that the system can collect indoor environmental data in real time, the communication rate is higher than 80 Mbps, and the power consumption of the data acquisition terminal is low. Keywords: Neural Network · STM32 Microcontroller · Arm Chip · Internet of Things · Data Acquisition System

1 Introduction

In recent years, the rapid development of the Internet of Things industry has become the strategic commanding point of a new round of economic and technological development in the world. There are traditional electrical appliance manufacturers, as well as some Internet companies and start-up companies; the second is the control center, such as Ali Xiaozhi launched by Alibaba, which integrates many apps; the third is the ecosystem, where the main emerging model is intelligent module + control center + cloud service, as with Xiaomi, Haier, Ali, Tencent, Baidu, etc.; the fourth is the operating system, where Alibaba's YunOS, Tencent's TOS, 360 OS, etc. are being promoted as Internet of Things operating systems [1]. The driving role of the Internet of Things in manufacturing, operation, application and other related industrial chains is becoming more and more prominent. At the same time, with the further upgrading and improvement of the Internet of Things industry standards, the Internet of Things will extend in the direction of


clustering and collaboration, further strengthen industrial cooperation, and accelerate the penetration and integration of the Internet of Things in related industries and fields; further build an independent innovation system, enhance the core competitiveness of the industry; at the same time, give full play to the market advantages, cultivate and expand the Internet of Things industry, and form an all-round integrated global Internet of Things architecture system. At present, the widespread popularity of the Internet of Things has been integrated into all aspects of people’s daily production and life. From the perspective of structural layering, the Internet of Things can be expressed as a three-layer architecture consisting of a perception layer for comprehensive perception, a network layer for reliable transmission, and an application layer for intelligent processing. Various emerging sensing technologies such as surveillance cameras, face recognition technology, and global positioning systems constitute the application terminals of the Internet of Things. Since the standards for sensor device acquisition are not uniform, an intelligent terminal acquisition system is needed to perform unified data acquisition, control, processing, and transmission. Based on this background, the Internet of Things data acquisition system is researched. Reference [2] proposes a NB-IoT wireless sensor IoT data acquisition system. The main controller is designed and implemented by the STM32L476 low-power microprocessor. The SHT30 sensor is used to collect temperature and humidity data, and the 433 MHz wireless communication module is used to realize the data communication between the node and the coordinator. The coordinator uses the NB-IoT Internet of Things technology to realize the For the docking of the platform, the coordinator and the sensor nodes adopt a star-shaped ad hoc network to realize the upload and delivery of data and commands between the sensor nodes and the IoT platform. Reference [3] designed a multi-channel data acquisition system based on the Internet of Things. Collect data through temperature and humidity, carbon monoxide, carbon dioxide and dust collection modules, send the data to the coordinator node through the wireless network, and store, analyze and process the data at the same time. The coordinator node transmits data to the client through the cloud server for monitoring. Under the above background, this paper proposes the design of IoT data acquisition system based on neural network combined with STM32 microcontroller. STM32F103C8T6 minimum system module is used as the control core of the Internet of Things data acquisition system to improve the reliability of the Internet of Things data acquisition system. The data acquisition terminal adopts the baseplate + core board structure, and is composed of high-speed data acquisition module, AT91SAM9X25 platform and various expansion function modules. Eight types of environmental data sensor modules are designed to build sensor modules. The k-means convolution neural network is used to realize the data classification of the Internet of Things, and the design of the data classification module is completed.


2 IoT Data Acquisition System Design

2.1 Control Module Design

In the control module, according to the system's functional design and performance requirements, the STM32F103C8T6 minimum system module is used as the control core of the system. It is mainly composed of the STM32F103C8T6 single-chip microcomputer, the clock circuit, the reset circuit, the power supply, etc. Together with the system's peripheral circuits, it converts the information collected by the sensors into information that it can identify and process. The STM32 series are very powerful 32-bit microcontrollers developed by STMicroelectronics and based on the ARM Cortex-M core. The STM32F103 series uses a 32-bit Cortex-M3 core with a maximum CPU speed of 72 MHz. This product family has 16 KB–1 MB of flash, various control peripherals, a USB full-speed interface and CAN. It offers high integration, good reliability, a rich instruction set, low power consumption, serial programming, and a very low price. According to the relevant experimental data, the STM32 MCU not only consumes less power but also clearly outperforms the MSP430 and C51 series MCUs in processing speed, floating-point operations and environments requiring complex operations. In view of the many advantages of the STM32F103 series, and combined with the system's functional and performance requirements, the STM32F103C8T6 single-chip microcomputer is used as the core processor. The main features of the STM32F103 series of microcontrollers are shown in Table 1. In the STM32F103C8T6, the main function of the system clock circuit is to provide the timing rhythm; this system's clock circuit uses an 8 MHz crystal oscillator. The role of the reset circuit is to restore the system to its initial state. The development board used in this system uses a low-level reset: when the button is released, the RESET input is high; when the button is pressed, the RESET pin input is low and the circuit is reset [4]. Three different startup modes can be selected by setting the BOOT[1:0] pins; the startup modes are shown in Table 2. The C language is used to develop the software for the STM32 microcontroller. This system uses the Keil uVision5 MDK software to write the C program code. Keil uVision5 MDK provides a complete development environment for devices based on Cortex-M, Cortex-R4, ARM7 and ARM9 processors and is designed specifically for microcontroller applications and the most demanding embedded applications. Keil uVision5 MDK includes all software development functions such as program writing, compiling and debugging: it can not only edit the code but also check it for errors, verify the functionality of the code through simulation debugging, and finally connect directly to the microcontroller through the downloader for online debugging and functional testing on the physical hardware. After the system software is written, the designed program is compiled to generate the target .hex file, which is downloaded to the STM32F103C8T6 single-chip microcomputer through the programmer; the microcontroller is then inserted into the socket of the PCB board, the power supply is connected to the finished object, and the program download

Table 1. Main features of STM32F103 series microcontrollers

No. | Item | Main characteristics
1 | ARM 32-bit Cortex-M3 CPU core | Maximum operating frequency of 72 MHz; 1.25 DMIPS/MHz (Dhrystone 2.1) with 0 wait-state memory access; single-cycle multiplication and hardware division
2 | Storage | 64 or 128 KB Flash memory; 20 KB SRAM
3 | DMA | 7-channel DMA controller; supported peripherals: timers, ADC, SPI, I2C
4 | Up to 80 fast I/O ports | 26/37/51/80 multifunctional bidirectional 5 V-compatible I/O ports; all I/O ports can be mapped to 16 external interrupts
5 | Debug mode | Serial wire debug (SWD) and JTAG interfaces
6 | Two 12-bit analog-to-digital converters (up to 16 channels) | Conversion range: 0 to 3.6 V; 1 µs conversion time; dual sample-and-hold capability; temperature sensor

Table 2. Boot mode table

No. | Startup mode | Select pin BOOT0 | BOOT1
1 | Main Flash memory | 1 | 0
2 | System memory | 1 | 1
3 | Built-in SRAM | 0 | 1

During the download process, if the program download fails, it is necessary to check whether the serial port and the microcontroller model have been selected correctly; if the previous settings are correct, the microcontroller should be powered off and then powered on again before downloading. Software debugging needs to be combined with the hardware. The debugging steps are roughly as follows:

(1) Open the Keil uVision5 MDK software, create a new µVision project, select the type of single-chip microcomputer chip, and write the program code, or open program code that has already been written;
(2) Compile the program code through the "Build Target" function of the Keil uVision5 MDK software;
(3) Correct program errors and recompile until the error count is 0, then use the "Debug" function to simulate and debug;


(4) Through the "Options for Target" function in the "Project" menu of Keil uVision5 MDK, check "Create HEX File" in the "Output" tab, then click "Compile", "Build", and "Rebuild" in the toolbar in turn; the software generates the .hex target file, which is burned into the STM32 microcontroller to check its operation;
(5) Confirm whether the software functions meet the design requirements; otherwise, return to the first step and revise the code.

2.2 Design of Data Acquisition Terminal

The data acquisition terminal adopts a baseplate + core board structure, which maximizes the scalability and versatility of the system. It is mainly composed of a high-speed data acquisition module, the AT91SAM9X25 platform, and various expansion function modules. According to the application scenario, the backplane, sensors, and USB devices can be replaced, which shortens the project development cycle and gives the terminal strong flexibility [6]. The high-speed data acquisition module is composed of a high-performance sampling chip and an FPGA chip and is responsible for high-speed data acquisition; the AT91SAM9X25 platform is responsible for data processing and for realizing the other extended functions.

In order to collect external environmental data at different rates, a variety of interfaces are designed in the data collection part. External data with high-speed instantaneous fluctuations are sampled by the high-speed data acquisition module; external data and sensor data at ordinary rates are acquired directly, with external digital signals received through the SPI interface, the RS485 interface, and general-purpose GPIO ports, and the analog signals output by the sensors collected directly through the on-chip multi-channel ADC. The core of the data processing part is the AT91SAM9X25 platform: the main processor is an ARM926EJ-S chip with a main frequency of 400 MHz, 128 MB DDR2, and 256 MB NAND Flash; the platform provides a high-speed USB 2.0 interface, a 10/100 Mbps Ethernet MAC, two HS SD Card/SDIO/MMC interfaces, USART, SPI, I2S, multiple TWIs, and a 10-bit ADC, and it is plugged into the baseplate through pin headers. The data processing part controls each I/O port, the communication module, the storage module, and the video monitoring module of the terminal to meet the terminal's expansion requirements. The system frame of the terminal data processing part is shown in Fig. 1.

Digital signal interface: the system can communicate with any sensor that has an RS485 interface, or read external environmental data into the system through a general-purpose GPIO port. Communication module: the user can view the collected data in real time, remotely or locally, through the wired Ethernet port, a 4G modem, or the wireless WiFi interface, and can transmit the collected data to the data management center in a wired or wireless manner [7]. Video surveillance module: displays external environmental conditions in real time through a USB camera. Data storage module: realizes the local cache of data and supports storage on mobile devices or in Flash.
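As a concrete illustration of the low-rate acquisition path described above, the following is a minimal sketch (not the authors' firmware) of polling one analog sensor channel through an on-chip ADC with the ST HAL library and forwarding the reading over a UART. The handle names hadc1 and huart1, the 3.3 V reference, and the reporting format are assumptions introduced here for illustration only; converting the voltage into a gas concentration would additionally require the sensor's calibration curve.

```c
#include "stm32f1xx_hal.h"
#include <stdio.h>

extern ADC_HandleTypeDef  hadc1;   /* analog sensor channel (assumed handle set up elsewhere) */
extern UART_HandleTypeDef huart1;  /* link to the acquisition terminal (assumed handle)        */

/* Poll one conversion and scale the 12-bit result to a voltage. */
static float read_sensor_voltage(void)
{
    uint32_t raw = 0;

    HAL_ADC_Start(&hadc1);
    if (HAL_ADC_PollForConversion(&hadc1, 10) == HAL_OK)
        raw = HAL_ADC_GetValue(&hadc1);          /* 0..4095 */
    HAL_ADC_Stop(&hadc1);

    return (float)raw * 3.3f / 4095.0f;          /* VDDA assumed to be 3.3 V */
}

/* Read one sample and forward it over the UART as plain text. */
void sample_and_report(void)
{
    char  msg[48];
    float v = read_sensor_voltage();
    int   n = snprintf(msg, sizeof msg, "V=%u.%03u\r\n",
                       (unsigned)v, (unsigned)((v - (unsigned)v) * 1000.0f));

    HAL_UART_Transmit(&huart1, (uint8_t *)msg, (uint16_t)n, 100);
}
```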

Fig. 1. System framework of the terminal data processing part (the AT91SAM9X25 platform connects the digital signal interface, communication module, video monitoring module, data storage module, and power supply)

The AT91SAM9X25 platform contains 12 channels of 10-bit AD sampling, which can satisfy some low-speed external environmental data acquisition; however, for data that are high-speed, transient, strongly fluctuating, or require high acquisition accuracy, the high-speed AD data acquisition module of the data acquisition part must be used. Analysis of the data collection principle shows that, for high-speed external environmental data, the real-time sampling resolution needs to be improved, the sampling interval must be shortened, and the collected points must be maintained and stored [8]. The data collection terminal not only needs to complete real-time data collection but also needs to cache the collected data locally for subsequent processing. The terminal is therefore designed to provide large-capacity storage of collected data on an SD card or a USB flash drive, which is convenient for data export; data can also be saved to on-board Flash memory. The SD card is a new generation of multimedia memory card with high-speed data storage, safe hot plugging, and safe data writing, and it is compatible with MMC cards: an interface that accepts an SD card also accepts an MMC card. The USB interface offers high speed, two-way synchronous transmission, and hot pluggability, which is convenient for realizing the terminal's auxiliary expansion functions; the data acquisition terminal expands the video monitoring function, wireless WiFi, 4G dial-up Internet access, USB flash drives, and so on through the USB interface. The terminal realizes wireless access to the IoT cloud platform for the collected data by using WiFi based on a USB WiFi module over short distances and a 4G modem based on a USB cellular module over long distances. The WiFi module is the Leike NW362, which is based on the RTL8192 chip, supports USB 2.0, has a transmit power of 20 dBm and a wireless connection rate of up to 300 Mbps, supports the IEEE 802.11b/g/n wireless standards, and supports multiple data encryption methods and the WPA security mechanism. The cellular module is SIMTECH's SIM7100C, which provides


a USB interface, supports a variety of mobile frequency bands, and supports dial-up Internet data services and SMS services.

2.3 Sensor Module Design

As the nerve endings of the Internet of Things, sensors are an important element for fully perceiving the natural world, and the large-scale application of various sensors is a necessary basic condition for the Internet of Things. Sensors can be divided into humidity sensors, magnetic sensors, gas sensors, chemical sensors, and so on according to their applications. The system adopts eight types of environmental data sensor modules: the DS18B20 digital temperature sensor module, the DHT11 digital temperature and humidity sensor module, the MQ-2 smoke sensor module, the MQ-5 LPG sensor module, the MQ-7 carbon monoxide sensor module, the MQ-137 ammonia gas sensor module, the MQ-138 aldehyde/ketone/alcohol gas sensor module, and the MG-811 carbon dioxide sensor module [9].

The DS18B20 digital temperature sensor, produced by the American company DALLAS, is an excellent digital temperature sensor with high accuracy: a 12-bit binary conversion result, ±0.5 °C accuracy, and 0.0625 °C resolution. The DHT11 sensor offers ±5% RH humidity accuracy, combined relative humidity and temperature measurement, full calibration with digital output, excellent long-term stability, no additional components, a long signal transmission distance of up to about 20 m, ultra-low energy consumption, a 4-pin package, full interchangeability, and suitability for harsh environments. The MQ-2 smoke sensor is mainly used to detect smoke caused by various gas leaks; it can accurately measure the smoke concentration caused by liquefied gas, natural gas, and other gases, has good sensitivity, a long service life and reliable stability, and fast response and recovery, and it can be used in homes or factories. The MQ-5 liquefied petroleum gas sensor is suitable for monitoring liquefied gas, natural gas, and coal gas in households or industry; it has good sensitivity to liquefied gas, natural gas, and city gas, almost no response to ethanol and smoke, fast response and recovery, a long service life and reliable stability, and a simple test circuit. The MQ-7 carbon monoxide sensor is mainly used to detect carbon monoxide and similar gases in industrial or ordinary home environments; it offers high sensitivity, a long service life, and strong stability. The MQ-137 ammonia gas sensor is a harmful-gas detection device suitable for households, factories, and atmospheric environments and is intended for ammonia vapor detection; it offers fast response and recovery, high sensitivity, a simple test circuit, and long-term working stability. The MQ-138 sensor is commonly used for the detection of aldehyde, ketone, and alcohol gases; it has a wide detection range, fast response and recovery, high sensitivity, long-term working stability, and a simple test circuit. The MG-811 carbon dioxide sensor is suitable for detecting carbon dioxide concentration in homes and businesses; it has good sensitivity and selectivity to carbon dioxide, is little affected by ordinary changes in temperature and humidity, and has good stability and reproducibility.


The range data of the eight types of environmental data sensor modules are shown in Table 3.

Table 3. Range data of the environmental data sensor modules

No. | Sensor | Sensor type | Range
1 | DS18B20 | Digital temperature sensor | Temperature range: −55 °C to +115 °C
2 | DHT11 | Digital temperature and humidity sensor | Humidity range: 20–80% RH
3 | MQ-2 | Smoke sensor | Smoke range: 300–20000 ppm
4 | MQ-5 | LPG sensor | Liquefied gas, natural gas, coal gas range: 500–6000 ppm
5 | MQ-7 | Carbon monoxide sensor | CO concentration range: 5–1200 ppm
6 | MQ-137 | Ammonia sensor | Ammonia gas range: 15–350 ppm
7 | MQ-138 | Aldehyde, ketone and alcohol sensor | Formaldehyde 2–20 ppm, acetone 5–500 ppm, methanol 2–200 ppm
8 | MG-811 | Carbon dioxide sensor | CO2 range: 2–20000 ppm

The environmental data sensors perform their respective duties and monitor the corresponding values in the surrounding environment. Because the collected data come in various formats (units, value ranges), a coordinator is needed to manage them, which gives rise to a sink node, that is, a coordination and management node. The coordination and management node mainly aggregates the data of the eight environmental data sensors, formats them, and exchanges data with the TQ6410 gateway. The sink node used is the STM32W108.

2.4 Data Classification Module

The data classification module is designed based on a neural network, and a k-means convolutional neural network is used to implement IoT data classification. The design of the k-means convolutional neural network is as follows. K-means is a vector quantization method originating in signal processing that is now widely used in machine learning as a clustering algorithm. Its purpose is to divide m data sample points into K clusters so that each point belongs to the category whose cluster center (mean) is closest to it, and to use this as the clustering criterion. The dataset is represented by:

Y = (y1, y2, ···, ym)   (1)


In formula (1), ym refers to the m-th data sample point, and each data sample in Y is a D-dimensional real vector. K-means clustering divides these m data samples into K (K ≤ m) sets, as follows:

A = {A1, A2, ···, AK}   (2)

The objective is to minimize the within-cluster sum of squares; that is, the goal of K-means is to find

arg min_A Σ_{j=1}^{K} Σ_{y∈A_j} ||y − o_j||²   (3)

In formula (3), oj refers to the mean of the data in the j-th cluster Aj. Assuming that the initial K mean points are known, the standard K-means algorithm alternates between the following two steps during learning: (1) Assignment: using the Euclidean distance, each data point is assigned to the cluster whose center is closest to it. (2) Update: the new cluster center of each cluster obtained in step (1) is calculated as the new mean point. The algorithm converges when none of the cluster centers changes. It should be noted, however, that since there are only finitely many possible assignments, this method generally converges only to a local optimum [10]. In semi-supervised or unsupervised learning, K-means clustering is often used for feature learning: the main purpose is to obtain a K-means cluster representation by training on unlabeled data and then to map arbitrary input data into the new feature space. After K-means clustering, the cluster centers can be used to extract features. The specific steps are as follows: (1) convolve the cluster centers obtained above with the input data to obtain the features of the input data; (2) pool the resulting feature maps in order to reduce the feature dimensionality, speed up computation, reduce the network size, and obtain a certain degree of translation invariance. Through these steps, the output feature map of the k-means-based convolutional layer (the unsupervised learning stage) is obtained; a minimal sketch of this procedure is given below. Combined with the hardware modules designed above, the design of the IoT data acquisition system based on a neural network combined with an STM32 microcontroller is realized.
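The following is a minimal, self-contained sketch (not the paper's implementation) of k-means-based convolutional feature extraction on a one-dimensional sensor sequence: k-means learns a few centers from sliding windows, each center is then used as a convolution filter, and the responses are max-pooled into one feature per center. The window length, the number of centers, and the synthetic signal are illustrative assumptions.

```c
#include <stdio.h>
#include <float.h>

#define W      8   /* window (filter) length, illustrative     */
#define K      4   /* number of cluster centres/filters        */
#define ITERS 20   /* k-means iterations                       */
#define MAXP 256   /* maximum number of training windows       */

/* Squared Euclidean distance between two windows of length W. */
static double dist2(const double *a, const double *b)
{
    double s = 0.0;
    for (int i = 0; i < W; i++) { double d = a[i] - b[i]; s += d * d; }
    return s;
}

/* Standard k-means (formula (3)): alternate assignment and update steps
 * to learn K centres from n training windows (n >= K assumed). */
static void kmeans(double patches[][W], int n, double centres[K][W])
{
    int assign[MAXP];

    for (int k = 0; k < K; k++)                  /* initialise with the first K patches */
        for (int i = 0; i < W; i++) centres[k][i] = patches[k][i];

    for (int it = 0; it < ITERS; it++) {
        for (int p = 0; p < n; p++) {            /* assignment step */
            double best = DBL_MAX;
            for (int k = 0; k < K; k++) {
                double d = dist2(patches[p], centres[k]);
                if (d < best) { best = d; assign[p] = k; }
            }
        }
        for (int k = 0; k < K; k++) {            /* update step: new means */
            double sum[W] = {0.0}; int cnt = 0;
            for (int p = 0; p < n; p++)
                if (assign[p] == k) {
                    cnt++;
                    for (int i = 0; i < W; i++) sum[i] += patches[p][i];
                }
            if (cnt > 0)
                for (int i = 0; i < W; i++) centres[k][i] = sum[i] / cnt;
        }
    }
}

/* Unsupervised feature extraction: "convolve" the input sequence with each
 * learned centre (dot product at every offset) and max-pool over the whole
 * sequence, producing one feature per centre. */
static void extract_features(const double *x, int len,
                             double centres[K][W], double feat[K])
{
    for (int k = 0; k < K; k++) {
        double best = -DBL_MAX;
        for (int t = 0; t + W <= len; t++) {
            double r = 0.0;
            for (int i = 0; i < W; i++) r += x[t + i] * centres[k][i];
            if (r > best) best = r;
        }
        feat[k] = best;
    }
}

int main(void)
{
    double signal[64], patches[MAXP][W], centres[K][W], feat[K];
    int n = 0;

    for (int t = 0; t < 64; t++)                 /* synthetic sensor sequence */
        signal[t] = (t % 16 < 8) ? 1.0 : 0.0;

    for (int t = 0; t + W <= 64 && n < MAXP; t += 2, n++)  /* sliding windows */
        for (int i = 0; i < W; i++) patches[n][i] = signal[t + i];

    kmeans(patches, n, centres);
    extract_features(signal, 64, centres, feat);

    for (int k = 0; k < K; k++) printf("feature %d = %.3f\n", k, feat[k]);
    return 0;
}
```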

3 System Test

3.1 Indoor Environment Data Collection

The performance of the designed IoT data acquisition system based on a neural network combined with an STM32 microcontroller is tested by using the system to collect indoor environmental data. The collected indicators mainly cover three aspects: safety indicators,


comfort indicators, and health indicators. Among them, the safety indicators focus on remote video monitoring of the indoor environment to know the status of the home; the comfort indicators monitor indoor temperature, humidity, and similar quantities to keep the indoor environment comfortable; and the health indicators monitor indoor inhalable particles and some harmful gases, raising a real-time alarm when their concentration exceeds the preset threshold. This part mainly tests the data acquisition and data processing capabilities of the system. Owing to the limitations of the laboratory sensor conditions, the test takes temperature, humidity, smoke, carbon monoxide, and carbon dioxide as examples to verify the data acquisition capability of the system and the data acquisition terminal's wireless access to the IoT cloud platform. The test hardware consists of a regulated voltage source, the data acquisition terminal, a video camera, a signal source, a host computer, and the DS18B20 digital temperature sensor module, DHT11 digital temperature and humidity sensor module, MQ-2 smoke sensor module, MQ-7 carbon monoxide sensor module, and MG-811 carbon dioxide sensor module. To test the data acquisition function, the signal source is used to simulate a high-speed signal and is connected to the data acquisition terminal. After the data acquisition terminals and sensor devices are added on the IoT cloud platform, the platform can display the data of the remote networked terminals in real time, view the collected data, and perform operations such as analysis and supervision; the collected data can be drawn as curves, pie charts, or bar charts. In the experiment, the front-end sensors continuously sense the external data, and the data collection terminal reports the data to the cloud platform every 5 s, corresponding to each node in the charts.

3.2 Data Acquisition Performance Test

The temperature, humidity, smoke, carbon monoxide, and carbon dioxide data collected in the experiment are shown in Table 4. According to the test data in Table 4, the designed system achieves real-time collection of indoor environmental data, which proves that it has good data collection performance.

3.3 System Communication Rate Test

Taking the Reference [2] system and the Reference [3] system as the comparison methods, the communication rates of the different systems are tested, and the results are shown in Fig. 2. According to the test results in Fig. 2, the communication rate of the system in this paper is higher than 80 Mbps and remains relatively stable, whereas the communication rates of the reference comparison systems fluctuate greatly and are lower. It can be seen that the system in this paper has both a higher communication rate and stronger stability.


Table 4. Data collected in the experiment

No. | Time (s) | Temperature (°C) | Humidity (% RH) | Smoke (mg/m³) | Carbon monoxide (ppm) | Carbon dioxide (ppm)
1 | 500 | 25.30 | 36.25 | 34 | 10.10 | 200.36
2 | 1000 | 27.32 | 34.25 | 35 | 9.52 | 224.30
3 | 1500 | 26.85 | 36.20 | 28 | 8.63 | 212.62
4 | 2000 | 25.47 | 35.41 | 30 | 10.52 | 218.32
5 | 2500 | 26.41 | 36.20 | 30 | 11.63 | 204.36
6 | 3000 | 25.32 | 38.21 | 31 | 12.01 | 224.32
7 | 3500 | 24.10 | 36.32 | 32 | 9.32 | 201.32
8 | 4000 | 25.47 | 36.41 | 35 | 7.52 | 214.36
9 | 4500 | 25.63 | 36.20 | 36 | 9.30 | 214.32
10 | 5000 | 25.47 | 35.96 | 32 | 7.32 | 212.20
11 | 5500 | 26.52 | 36.14 | 30 | 8.52 | 231.0
12 | 6000 | 24.75 | 36.52 | 28 | 7.63 | 214.20

Fig. 2. Communication rate test results of the designed system (communication rate in Mbps versus running time in s, comparing the Reference [2] system, the Reference [3] system, and the system in this paper)

3.4 Power Consumption Test of the Data Acquisition Terminal

The power consumption test results of the data acquisition terminal of the designed system are shown in Table 5. The results in Table 5 show that, compared with the reference comparison systems, the power consumption of the data acquisition terminal of the system in this paper remains consistently lower, which indicates that the system has good energy-saving performance.

Table 5. Data acquisition terminal power consumption test results

No. | Acquisition time (s) | Power consumption (W): This article system | Reference [2] system | Reference [3] system
1 | 500 | 0.25 | 0.56 | 0.64
2 | 1000 | 0.48 | 0.78 | 0.82
3 | 1500 | 0.62 | 0.88 | 0.91
4 | 2000 | 0.85 | 1.20 | 1.15
5 | 2500 | 1.02 | 1.32 | 1.30
6 | 3000 | 1.26 | 1.48 | 1.51
7 | 3500 | 1.48 | 1.83 | 1.96
8 | 4000 | 1.68 | 2.01 | 2.09
9 | 4500 | 1.88 | 2.28 | 2.45

4 Conclusion

The data acquisition system is the layer of the Internet of Things closest to the perception of the surrounding environment; it acts as the tentacles of the Internet of Things and is one of its most important parts. Experimental verification shows that the designed IoT data acquisition system can realize real-time collection of indoor environmental data, that the power consumption of the data acquisition terminal is low, and that the system communication rate is stable. However, the designed system still has some shortcomings; for example, it cannot yet collect data from multiple IoT networks simultaneously. In the future, some expansion and improvement can be made on this basis.

References

1. Wang, Y., Gao, Y.: State prediction simulation of IoT terminal secure access data gateway. Comput. Simul. 39(4), 341–344, 356 (2022)
2. Yan, G., Huang, X., Haiwu, X.: Design of a wireless sensing data acquisition system for NB-IoT. Modern Electron. Technol. 44(18), 38–42 (2021)
3. Jia, J., He, Y., Hong, Y.: Design of multi-channel data acquisition system based on Internet of Things. Ind. Instrum. Autom. 4, 21–24 (2020)
4. Wang, M., Liu, W., Xu, Y., et al.: Single-chip hybrid integrated silicon photonics transmitter based on passive alignment. Opt. Lett. 47(7), 1709–1712 (2022)
5. Vinjamuri, U.R., Rao, B.L.: Efficient energy management system using Internet of Things with FORDF technique for distribution system. IET Renew. Power Gener. 15(3), 676–688 (2021)
6. Park, J., Kim, S., Youn, J., et al.: Low-complexity data collection scheme for UAV sink nodes in cellular IoT networks. IEEE Trans. Veh. Technol. (99), 1 (2021)
7. Sun, Z., Wei, Z., Yang, N., et al.: Two-tier communication for UAV-enabled massive IoT systems: performance analysis and joint design of trajectory and resource allocation. IEEE J. Sel. Areas Commun. (99), 1 (2020)
8. Sun, Z., Zhang, X., Wang, T., et al.: Edge computing in Internet of Things: a novel sensing-data reconstruction algorithm under intelligent-migration strategy. IEEE Access (99), 1 (2020)
9. Kim, T., Qiao, D.: Energy-efficient data collection for IoT networks via cooperative multi-hop UAV networks. IEEE Trans. Veh. Technol. 69(11), 13796–13811 (2020)
10. Fazlollahtabar, H.: Internet of Things-based SCADA system for configuring/reconfiguring an autonomous assembly process. Robotica 40(3), 672–689 (2022)

Research on Network Equilibrium Scheduling Method of Water Conservancy Green Product Supply Chain Based on Compound Ant Colony Algorithm

Weijia Jin1,2(B) and Shao Gong1,2

1 Zhejiang Tongji Vocational College of Science and Technology, Hangzhou 311231, China
[email protected]
2 School of Humanities and Communication, University of Sanya, Sanya 572022, China

Abstract. In order to improve the balanced scheduling performance of the water conservancy green product supply chain network, a balanced scheduling method for the water conservancy green product supply chain network based on the compound ant colony algorithm is proposed. The logistics distribution model of the water conservancy green product supply chain is built to obtain the competitive relationship between supply chains. By calculating the entropy values of the supply chain network scheduling tasks, the priority weights of supply chain network scheduling are ranked. Based on population fusion calculation with the compound ant colony algorithm, the balanced scheduling model of the water conservancy green product supply chain network is constructed to realize balanced scheduling of the network. The experimental results show that the proposed method performs better in terms of supply chain balance, scheduling responsiveness, and storage facility utilization for water conservancy green products.

Keywords: Compound Ant Colony Algorithm · Green Products · Balanced Scheduling · Supply Chain Network · Feature Extraction · Network Optimization

1 Introduction

At present, the constraints that resource and environmental issues place on sustainable development are increasingly prominent. Relying on the supply relationships between upstream and downstream enterprises, actively manufacturing and promoting green products is one of the effective ways to raise the green level of the entire supply chain [1]. However, the ecological transformation of an enterprise is often accompanied by a large cost input; these additional costs are passed down the industrial chain level by level and are eventually transferred to customers. Whether customers are willing to pay a sufficient premium to make up for the additional costs is the key driving force that encourages manufacturing enterprises to take the


initiative to adopt green strategies. Although green products can weaken the negative impact on the environment, customers' green consumption awareness has not yet been popularized; high manufacturing costs and product prices make it difficult for green products to gain a competitive advantage in the market, which to a large extent also makes the income of each member of the green supply chain uncertain [2]. How to balance the costs and profits of green products among members and improve the performance of the green supply chain system is therefore a problem that many managers and scholars have long been concerned with.

In domestic research, Tang Liang et al. considered multiple different types of collaborative supply chain networks with interactive characteristics, constructed an objective function that minimizes production cost, inventory cost, waiting cost, and order backorder cost, and designed consolidation decision variables to build start-time constraints for similar orders within the same collaborative enterprise. The model considers two types of orders, deterministic and stochastic, and specifies the arrival probability of stochastic orders at discrete time points within an interval. To obtain the optimal production scheduling strategy of the collaborative supply chain network, four sub-decision models are constructed for the scenarios with and without random order arrivals, and a main decision model is further designed to judge the cost difference between scheduling strategies that do or do not schedule random-order collaborative production in advance. The simulation results show that the consolidation decision brings production cost benefits but also delays some orders, and that different types of collaborative supply chain networks have different anti-interference capabilities against random orders. Gao Jibing et al. [4] noted that operation interruptions caused by preventive maintenance of network equipment in the coal supply chain affect the total network traffic to a certain extent; in practice, effective scheduling is used to reduce the impact of such interruptions, but current manual scheduling is inefficient and can only handle limited time intervals. Based on the network flow characteristics of the coal supply chain and the characteristics of interruption scheduling, they constructed a mixed integer programming model; for the problem's special network flow and scheduling structure, two Benders decomposition algorithms, CBD and BBC, are used to solve it and are compared. Finally, improved solutions based on BBC are designed that integrate the advance of pre-flows and add valid inequalities. The results of two groups of examples show that BBC solves the problem better than CBD and that the efficiency of the improved BBC-based algorithm is significantly higher. In foreign studies, Duan C et al. [5] studied the dynamic multi-cycle supply chain network equilibrium problem considering marketing and corporate social responsibility, in order to explore the optimal marketing and corporate social responsibility strategies in a dynamic multi-cycle supply chain network system. The multi-cycle CLSCN system includes manufacturers, retailers, recyclers, and demand markets. Based on Nash non-cooperative game theory and variational inequalities, the optimal behaviors and equilibrium conditions of the members are derived, and a new multi-cycle supply chain network


equilibrium model is constructed. In this model, marketing is the responsibility of manufacturers and retailers, while corporate social responsibility is the responsibility of manufacturers. Numerical examples verify the validity of the model and analyze the impact of marketing and corporate social responsibility on the equilibrium results: when retailers are responsible for marketing and manufacturers keep their corporate social responsibility activities at a high level in the early stage, the outcome is most conducive to a diversified supply chain network system and to social welfare. Based on these conclusions, management implications are proposed from the perspectives of enterprises and government.

In addition, during the overall operation of the green supply chain, customer behavior is an important factor affecting the decisions of members in each link. When water conservancy green products enter the consumer market, different customers often have different valuations or willingness to pay. With the growth of online e-commerce platforms, convenient shopping channels and open, transparent product information make customers' purchasing behavior more complex and changeable: before making a final purchase decision, customers analyze the product's performance, its price, and the retailer's sales strategy, and they often postpone the purchase to a later period, which hurts the enthusiasm of green manufacturers for production and damages the interests of retailers. For all members of the green supply chain, how to set a price and an environmental protection level that satisfy customers' consumption psychology while maximizing their own interests, in light of customers' actual purchase behavior, is therefore one of the important problems faced at present. As resource and environmental problems worsen and market competition intensifies, more and more enterprises choose to produce green products and to use effective marketing tools, such as pre-sale strategies, to stimulate customer demand, in order to follow the trend of green development, improve their competitive strength, and obtain more revenue. Under the premise of green product pre-sale, building a green supply chain decision-making model around the specific purchase behavior of customers in the actual market and studying the decisions of each member can enrich and extend the application of customer behavior theory and green supply chain management theory in the consumer market. With the rapid development of e-commerce, the time and space constraints of the market have been broken; increasingly transparent product information and fiercer market competition make customers' consumption concepts diversified and personalized, and purchasing behavior shows a complex and dynamic trend. Affected by this, the core enterprises in the green supply chain face unprecedented challenges when making decisions. Therefore, under the pre-sale mode and based on customer behavior, studying the optimal decision-making of the green supply chain under different decision-making situations and contract mechanisms can effectively avoid the adverse impact of customers' strategic waiting behavior on enterprises and create more revenue for them, thereby promoting green manufacturing and encouraging customers to consume green products.


Based on the above research background, this paper applies the compound ant colony algorithm to the balanced scheduling of the water conservancy green product supply chain network, so as to ensure the stability of the water conservancy green product supply chain network.

2 Design of Network Equilibrium Scheduling Method for Water Conservancy Green Products Supply Chain

2.1 Build the Logistics Distribution Model of Water Conservancy Green Product Supply Chain

In practice, the logistics distribution center of the water conservancy green product supply chain serves multiple demand points, so a logistics distribution model of the supply chain is constructed. The logistics distribution process is as follows. The distribution center has M special vehicles of the same type and load capacity; all distribution vehicles start from the same distribution center, pass through the various distribution points, and return to the distribution center after completing their tasks. The distribution center can obtain and update data in real time, including the geographical location of all demand points requiring service and the quantity of water conservancy green products to be distributed. The distributed water conservancy green products are all regarded as one type, and their quality continues to decline over time; Qi denotes the demand for green water conservancy products at demand point i. Each demand point expects delivery within its scheduled time, and a delivery that arrives after the specified time can be rejected by the demand point. The model is established to find the optimal distribution scheme under the conditions that the cost is minimized, the maximum load capacity of each allocated vehicle is not exceeded, and the time windows of the demand points are not violated.

M vehicles (ants) are placed among N demand points. At each step, each vehicle (ant) selects, according to certain criteria, the next demand point that it has not yet visited; after a step is completed, from one demand point to another, or a cycle is completed in which all N demand points have been visited, the pheromone concentration on all paths is updated. Assuming that there are N demand points in total and the number of vehicles (ants) is M, formula 1 holds:

M = Σ_{i=1}^{N} B_i(t)   (1)

where B_i(t) denotes the number of vehicles (ants) located at demand point i at time t. The set {ε_ij(t)} of pheromone concentrations on the distribution paths at time t is maintained, where ε_ij(t) is the pheromone concentration on the path from node i to node j at time t.


After the distribution vehicles of the water conservancy green product supply chain are determined, a tabu list is built and the transition rule shown in Formula 2 is used:

P_ij^k(t) = [ε_ij(t)]^α [γ_ij(t)]^β / Σ_{s∈allowed_k} ([ε_is(t)]^α [γ_is(t)]^β), if j ∈ allowed_k; otherwise P_ij^k(t) = 0   (2)

Here α is the information heuristic factor, used to evaluate the relative importance of the distribution routes; β is the expected heuristic factor, used to evaluate the relative importance of visibility; allowed_k is the set of nodes that vehicle k can currently select; and γ_ij(t) is the visibility at time t, which expresses the expected degree of transfer of a vehicle (ant) from node i to node j. The tabu list records each customer whose assigned task the ant has completed and is kept updated throughout the search process [6]. After a period of time, the entire path search is completed and all tabu lists are full; the path length of each vehicle (ant) is then calculated and compared to obtain the best path, which is saved and recorded, and the pheromone on the path is updated. When the number of iterations reaches the upper limit, the final path obtained is the shortest of all vehicle (ant) paths. The pheromone update rules are Formula 3 and Formula 4:

ε_ij(t + N) = φ · ε_ij(t) + Δε_ij(t + N)   (3)

Δε_ij(t + N) = Σ_{k=1}^{M} Δε_ij^k   (4)
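For concreteness, the following is a minimal, self-contained sketch (not the authors' compound algorithm) of the basic ant decision rule of formula (2) and the pheromone update of formulas (3)-(4), written for a single ant on a toy distance matrix; the node count, the parameter values, and the distances are illustrative assumptions, and the update is a single-ant simplification of the sum over M ants.

```c
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define N_NODES 6     /* demand points incl. depot, illustrative */
#define ALPHA   1.0   /* information heuristic factor (alpha)    */
#define BETA    2.0   /* expected heuristic factor (beta)        */
#define PHI     0.8   /* pheromone persistence (phi)             */
#define Q       1.0   /* pheromone deposit constant              */

static double tau[N_NODES][N_NODES];  /* pheromone epsilon_ij            */
static double eta[N_NODES][N_NODES];  /* visibility gamma_ij = 1/d(i,j)  */
static double dist[N_NODES][N_NODES]; /* travel distances                */

/* Formula (2): roulette-wheel choice of the next unvisited demand point. */
static int choose_next(int i, const int visited[N_NODES])
{
    double w[N_NODES], sum = 0.0;
    for (int j = 0; j < N_NODES; j++) {
        w[j] = (visited[j] || j == i) ? 0.0
             : pow(tau[i][j], ALPHA) * pow(eta[i][j], BETA);
        sum += w[j];
    }
    if (sum <= 0.0) return -1;                     /* nothing left to visit */
    double r = ((double)rand() / RAND_MAX) * sum;
    int pick = -1;
    for (int j = 0; j < N_NODES; j++) {            /* roulette selection */
        if (w[j] <= 0.0) continue;
        pick = j;
        r -= w[j];
        if (r <= 0.0) break;
    }
    return pick;
}

/* Formulas (3)-(4): evaporate and deposit pheromone along one ant's tour. */
static void update_pheromone(const int tour[N_NODES + 1], double length)
{
    for (int i = 0; i < N_NODES; i++)
        for (int j = 0; j < N_NODES; j++)
            tau[i][j] *= PHI;                      /* evaporation */
    for (int s = 0; s < N_NODES; s++)
        tau[tour[s]][tour[s + 1]] += Q / length;   /* deposit: shorter tours deposit more */
}

int main(void)
{
    int tour[N_NODES + 1], visited[N_NODES] = {0};
    double length = 0.0;

    for (int i = 0; i < N_NODES; i++)              /* toy symmetric distances */
        for (int j = 0; j < N_NODES; j++) {
            dist[i][j] = (i == j) ? 1.0 : 1.0 + abs(i - j);
            tau[i][j]  = 1.0;
            eta[i][j]  = 1.0 / dist[i][j];
        }

    tour[0] = 0; visited[0] = 1;                   /* start at the depot */
    for (int s = 1; s < N_NODES; s++) {
        tour[s] = choose_next(tour[s - 1], visited);
        visited[tour[s]] = 1;
        length += dist[tour[s - 1]][tour[s]];
    }
    tour[N_NODES] = 0;                             /* return to the depot */
    length += dist[tour[N_NODES - 1]][0];

    update_pheromone(tour, length);
    printf("tour length = %.2f\n", length);
    return 0;
}
```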

According to the logistics distribution process of the water conservancy green product supply chain, the distribution vehicles are determined, and the logistics distribution model of the supply chain is constructed by building the tabu list.

2.2 Obtain the Competitive Relationship Between Supply Chains

Assume that d_j represents the node set of the water conservancy green product supply chain network, s_g represents the node set of transportation, warehousing, and manufacturing, b_i represents the supply chain system composed of core suppliers, and s_h represents the uncertainty of water conservancy green product supply. Formula (5) then gives the relationship between water conservancy green product storage and the multi-level supply chain:

r_p = (l_p − m_o)/s_g + d_j · (s_h · b_i)/(g_i · {p*}) + r_u   (5)

In the formula, l_p represents the storage demand issued by the supplier according to the production progress, m_o represents the facility resources available for the i-th node of class r_u water conservancy green products to execute task t in cycle p*, and g_i represents the link of the storage facility node. Suppose that e_h represents the inventory time of green water product d at supply chain network node ph and k_h represents its inventory area at node ph; Formula (6) is then used to calculate the storage demand status of green water products for each layer of the supply chain:

u_p = p_y/(r_p − c_n) + (k_h × e_h(d, ph))/t*   (6)

where c_n represents the transportation cost of water conservancy green product logistics storage and p_y represents the state vector of water conservancy green product t* being consumed. Assume that I* represents the set of logistics warehouses of green water products, J* represents the set of suppliers of green water products, K_u represents the middlemen, μ_e represents the supply and demand relationship of green water products, and E_I represents the service relationship of green water product logistics; Formula (7) is then used to form the supply chain competition network of green water products:

G_K = ((E_I + K_u)/μ_e) × J* × I*   (7)

Assume that τ_l represents the circulation status of green water products, l_p represents the service status of the warehouse, m_p represents the sales status of the middleman, the middleman places the water conservancy green products χ required by the market in the logistics warehouse, and ∂_j represents the types of water conservancy green products required by the market. Formula (8) is used to give the direct competition relationship among the members at each node of the supply chain network:

b_g = (∂_j × τ_l)/(m_p(χ) × l_p)   (8)

Assume that l_p* represents the fixed inventory cost of the supply and demand goods in each market and l_y* represents the competition behavior between the warehouses or suppliers in each layer. Formula (9) is then used to obtain the non-cooperative competition relationship between the supply chain nodes of the same layer:

e_j = ((l_y* + l_p*)/P_o) × ∂_l   (9)

where P_o represents the maximum storage period of each water conservancy green product and ∂_l represents the supply and demand capacity limit of the various water conservancy green products. Suppose that q_kl represents the non-negative quantity of water conservancy green product l stored by warehouse k*, u_p represents the storage vector of warehouse k*, and Q_G represents the distance from storage facility k** to the destination. Formula (10) is then used to obtain the operation cost of water conservancy green product logistics warehousing:

p_k = ({i*, i**} + Q_G)/(q_kl · u_p) ± e_j ∓ b_g G_K   (10)

In the process of balanced scheduling of the water conservancy green product supply chain network, the decision theory model is first integrated to form the supply chain competition network of water conservancy green products [7], the direct competition relationship among the members of each network node is given, the non-cooperative competition relationship between supply chain nodes of the same layer is obtained, the warehousing cycle of each circulating water conservancy green product is given, and the logistics warehousing operation cost is obtained. This lays a foundation for the balanced scheduling of the supply chain network of green water products.

2.3 Prioritization Weight of Supply Chain Network Scheduling

In the process of balanced scheduling of the water conservancy green product supply chain network, the scheduling demands of the various network nodes differ, so the network scheduling priorities of the supply chain must be ranked in order to determine the optimal network scheduling configuration scheme. Usually a priority weight ranking method is used, measured with subjective evaluation or the comparatively objective AHP method; such methods involve a high degree of randomness and subjectivity, and the information obtained through in-depth analysis of the data can differ considerably from the actual situation. In view of these problems, the principles of informatics are used: the scheduling demand and the quantity of green water products are taken as indicators, each supply chain network with backup plans is selected as a sample, the optimal order of the supply chain network backup plans is taken as the sample characteristic, and the weights are ranked according to the priority attribute values of the supply chain network backup plans [8]. The specific process is as follows. Suppose that the observation value h_ij of the priority attribute d_j of green water product supply chain network scheduling belongs to category l; the measure of d_j is then Formula 11:

V = (v_ij1, v_ij2, ···, v_ijl)   (11)

where 0 ≤ v_ijl ≤ 1 and the components of V sum to 1 over the l categories. If the priority weight ranking of water conservancy green product supply chain network scheduling is regarded as the selection of samples, then the weight information that d_j contributes to sample h_i in each of the l categories is 1/l, which shows that attribute d_j does not play a decisive role in the priority ranking of supply chain network sample h_i. On the contrary, if exactly one component of v_ijl is 1 and the rest are 0, the entropy value of the supply chain network scheduling task is 0, indicating that attribute d_j is important. Therefore, Formula 12 is obtained:

x_ij = 1 + (1/lg l) Σ_{l=1}^{L} v_ijl lg v_ijl   (12)


It can be seen that 0 ≤ x_ij ≤ 1, and that when x_ij is large the entropy of the supply chain network scheduling task is small; the two are closely related. When x_ij = 1, the weight of d_j places the entropy entirely in one class and the entropy in the other classes is 0, which shows that d_j has a great influence on the priority ranking of the supply chain network scheduling of h_i. When x_ij = 0, v_ijl = 1/l and d_j contributes nothing to the weight ranking of supply chain network sample h_i, so the redundant attribute d_j should be removed. The more concentrated the values of v_ij are, the greater the effect of attribute d_j on the priority ranking of h_ij. Formula 13:

v_ij = x_ij / Σ_{j=1}^{m} x_ij   (13)

Then 0 ≤ v_ij ≤ 1 and Σ_{j=1}^{m} v_ij = 1; the greater v_ij is, the greater the influence of d_j on the priority weight ranking of h_ij. Therefore, the priority weight vector of supply chain network scheduling is Formula 14:

v_i = (v_i1, v_i2, ···, v_im)^T   (14)
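A small, self-contained numerical sketch of the entropy-style weighting in formulas (12)-(13) is given below (an illustration, not the paper's code): the number of attributes, the number of categories, and the membership measures are hypothetical values chosen to show how a concentrated measure yields a high score while a uniform measure yields a score near zero, and how the scores are normalized into the weight vector of formula (14).

```c
#include <stdio.h>
#include <math.h>

#define M 3   /* number of scheduling attributes (illustrative)    */
#define L 4   /* number of categories per attribute (illustrative) */

/* x_ij of formula (12); v[] is a membership measure that sums to 1 over the
 * L categories (0 * lg 0 is treated as 0). */
static double attribute_score(const double v[L])
{
    double s = 0.0;
    for (int l = 0; l < L; l++)
        if (v[l] > 0.0) s += v[l] * log10(v[l]);
    return 1.0 + s / log10((double)L);
}

int main(void)
{
    /* Hypothetical membership measures for one sample and M attributes:
     * a concentrated vector (decisive attribute) scores close to 1,
     * a uniform vector (uninformative attribute) scores close to 0. */
    double v[M][L] = {
        {1.00, 0.00, 0.00, 0.00},
        {0.25, 0.25, 0.25, 0.25},
        {0.70, 0.10, 0.10, 0.10},
    };
    double x[M], total = 0.0;

    for (int j = 0; j < M; j++) {
        x[j] = attribute_score(v[j]);
        total += x[j];
    }
    for (int j = 0; j < M; j++)        /* formula (13): normalise into weights */
        printf("attribute %d: score %.3f, weight %.3f\n",
               j + 1, x[j], x[j] / total);
    return 0;
}
```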

On the basis of the determined priority attributes of supply chain network scheduling, the entropy values of the scheduling tasks are calculated, and the scheduling priority weights are ranked by combining them with the priority weight vector of supply chain network scheduling.

2.4 Optimize the Supply Chain Network of Green Water Products

According to the priority weight ranking of supply chain network scheduling, the water conservancy green product supply chain network is optimized and its optimal planning path is analyzed. The specific process is as follows. Given the population size of the compound ant colony algorithm, the population is distributed in the supply chain network optimization model, the initial space of the population is taken as the starting point [9], and the initial optimization path element of the population is obtained as Formula 15:

b(j) = exp(−h · u_j)   (15)

Formula 16 is used to modify the initial optimization path elements of the population:

o = ξ · o_max − (o_max − o_min) · b_et · (kter / kter_max)   (16)

where, ξ represents the interference degree parameter of the supply chain network, omax and omin represent the initial parameters of the supply chain network, kter max represents the optimization coefficient of the initial parameters in the supply chain network, and kter represents the number of iterations of the initial parameters in the supply chain network.


Using the fitness function of the supply chain network obtained above, the optimal logistics distribution path in the supply chain network model is obtained. This process modifies the initialization path of the population in the compound ant colony algorithm, as shown in Formula 17:

b(j) = ξ(1 − φ)/b(j) + b(j)   (17)

where φ represents the update interval of the population in the supply chain network. By modifying the initialization path of the population in the compound ant colony algorithm, the network equilibrium mechanism of the water conservancy green product supply chain is obtained, as shown in Formula 18:

F_y = (λ_p × ω_y)/s_p + k_h × (j_k)   (18)

Here λ_p represents the status of the supply chain network, ω_y represents the limiting factors of the green water product supply chain network, s_p represents the equilibrium index of the supply chain network, k_h represents the minimum cost of green water product supply chain network optimization, and j_k represents the maximum goal of the supply chain network optimization. Applying the ant colony algorithm to the balanced scheduling of the water conservancy green product supply chain network requires rasterizing the coverage area of the network into a grid and assigning a cost value to each cell. The path search process of the ant colony algorithm is shown in Fig. 1.

Fig. 1. Schematic diagram of the ant colony algorithm search (grid → determine the optional neighbor cells according to the inspection conditions → determine the position at the next moment by roulette selection based on the residual pheromone)

In the ant colony path search diagram shown in Fig. 1, the black cell is the current position of the ant, which can move to any admissible neighboring cell; guided by the pheromone concentration on the grid, the ant colony moves in the best direction. Following this process, the compound ant colony algorithm is used to plan the ants' optimal paths when optimizing the water conservancy green product supply chain network. The specific steps are:

Step 1: Build the initial model of the water conservancy green product supply chain network and calculate the number and coverage area of the ants' optimal and shortest paths;


Step 2: Use the compound ant colony algorithm to initialize the supply chain network optimization model of green water products, set the initial model and the basic information elements of the population, and set the iteration coefficient for processing the green water product supply chain network;
Step 3: According to the status and actual time consumption of each ant's optimization path in the supply chain network optimization model, encode each ant's optimization path and obtain the ant's location;
Step 4: Adjust the parameters of the compound ant colony algorithm according to the interference degree parameter of the water conservancy green product supply chain network; the calculation shows that the shortest path is the optimal path, and a path is better when the population carries more information;
Step 5: Collect the information of each network path in the water conservancy green product supply chain;
Step 6: Repeat the iterative calculation of Steps 1-5 and increase the number of iterations;
Step 7: According to the fitness function of the water conservancy green product supply chain network obtained above, modify the planned results of the ants' optimization path information;
Step 8: Perform the iterative calculation on the optimization path of each ant in the population; if the requirements are not met, return to Step 2.

Finally, the optimal path information of the supply chain network optimization model is obtained. If the network information of the water conservancy green product supply chain matches the population closely, the path is optimal; otherwise the path is long and time-consuming. In other words, the higher the fitness function value of the supply chain network optimization model, the better the optimal path, and the lower the value, the worse the path. According to the above procedure, population fusion calculation is carried out on the supply chain network optimization model based on the compound ant colony algorithm, the optimal path information of the model is obtained from the fusion results, and the optimization design of the water conservancy green product supply chain network is completed; a minimal sketch of this grid search is given below.

2.5 Construct the Network Equilibrium Scheduling Model of Water Conservancy Green Product Supply Chain

Establishing a two-level logistics service supply chain, with the water conservancy green product logistics service integrator as the basis and multiple functional logistics service providers acting jointly, is described and assumed as follows for the supply chain network equilibrium scheduling model. Generally speaking, a logistics service integrator faces the service order demands of multiple scheduling customers at the same time, and each customer's logistics service order contains multiple service process flows. These service processes can usually be divided into personalized service processes and large-scale service processes.
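As referenced at the end of Sect. 2.4 above, the following is a minimal, self-contained sketch (not the authors' implementation) of the grid-based ant path search summarized in Fig. 1 and Steps 1-8: the service area is rasterized into a small cost grid, an ant repeatedly moves to a free neighboring cell by roulette selection on residual pheromone, and the shortest complete path found reinforces its cells. The grid size, cell costs, iteration count, and reinforcement rule are illustrative assumptions.

```c
#include <stdio.h>
#include <stdlib.h>

#define R 5
#define C 5

static double pher[R][C];               /* residual pheromone per cell  */
static const int cost[R][C] = {         /* illustrative traversal costs */
    {1, 1, 1, 9, 1},
    {1, 9, 1, 9, 1},
    {1, 9, 1, 1, 1},
    {1, 1, 1, 9, 1},
    {9, 9, 1, 1, 1}
};

/* One ant walk from (0,0) towards (R-1,C-1); returns the path length in
 * cells (stored in path_r/path_c) or -1 if the ant traps itself. */
static int walk(int path_r[R * C], int path_c[R * C])
{
    int visited[R][C] = {{0}};
    int r = 0, c = 0, len = 0;

    visited[r][c] = 1;
    path_r[len] = r; path_c[len] = c; len++;

    while (!(r == R - 1 && c == C - 1) && len < R * C) {
        const int dr[4] = {1, -1, 0, 0}, dc[4] = {0, 0, 1, -1};
        double w[4], sum = 0.0;

        for (int k = 0; k < 4; k++) {            /* weight each free neighbour */
            int nr = r + dr[k], nc = c + dc[k];
            w[k] = 0.0;
            if (nr >= 0 && nr < R && nc >= 0 && nc < C && !visited[nr][nc])
                w[k] = pher[nr][nc] / cost[nr][nc];
            sum += w[k];
        }
        if (sum <= 0.0) return -1;               /* dead end: discard this walk */

        double x = ((double)rand() / RAND_MAX) * sum;
        int pick = -1;
        for (int k = 0; k < 4; k++) {            /* roulette on pheromone/cost */
            if (w[k] <= 0.0) continue;
            pick = k;
            x -= w[k];
            if (x <= 0.0) break;
        }
        r += dr[pick]; c += dc[pick];
        visited[r][c] = 1;
        path_r[len] = r; path_c[len] = c; len++;
    }
    return (r == R - 1 && c == C - 1) ? len : -1;
}

int main(void)
{
    int best = -1;

    for (int i = 0; i < R; i++)                  /* Step 2: initialise pheromone */
        for (int j = 0; j < C; j++) pher[i][j] = 1.0;

    for (int iter = 0; iter < 200; iter++) {     /* Steps 3-8: iterate the search */
        int pr[R * C], pc[R * C];
        int len = walk(pr, pc);
        if (len < 0) continue;
        if (best < 0 || len < best) best = len;  /* keep the shortest path found */
        for (int s = 0; s < len; s++)            /* reinforce the cells on this path */
            pher[pr[s]][pc[s]] += 1.0 / len;
    }
    printf("best path length found: %d cells\n", best);
    return 0;
}
```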


Because customer order services are strongly similar in transportation content, the integrator can consolidate the logistics supply and demand of customers with high mutual similarity into a complete large-scale operation; independent personalized service measures can also be set according to customers' actual scheduling needs. After receiving a customer's logistics order, the scheduling integrator can achieve optimal scheduling in the following ways:

• analyze the service time and service links of the orders;
• considering the uncertainty of each provider's service time, provide the necessary time range conditions for the functional supply of services in each service link;
• set the optimization targets to be scheduled, such as scheduling cost and scheduling time, and determine the optimal scheduling plan.

When the logistics operation time of the supply chain providers is uncertain, the logistics service integrator needs to consider various application factors comprehensively when conducting centralized logistics scheduling services. In general, the following aspects must be considered. The first is the transportation cost factor: the lowest logistics cost of the supply chain is the primary goal of the integrator's scheduling service. The second is the time requirement factors related to the transportation customers: the integrator needs to consider the requirements of all customers on service order completion time, arrange the existing scheduling plan reasonably, and control the suppliers' delay time as far as possible, so as to provide good scheduling services for logistics customers. The third is the logistics satisfaction factors related to the suppliers: the integrator needs to consider the satisfaction of the other suppliers and arrange the reasonable communication needs between adjacent logistics service processes. Based on the above analysis, assuming that L_1, L_2, ···, L_n represent n different supply chain logistics demand indicators, the cost minimization objective of the supply chain network equilibrium scheduling model can be defined as Formula 19:

R = Σ_{d_0}^{d_n} f × (L_1 + L_2 + ··· + L_n) / (n × W_H²)   (19)

where, d0 represents the minimum logistics cost consumption condition, dn represents the maximum logistics cost consumption condition, and f represents the transmission scheduling coefficient of goods in the supply chain. In general, there are some hard constraints that must be observed in the supply chain network equilibrium scheduling model, including the time impact constraints and time correlation constraints of the supply chain customer demand, as well as the logistics transportation satisfaction constraints related to the supplier. In order to make the scheduling solution method optimal and effective at the same time, so as to realize rapid adaptation to cargo transportation demand in actual operation, it is necessary to reasonably design various logistics expression parameters while studying the impact of scheduling parameters on supply chain scheduling performance.


Therefore, the more unified the planning form of the supply chain logistics coordination objectives is, the more realistic the final scheduling results will be [10], without changing the lead-time decentralized control conditions. Suppose x_0 represents the minimum time constraint of customer demand in the supply chain and x_n represents the maximum time constraint. The goal planning form based on lead-time decentralized control is defined as Formula 20:

Z = Σ_{x_0=0}^{x_n} (R × X_max) / [Σ_{x_0}^{x_n} d_1 + (n_2 − n_1)]   (20)

Among them, Xmax represents the maximum expression value of logistics parameters, d1 represents the satisfaction constraints of supply chain logistics transportation, and n1 and n2 represent the coordinated scheduling application indicators of two different logistics paths. According to the above process, the calculation and processing of the application conditions of various coefficients are realized, and the construction of the network equilibrium scheduling model of the water conservancy green product supply chain is completed.

3 Comparative Analysis of Experiments

3.1 Build Supply Chain Logistics Network

In order to verify the application value of the proposed method in the balanced scheduling of the water conservancy green product supply chain network, the supply chain logistics network shown in Fig. 2 is built as the cargo transport framework required for the experiments of the experimental group and the control group, and the Windows 10 operating system is loaded on the experimental hosts. The supply chain logistics network in Fig. 2 includes n manufacturers and n recyclers; there is competition among members of the same kind and there are transactions between members of adjacent layers. Manufacturers use raw materials to produce new products and use waste products for remanufacturing; new products and remanufactured products are low-carbon products with a certain degree of greenness and are sold to the various demand markets. The recyclers simply process the waste products produced by consumers and resell them to the manufacturers for use in the remanufacturing process. The host of the experimental group runs the method in this paper, while the two hosts of the control group run the scheduling method considering consolidation decisions and the scheduling method based on Benders decomposition, respectively. In the experiment, the maximum cargo transportation volume is 10 Mt and there are 10 logistics transportation paths. Under the same experimental environment, the specific changes of the various experimental indicators are analyzed.

Fig. 2. Supply Chain Logistics Network (Vendor 1, Vendor 2, …, Vendor n → goods input → Supply Chain Headquarters → logistics transportation / goods export → Customer 1, Customer 2, …, Customer n)

3.2 Setting Evaluation Indicators

In order to prove the feasibility of the proposed method, the experiment is divided into two parts. The first part specifies evaluation indicators to verify the performance of the proposed method. In the second part, the scheduling method considering the merger decision and the scheduling method based on Benders decomposition are used as comparison methods for joint analysis, and the comprehensive effectiveness of the different methods for the balanced scheduling of the water conservancy green product supply chain network is verified in terms of scheduling responsiveness and storage facility utilization. Suppose that rg represents the maximum volume of goods supplied by intermediaries in the warehousing link, lg represents the maximum volume of goods supplied by each layer of the supply chain to intermediaries, and rh represents the volume of goods flowing from intermediaries to the market. Formulas (21)–(23) are then used to calculate the supply chain equilibrium rg, the scheduling responsiveness μo, and the storage facility utilization μp of water conservancy green products:

$$r_{g}=\frac{w_{d} \times l_{g}}{r_{h}} \tag{21}$$

$$\mu_{o}=\frac{r_{g} \times l_{g} \pm w_{d}}{r_{h}} \tag{22}$$

$$\mu_{p}=\frac{\mu_{o} \times l_{g}}{r_{h}} \times 100\% \tag{23}$$

Among them, the higher the value of water conservancy green product supply chain equilibrium rg , dispatching responsiveness μo , and storage facility utilization μp , the better the balanced dispatching effect of water conservancy green product supply chain network.
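For illustration, a minimal sketch of how the three indicators in formulas (21)–(23) might be computed; the volumes used below are placeholders, not experimental values.

```python
# Sketch of the evaluation indicators in formulas (21)-(23).
# Symbols follow the text: wd, lg, rg, rh; the numbers are hypothetical.

def supply_chain_equilibrium(w_d, l_g, r_h):
    """Formula (21): r_g = (w_d * l_g) / r_h."""
    return (w_d * l_g) / r_h

def scheduling_responsiveness(r_g, l_g, w_d, r_h, sign=+1):
    """Formula (22): mu_o = (r_g * l_g ± w_d) / r_h; 'sign' selects the ± branch."""
    return (r_g * l_g + sign * w_d) / r_h

def storage_utilisation(mu_o, l_g, r_h):
    """Formula (23): mu_p = mu_o * l_g / r_h * 100%."""
    return mu_o * l_g / r_h * 100.0

if __name__ == "__main__":
    w_d, l_g, r_h = 0.6, 8.0, 10.0          # hypothetical goods volumes
    r_g = supply_chain_equilibrium(w_d, l_g, r_h)
    mu_o = scheduling_responsiveness(r_g, l_g, w_d, r_h)
    mu_p = storage_utilisation(mu_o, l_g, r_h)
    print(r_g, mu_o, mu_p)
```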


3.3 Result Analysis

When the network scheduling demand of the water conservancy green product supply chain is 155, the supply chain balance, scheduling responsiveness and storage facility utilization rate of the three methods during scheduling are tested respectively. The supply chain equilibrium test results are shown in Fig. 3.

Fig. 3. Supply Chain Equilibrium Test Results (supply chain balance/% versus service demand/piece for the method in this paper, the scheduling method considering merging decision, and the scheduling method based on Benders decomposition)

It can be seen from the experimental results in Fig. 3 that, with the increasing demand for services, the proposed method for network equilibrium scheduling of water conservancy green product supply chain can promote all suppliers in the water conservancy green product supply chain to reach the optimal stable equilibrium state, thus ensuring the effective utilization of water conservancy green products. The dispatching responsiveness test results are shown in Fig. 4. The results in Fig. 4 show that, compared with the scheduling method considering merger decision and the scheduling method based on Benders decomposition, the method in this paper is more responsive in the network balanced scheduling of water conservancy green product supply chain, which can significantly reduce the logistics cost, improve the economic benefits of logistics, and achieve the overall balance of the supply chain of water conservancy green products for logistics warehousing. The storage facility utilization test results are shown in Fig. 5. According to the results in Fig. 5, when the scheduling method considering consolidation decision and the scheduling method based on Benders decomposition are adopted, the utilization rate of storage facilities is low, below 70%. However, when the method in the paper is adopted, the utilization rate of storage facilities can be increased to more than 80%, which improves the layout of logistics and warehousing, and greatly improves the economic and social benefits of logistics.

Fig. 4. Dispatch Responsiveness Test Results (dispatch responsiveness/% versus service demand/piece for the three methods)

Fig. 5. Storage Facility Utilization Test Results (utilization rate of storage facilities/% versus service demand/piece for the three methods)

4 Conclusion

This paper studies a network equilibrium scheduling method for the water conservancy green product supply chain based on the compound ant colony algorithm. Based on the logistics distribution model of the water conservancy green product supply chain, the competitive relationship between supply chains is obtained. Based on the priority weight ranking of supply chain network scheduling, population fusion calculation is carried out with the compound ant colony algorithm to realize the balanced scheduling of the water conservancy green product supply chain network. The results show that the method performs better in the network equilibrium scheduling of


the water conservancy green product supply chain. However, there are still many deficiencies in this research. In future research, we hope to introduce the particle swarm optimization algorithm to optimize the compound ant colony algorithm and further improve the network equilibrium scheduling performance of the water conservancy green product supply chain.

Acknowledgement. Zhejiang Provincial Department of Water Resources Science and Technology Project: Research on the Mechanism and Path of Fintech-Driven Green Innovation and Development of Water Resources (RC2112); 2021 Fundamental Research Project of the University: Research on the Impact of Green Finance on the Ecological Development of Water Conservancy Economy under Digital Transformation (FRF21PY005).

References

1. Goodarzian, F., Hoseini-Nasab, H., Toloo, M., Fakhrzad, M.B., et al.: Designing a new medicine supply chain network considering production technology policy using two novel heuristic algorithms. RAIRO-Oper. Res. 55(2), 1015–1042 (2021)
2. Zl, A., Jia, W.B.: Supply chain network equilibrium with strategic supplier investment: a real options perspective. Int. J. Prod. Econ. 208, 184–198 (2019)
3. Tang, L., He, C., Jing, K., Tan, Z., Qin, X.W., et al.: Supply chain network scheduling by considering merge decision with random order interference. Chin. J. Manag. Sci. 27(4), 91–103 (2019)
4. Gao, J., Zheng, L.: Optimizing coal supply chain network maintenance scheduling based on Benders decomposition. J. Wuhan Univ. Technol. Inf. Manag. Eng. Ed. 42(3), 227–232, 259 (2020)
5. Duan, C., Yao, F., Xiu, G., et al.: Multi-period closed-loop supply chain network equilibrium: perspective of marketing and corporate social responsibility. IEEE Access 9, 1495–1511 (2020)
6. Dao-ping, W., Chao, Z., Yan-ping, C.: Study on supply chain network equilibrium considering quality control and loss aversion. Oper. Res. Manag. Sci. 28(8), 76–85 (2019)
7. Lu, J., Fan, Z., Liu, C.: Supply chain information oriented mining model based on TF-IDF algorithm. Comput. Simul. 38(7), 153–156, 349 (2021)
8. Li, J., Wang, Z.X.: Deriving priority weights from hesitant fuzzy preference relations in view of additive consistency and consensus. Soft Comput. 23(24), 13691–13707 (2019)
9. Aliyu, M., Murali, M., Gital, A.Y., et al.: A multi-tier architecture for the management of supply chain of cloud resources in a virtualized cloud environment: a novel SCM technique for cloud resources using ant colony optimization and spanning tree. Int. J. Inf. Syst. Supply Chain Manag. 14(3), 1–17 (2021)
10. Peng, Y., Xu, D., Li, Y., et al.: A product service supply chain network equilibrium model considering capacity constraints. Math. Probl. Eng. 2020, 1–15 (2020)

Weak Association Mining Algorithm for Long Distance Wireless Hybrid Transmission Data in Cloud Computing

Simayi Xuelati(B), Junqiang Jia, Shibai Jiang, Xiaokaiti Maihebubai, and Tao Wang

State Grid Xinjiang Electric Power Co., Ltd. Information and Communication Company, Urumqi 830001, China
[email protected]

Abstract. Long distance wireless hybrid transmission data is vulnerable to noise, resulting in low data mining accuracy, large mining error and poor mining effect. Therefore, a weak association mining algorithm for remote wireless hybrid transmission data under cloud computing is proposed. The moving average method is used to eliminate noise data, and the attribute values of continuous data are divided into discrete regions so that they form a unified conversion code for data conversion. The Bayesian estimation method is used for static fusion to eliminate uncertain data with noise, and a rough membership function is constructed to discriminate the truth value, completing the data preprocessing. According to the principle of relationship matching between data, data feature decomposition is realized. The non-sequential Monte Carlo simulation sampling method is adopted to build the data loss probability evaluation model and integrate the data association rules. In the cloud computing setting, permission item sets are generated, and the rationality of association rules is judged by the minimum support. The dynamic programming principle is used to build the mining model, and the improved DTW algorithm is used to read and analyze the structured, semi-structured and unstructured data to obtain the weak association mining results of mixed transmission data. The experimental results show that the algorithm can completely mine data sets, and the mining error is less than 0.10, with good mining results.

Keywords: Cloud Computing · Long Distance Wireless · Mixed Transmission · Weak Association Mining

1 Introduction

The continuous progress of information technology and science and technology has promoted the development of new information technologies in different fields. Data mining technology is a new technology in the information age. In the Internet, data mining refers to collecting high-value models and rules, using data analysis tools to analyze the collected data models and data information, obtaining the differences and similarities between the data models and data information according to the analysis results,


and using the above results to make predictions on the Internet. With the development of communication networks and the information age, great changes have taken place in the network mode and the application fields of communication networks. Mining the valuable content and data in a communication network can improve its utilization value. According to the analysis results, the data rules and data system of the communication network are optimized to establish a safe and stable communication environment and provide people with high-quality and efficient communication services. Therefore, it is of great significance to mine the weak associations of mixed data in communication networks.

Long distance wireless hybrid transmission under cloud computing brings a new development direction to the network industry, but it also brings unprecedented challenges. With the rapid development of information technology, a large amount of information is generated under the complex requirements of resource management, and information integration and safe operation are promoted through the use of advanced information and communication technologies [1]. However, because various types of network services are required and different functions are used in different periods, the data models and information become inconsistent. Massive wireless mixed transmission data makes it difficult to flexibly realize information sharing and to provide the data needed for the development of the intelligent Internet, which has become a bottleneck restricting the rapid improvement of Internet automation.

At present, there are two main data association mining methods. Reference [2] proposed a fuzzy association rule mining method based on GSO-optimized MF for uncertain data. Firstly, the uncertain data is represented by a ternary linguistic representation model. Then, given an initial MF, and taking the maximization of the support of fuzzy itemsets and of semantic interpretability as the fitness function, the optimal MF is obtained through the optimization learning of the GSO algorithm. Finally, according to the best MF obtained, an improved FFP-growth algorithm is used to mine fuzzy association rules from the uncertain data. This method can adaptively optimize the MF according to the data set, so as to effectively mine association rules from uncertain data; however, its efficiency and operation speed are low when dealing with large amounts of data. Reference [3] proposed an algorithm based on Apriori association rules. The specific method is to analyze each band first and then use these bands to generate stronger correlations. To apply the Apriori association rule algorithm, multiple databases must first be scanned and a large number of candidate objects generated, which increases the time and space complexity of the Apriori algorithm; when mining large amounts of data, its performance is poor. Therefore, a weak association mining algorithm for long-distance wireless hybrid transmission data under cloud computing is proposed.

2 Data Preprocessing of Long-Distance Wireless Hybrid Transmission Under Cloud Computing

The original remote wireless hybrid transmission data under cloud computing has serious quality problems, such as data loss, data redundancy and noise data, which reduce the data mining efficiency of remote wireless hybrid transmission. Under the condition of ensuring data integrity, reasonable and effective data preprocessing is the basis of improving data mining efficiency.


2.1 Noise Data Elimination

Mixed transmission data cleaning removes and repairs the incomplete and noisy data. In the original database, the average value is usually used to fill in incomplete data, and this process is implemented with the moving average method. The moving average method takes the average value of the data over a certain stage as the prediction value for a certain future period, and this value is used as the later mining data. The formula for calculating the moving average is:

$$a=\frac{l}{n} \tag{1}$$

In formula (1), l represents the moving length and n is the total number of values used by the moving average. So that all data in the database have the same attributes, the transformation rules need to be defined and unified before mining. Because noise data are illogical, deviating data that often affect the accuracy of data mining, data smoothing technology is used to eliminate them.

2.2 Data Conversion

Data conversion mainly includes data normalization and continuous data discretization. Continuous data is discretized by compressing the original data to reduce data input and output. The attribute values of continuous data are divided into several discrete regions, and values are taken from them in order to reduce the number of attribute values and facilitate data mining. The data conversion process unifies different data formats to form a unified conversion code [4]. Massive data usually contains detailed records, while the data in the data warehouse is used for analysis, so the data can be aggregated according to the database granularity without keeping the detailed records. Different data stores have different storage rules, and these rules cannot be implemented simply by adding or subtracting; the data need to be calculated and stored in the database for subsequent mining analysis and use.

2.3 Data Truth Screening

Mixed transmission data is composed of network data from different periods, systems or departments, including structured data (such as department data, XML, JSON, etc.), unstructured data (such as remote sensing images, research plans, drawings, etc.), and semi-structured data (such as policies and regulations listed on web pages). In order to facilitate data analysis and integration, data attributes, storage methods, read/write methods, etc. need to be preprocessed [5]. The data integration mode of mixed transmission needs to be further studied to realize deeper recognition of the truth value. Data conflict refers to the problem that multiple sensors provide values with a large margin at the same time and that redundant data of the same attribute differ. The usual way to resolve the conflict is to minimize data redundancy and find the true value among multiple conflicting data; the true value includes the unity of attributes. Because there are replication, dependency and numerical similarity between data, Bayesian estimation is used for static fusion; the information of the algorithm conforms to the probability distribution and


can well handle uncertain data with noise [6]. The Bayesian estimation method optimizes the data by setting conditions in advance, integrates the information of each sensor using the probability principle, and expresses it with a probability density function. The basic process assumes that a decision system contains a non-empty object set, a non-empty condition attribute set and a non-empty decision attribute set. An approximation of the existing knowledge and the indiscernibility relation x is constructed, which can be expressed as:

$$x S=\left\{S \mid[S]_{x} \cap S \neq \emptyset\right\} \tag{2}$$

In formula (2), [S]x is the equivalence class of x. In rough set theory, according to the knowledge and the calculation results of the indiscernibility relations, the x attribute of the decision set is analyzed and expressed by the rough membership function:

$$\gamma_{S}^{x}(S)=\frac{\left|[S]_{x} \cap S\right|}{\left|[S]_{x}\right|} \tag{3}$$

The true value is discriminated according to the rough membership function determined above. Assume that the decision set is S1, S2, ..., Sm, and the observation result obtained through the sensor observation data is G; the posterior probability is then calculated as:

$$P\left(S_{m} \mid G\right)=\frac{P\left(S_{m} G\right)}{P(G)}=\frac{P\left(G \mid S_{m}\right) P\left(S_{m}\right)}{\sum_{m=1}^{i} P\left(G \mid S_{m}\right) P\left(S_{m}\right)} \tag{4}$$

In formula (4), P(Sm ) represents a prior probability; P(G|Sm ) stands for conditional probability; i represents the number of sensors, and the true value is identified according to the decision posterior probability.
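A minimal sketch of the truth-screening step described by formulas (3) and (4), assuming the equivalence classes, priors and likelihoods are already available; all values below are invented for illustration.

```python
# Sketch of truth screening: rough membership (formula 3) and the Bayesian
# posterior over candidate decisions (formula 4). Inputs are hypothetical.

def rough_membership(equiv_class, target_set):
    """gamma = |[S]_x ∩ S| / |[S]_x| (formula 3)."""
    equiv_class, target_set = set(equiv_class), set(target_set)
    return len(equiv_class & target_set) / len(equiv_class)

def posterior(priors, likelihoods):
    """P(S_m | G) ∝ P(G | S_m) P(S_m), normalised over all candidates (formula 4)."""
    joint = {m: priors[m] * likelihoods[m] for m in priors}
    z = sum(joint.values())
    return {m: v / z for m, v in joint.items()}

if __name__ == "__main__":
    # hypothetical decision candidates S1, S2 with priors and observation likelihoods
    priors = {"S1": 0.6, "S2": 0.4}
    likelihoods = {"S1": 0.2, "S2": 0.7}   # P(G | S_m) for the observed reading G
    post = posterior(priors, likelihoods)
    truth = max(post, key=post.get)        # decision with the highest posterior
    print(post, "->", truth)
    print(rough_membership({1, 2, 3, 4}, {2, 4, 6}))
```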

3 Weak Association Mining of Mixed Transmission Data in Cloud Computing

In the process of mining massive data in the cloud computing environment, it is easy to have a large amount of redundant data, which reduces the correlation between data and cannot effectively complete the mining of weak associations [7]. Therefore, a weak association mining method based on a weak clustering algorithm for massive data in the cloud computing environment is proposed, which decomposes data features through data description features and fuses all data according to the data features. Based on the association decision probability, the massive data in the cloud computing environment is effectively divided, and the calculation of the association probability of all data features is completed. Attribute elements are classified by the weak clustering method, and quantitative elements are converted into category types. The data after clustering is mined by the weakened association rule method [8].


3.1 Principle of Relationship Matching Between Data

Massive data in the cloud computing environment are related to each other, and the focus of this research is to mine the parts associated with other data by describing the information expressed by the data. The overall principle is as follows. First, the description characteristics of the data are obtained and the data feature decomposition is completed, yielding the following feature matrix:

$$R=\left[\begin{array}{cccc} R_{11} & R_{12} & \cdots & R_{1 j} \\ R_{21} & R_{22} & \cdots & R_{2 j} \\ \cdots & \cdots & \cdots & \cdots \\ R_{l 1} & R_{l 2} & \cdots & R_{l j} \end{array}\right] \tag{5}$$

In Formula (5), l represents the amount of data and j represents the number of data types. The data characteristic matrix is transformed, and the average value of the obtained characteristics is:

$$A=l^{2} \cdot \sum_{j=2}^{j} R_{l j} \tag{6}$$

All data are fused according to the data characteristics [9]. The massive data in the cloud computing environment are effectively divided based on the association decision probability, and the association probability of all data features is calculated by the following formula:

$$f\left(g_{l} \cdot d_{j}\right)=\left[f\left(g_{l}\right)+f\left(g_{l} \cdot \delta_{j}\right)\right] / f\left(d_{j}\right) \tag{7}$$

In Formula (7), f(dj) represents the prior probability of the data and f(gl · δj) represents the corresponding conditional probability. Because the information objects in the remote wireless hybrid transmission database under cloud computing are described by the same remote wireless hybrid transmission data specification, the internal nodes of the remote wireless hybrid transmission data standard tree are, in different cases, part of the data specification; the difference lies in the element values at the nodes [10]. The approximate matching process of long-distance wireless mixed transmission data is shown in Fig. 1.

As shown in Fig. 1, comparing the query tree with the standard tree, the nodes corresponding to node b in the multimodal data standard tree are b1, b2 and b3, while the nodes corresponding to node b in the query tree are b1 and b2. Accordingly, it is unnecessary to match the metadata of object u2 with the query tree data when matching the query tree data with the standard tree data. When the query tree matches the metadata tree of u3, since no node on the subtree can match a node in the query tree, it is unnecessary to consider matching the subtree with node b3 as its root. Before querying, the query tree is first matched with the multimodal data standard scheme tree of the resource target database, and the matching information (i.e., preprocessing information) of the associated nodes is recorded. By analyzing the obtained information, a large number of non-associated node matches can be avoided in later matching between the query tree and the multimodal data standard tree, avoiding unnecessary duplication.
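The following sketch illustrates the feature-decomposition and association-probability computation of formulas (5)–(7); the feature matrix is a toy example, and the bracketing of formula (7) follows the reconstruction given above rather than a definitive reading.

```python
# Sketch of feature decomposition and association probability (formulas 5-7).
# The matrix values and probabilities are toy numbers for demonstration only.

import numpy as np

def feature_average(R):
    """Average-style statistic over the feature matrix, in the spirit of formula (6)."""
    l = R.shape[0]                       # number of data items
    return (l ** 2) * R[-1, 1:].sum()    # sums R_{l,j} for j >= 2 (1-based), as in the text

def association_probability(p_g, p_g_delta, p_d):
    """Formula (7) as reconstructed: f(g_l·d_j) = [f(g_l) + f(g_l·δ_j)] / f(d_j)."""
    return (p_g + p_g_delta) / p_d

if __name__ == "__main__":
    R = np.array([[0.1, 0.3, 0.2],
                  [0.4, 0.1, 0.5],
                  [0.2, 0.6, 0.3]])      # l = 3 data items, j = 3 feature types
    print(feature_average(R))
    print(association_probability(p_g=0.3, p_g_delta=0.12, p_d=0.6))
```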


Fig. 1. Approximate matching process of long distance wireless hybrid transmission data (metadata standard tree: b → b1 {b11, b12}, b2 {b21, b22}, b3 {b31, b32}; object tree: u → u1 {u11, u12}, u2 {u21, u22}, u3 {u31, u32})
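A small sketch of the pruning idea behind Fig. 1: subtrees of the standard tree that share no node label with the query tree are skipped before any detailed metadata matching. The tree contents below are illustrative only.

```python
# Sketch: prune standard-tree subtrees that cannot match the query tree,
# so only the remaining subtrees undergo detailed metadata matching.

def collect_labels(tree):
    """tree: {node: [children]}; returns the set of all node labels in the subtree map."""
    labels = set(tree)
    for children in tree.values():
        labels.update(children)
    return labels

def matchable_subtrees(standard_subtrees, query_tree):
    """standard_subtrees: {root_label: subtree_dict}. Keep only subtrees sharing
    at least one label with the query tree; the others need not be matched at all."""
    query_labels = collect_labels(query_tree)
    return {root: sub for root, sub in standard_subtrees.items()
            if collect_labels(sub) & query_labels}

if __name__ == "__main__":
    standard = {"b1": {"b1": ["b11", "b12"]},
                "b2": {"b2": ["b21", "b22"]},
                "b3": {"b3": ["b31", "b32"]}}
    query = {"b1": ["b11"], "b2": ["b22"]}            # no node of the b3 subtree appears
    print(list(matchable_subtrees(standard, query)))  # -> ['b1', 'b2']
```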

3.2 Mining Model Construction

Before data mining, it is necessary to evaluate the remote wireless hybrid transmission under cloud computing. However, the actual remote wireless hybrid transmission has randomness and dynamic volatility. Therefore, a probability evaluation model for transmission data loss under mixed fluctuation conditions is designed to achieve data loss analysis under combined-factor interference. Data integration is required before analysis. According to the results of data preprocessing and truth screening, the overall plan of data integration based on the decision rough set model is designed, as shown in Fig. 2.

Fig. 2. Data integration scheme (data layer: structured, unstructured and semi-structured data; exchange layer: data node exchange, exchange standard, platform management center, data exchange service center; integration layer: data center; user layer: human-computer interaction)


It can be seen from Fig. 2 that the scheme is composed of a data layer, an exchange layer, an integration layer and a user layer, and the data layer contains different data types. The main function of the exchange layer is to realize two-way data transmission. The integration layer is the core of the whole system; its main functions include data conversion, data calls, process control and data routing. The user layer is responsible for user login, authority management and maintenance. On this basis, the non-sequential Monte Carlo simulation sampling method is used to build the data loss probability evaluation model. The specific process is as follows. First, the probability space of the wireless network data is simulated by non-sequential Monte Carlo sampling, where the statistical value is calculated as:

$$x_{i}=\frac{1}{g} \sum_{i=1}^{n} z_{i} \tag{8}$$

In formula (8), g represents a restricted constraint function and z represents a series of random data. After the data loss of the wireless network is constrained, the fluctuation probability of each time point is obtained. According to the daily transmission data volume, loss data volume and loss rate of the wireless network statistics, when the loss data fluctuates, the loss data is calculated using the power flow calculation coefficient, wireless network loss data, transmission data and loss rate. After evaluating the degree of wireless network data loss, big data association rules are defined, the data association rules are integrated, and a mining model based on big data analysis is built. The detailed implementation process is as follows. In the cloud computing setting, permission item sets are generated, association rules are generated according to the minimum support and minimum confidence, and the rationality of the association rules is judged by the minimum support. In the context of cloud computing, the association rules are:

$$D \Rightarrow \frac{\sup (x)}{\sup t_{\min }} \cdot \frac{\sup (x \Rightarrow y)}{p} \tag{9}$$

In Formula (9), sup t_min represents the minimum support threshold, p indicates the association rule item, sup(x) stands for the support, and sup(x ⇒ y) stands for the confidence. According to the above formula, the mining model is built using the principle of dynamic programming:

$$M_{x y}=\max D\begin{cases}E_{x-1, y-1}-J_{x y} \\ E_{x, y-1}+p \\ E_{x-1, y-1}+p\end{cases} \tag{10}$$

In Formula (10), E_{x,y} is the missing-data parameter of the wireless network association rules and J_{x,y} indicates useless weak association rules. According to the above model, multiple data sources are integrated to extract the data related to mining. Through data transformation, the data are unified into a form suitable for mining, and visualization technology is used to provide users with the mining data.
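A minimal sketch of the rule screening and the dynamic-programming update in the spirit of formulas (9) and (10); the thresholds, the E/J tables and the parameter p are invented for illustration.

```python
# Sketch of rule screening by minimum support/confidence and a formula (10)-style
# dynamic-programming update. All numbers are hypothetical.

def rule_is_reasonable(support_x, confidence_xy, sup_min, conf_min):
    """Keep a candidate rule only if both support and confidence clear their minima."""
    return support_x >= sup_min and confidence_xy >= conf_min

def dp_mining_update(E, J, p):
    """M[x][y] takes the best of the three neighbouring states, as in formula (10)."""
    rows, cols = len(E), len(E[0])
    M = [[0.0] * cols for _ in range(rows)]
    for x in range(1, rows):
        for y in range(1, cols):
            M[x][y] = max(E[x - 1][y - 1] - J[x][y],
                          E[x][y - 1] + p,
                          E[x - 1][y - 1] + p)
    return M

if __name__ == "__main__":
    print(rule_is_reasonable(support_x=0.4, confidence_xy=0.75, sup_min=0.3, conf_min=0.7))
    E = [[0.0, 0.1, 0.2], [0.1, 0.3, 0.2], [0.2, 0.1, 0.4]]
    J = [[0.0, 0.0, 0.1], [0.0, 0.1, 0.0], [0.1, 0.0, 0.1]]
    print(dp_mining_update(E, J, p=0.05))
```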


3.3 Mining Process of Transmission Data Weak Association

In the process of weak association mining of the transmitted data, frequent itemsets can be generated through the relationship between time and space, and frequent itemsets can be generated through the minimum collection cycle. On this basis, the DTW algorithm is improved to improve the accuracy of data mining; combined with the improved DTW algorithm, the speed of mining the weak associations of the transmitted data is greatly improved. The detailed steps are as follows.

Step 1: Because the improved DTW algorithm needs many calculation steps in the process of association mining, it occupies a large amount of storage space. To solve this problem, a path for data weak association mining is designed. Set the numbers of sampling points as y1, y2, y3, and divide the data into three dimensional segments, namely one-dimensional [1, y1], two-dimensional [y1 + 1, y2], and three-dimensional [y2 + 1, y3]. The values of y1 and y3 can be expressed as:

$$\begin{cases} y_{1}=\dfrac{\varepsilon c+\gamma-1}{\gamma-\varepsilon} \\ y_{3}=\dfrac{\gamma c-\varepsilon+1}{\gamma-\varepsilon} \end{cases} \tag{11}$$

In formula (11), c represents the number of sampling points, and ε and γ indicate the slopes of two adjacent sides of the parallelogram. When the mining data are not inside the parallelogram, these data are not relevant and need not be mined; otherwise, they are associated and can be mined. According to the mining results, the data sets are collected.

Step 2: Scan all data sets and record the number of occurrences of each data item. Determine whether the temporal and spatial data are in the same dimension according to the requirement definition, and record them in the item header table if they exist.

Step 3: Traverse the data set again, delete the data not in the item header table, and arrange the data in the increasing order of the item header table. Loop over the data set once more to generate a frequent pattern tree. In the frequent pattern tree, the nodes represent spatial and temporal data, while the tree branches represent the numbers of data occurrences.

Step 4: In the circular item header table, search the entries in the frequent pattern tree and the leaf nodes of the entries in descending order, and remove the duplicate node data to obtain a separate tree-structure dataset; this dataset is an associated set.

Step 5: Output the tree data sets of all single paths to form the final result set.

Step 6: First, read and analyze the filtered structured, semi-structured and unstructured data from the hybrid transmission database through the above model. For structured data, establish a database according to the data type and load it directly into the database. Semi-structured data can be divided into two categories according to data type: structured data and unstructured data. For Category 1, create an associated class library and connect it directly to the class library. For the unstructured data in Category 2, use the mapping relationship between different data to embed the corresponding data into these data as additional parameters, achieving a one-to-many direct mapping; then carry out coordinate transformation for the various types of data and partition them with the single-level segmentation algorithm of the artificial neural network. On this basis, structured data and unstructured data are combined into multiple single-level fusion parameters, which are overlapped with the quasi space to obtain the integration results.


Step 7: Regard the integration result as a fuzzy attribute set, and build a fuzzy database from the original database. Let T(x1) and T(x2) denote the support degrees of spatial data x1 and temporal data x2 respectively. The support degree of the rule x1 ⇒ x2 in database H can be expressed as:

$$T\left(x_{1} \Rightarrow x_{2}\right)=\frac{\sum_{i=1}^{m} T_{i}\left(x_{1} \cup x_{2}\right)}{|H|} \tag{12}$$

It can be seen from Formula (12) that in the fuzzy association relationship it is necessary to calculate the fuzzy support degree, that is, the implication degree, which can effectively reduce the mining steps and shorten the mining time. The k-th data implication can be expressed as:

$$\nu\left(x_{1} \Rightarrow x_{2}\right)=I\left[T\left(x_{1}\right), T\left(x_{2}\right)\right] \tag{13}$$

In formula (13), I represents the implication degree operator. By calculating support, frequent itemsets can be determined, and the result is the weak association mining result of mixed data transmission.
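For concreteness, a sketch of the fuzzy support of formula (12) and the implication degree of formula (13); the transaction database and the choice of the implication operator I (a simple minimum here) are assumptions for illustration.

```python
# Sketch of fuzzy support (formula 12) and implication degree (formula 13).
# Transactions and the implication operator are illustrative assumptions.

def support(transactions, itemset):
    """Fraction of transactions in database H containing every item of the itemset."""
    itemset = set(itemset)
    return sum(itemset <= set(t) for t in transactions) / len(transactions)

def rule_support(transactions, x1, x2):
    """Formula (12): support of x1 => x2 as the joint occurrence of x1 ∪ x2 over |H|."""
    return support(transactions, set(x1) | set(x2))

def implication_degree(t_x1, t_x2, operator=min):
    """Formula (13): v(x1 => x2) = I[T(x1), T(x2)]; the operator I is an assumption."""
    return operator(t_x1, t_x2)

if __name__ == "__main__":
    H = [{1, 4, 5}, {3, 4, 5, 6}, {1, 6}, {1, 5}, {2, 4, 6, 7}]
    print(rule_support(H, {4}, {5}))                       # joint support of {4, 5}
    print(implication_degree(support(H, {4}), support(H, {5})))
```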

4 Experimental Analysis

In order to verify the effectiveness of the weak association mining algorithm for long-distance wireless mixed transmission data under cloud computing, the GSO-based MF optimization method (the method of reference [2]) and the Apriori-based association rule algorithm (the algorithm of reference [3]) are used as comparison methods, and the experiments are conducted on the MATLAB platform under a Unix operating system.

4.1 Experimental Platform

As this experiment is based on a laboratory environment, the cloud platform is designed as a private cloud platform for LAN testing, but the core modules can still be used on a public cloud platform. A standard private cloud platform should include at least four platforms: the underlying virtualization platform, the virtual machine management platform, the distributed storage platform and the distributed computing service platform. The architecture of the test private cloud platform designed accordingly is shown in Fig. 3. As can be seen from Fig. 3, the experimental platform provides a place for data integration, through which data can be found in time and uploaded to the response system to generate feedback information.

Fig. 3. Experimental platform (underlying virtualization platform, VM management platform, distributed storage platform, distributed computing service platform, service request)

4.2 Experimental Data Set

The experiment uses a public data set with a total of 2500 documents, each of which contains a picture with corresponding instructions. Each picture and its group of files correspond to a specific category directory, and all information in the category directories can be divided into 15 categories. The SIFT feature description method is used to describe each image as a 128-dimensional feature vector. The text of the dataset is organized into 10 topics. During the experimental test, 2/3 of the data are used as training data and 1/3 as test data. Considering the difficulty of field data acquisition, a data acquisition and analysis system is used, which mainly includes two parts: the field end and the communication end. The field end is used to collect and transmit field data, and the communication end is responsible for field data processing and background data interaction. Its architecture is shown in Fig. 4.

Fig. 4. Experimental data acquisition architecture (field side: surveillance cameras, positioning network system, monitoring data processing unit, video surveillance system, positioning data processing unit, positioning base station; backend: user interface, service terminal, positioning communication unit)


On the field side, cameras, positioning base stations, smart helmets, identification tags and other sensors can be used to collect infrastructure data in real time and upload the collected information to the server, providing good data support for infrastructure management. The data collection results are shown in Table 1.

Table 1. Data collection results

Wireless transmission | Document/KB | Picture/KB
Port # 1 | 75 | 21
Port # 2 | 80 | 22
Port # 3 | 68 | 20
Port # 4 | 63 | 17
Port # 5 | 59 | 18
Port # 6 | 67 | 21
Port # 7 | 66 | 19
Port # 8 | 62 | 18

Take the data in Table 1 as the standard data for experimental verification and analysis.

4.3 Standardized Processing of Experimental Data

According to the data characteristic attribute rules, the data are standardized. The change of the trust interval in this process is shown in Fig. 5.

Fig. 5. Change of trust interval in data integration process (standard value versus serial number for unprocessed and processed data)


It can be seen from Fig. 5 that only the processed value at No. 4 is consistent with the unprocessed value, and it is taken as the best data integration point to verify the data mining effect on the premise of ensuring integration stability.

4.4 Determination of Experimental Indicators

The experimental indicators are the mining integrity rate and the mining error. The mining integrity rate is calculated as:

$$P=\frac{\partial}{\partial^{\prime}} \times 100\% \tag{14}$$

In formula (14), ∂ represents the amount of data mined and ∂′ represents the total data volume. The larger the calculation result, the more complete the data mining result. The mining error is calculated as:

$$e=\frac{\left|\delta_{1}-\delta_{c}\right|+\left|\delta_{2}-\delta_{c}\right|+\cdots+\left|\delta_{\tau}-\delta_{c}\right|}{\tau} \tag{15}$$

In formula (15), τ represents the number of mining runs and δc indicates the information whose data has not been searched. The smaller the calculation result, the more accurate the data association mining result.

4.5 Analysis of Data Integration Results

Through the established experimental platform, data are collected from the open data set, standardized processing is carried out, the experimental indicators are set, and data integration analysis is carried out. To verify the integration effect, the GSO-optimized MF method, the Apriori-based association rule algorithm and the weak association mining algorithm under cloud computing are respectively used for integration. The comparison results are shown in Fig. 6. It can be seen from Fig. 6 that the data integration based on the Apriori association rule algorithm is too decentralized, and the data integration based on the GSO-optimized MF method does not reach the fully integrated state. However, the weak association mining algorithm under cloud computing can integrate all the data together, so it can be seen that the weak association mining algorithm has a good integration effect.

4.6 Analysis of Data Mining Results

The complete mining results of the actual data are shown in Table 2. Based on the data in Table 2, the GSO-optimized MF method, the Apriori association rule algorithm and the weak association mining algorithm under cloud computing are used for mining, as shown in Fig. 7.


Fig. 6. Comparison and analysis of integration effects of different methods (image data/bit): (a) optimization of MF method based on GSO; (b) algorithm of association rules based on Apriori; (c) weak association mining algorithm in cloud computing

It can be seen from Fig. 7 that the instance dataset collected using the GSO based optimization MF method is inconsistent with the actual situation, and its dataset is {1, 2, 3, 4, 5, 6, 7, 9}; The instance dataset obtained based on Apriori association rule algorithm


Table 2. Data complete mining analysis

Item | Data
1 | {1, 4, 5}
2 | {3, 4, 5, 6}
3 | {1, 6}
4 | {1, 5}
5 | {2, 4, 6, 7}
6 | {3, 5, 7}
7 | {2, 5, 6}
8 | {1, 3, 4}

is inconsistent with the actual situation, and its dataset is {1, 2, 3, 4, 5, 6, 8}; The instance dataset obtained by using the weak association mining algorithm under cloud computing is consistent with the actual situation, and its dataset is {1, 2, 3, 4, 5, 6, 7}. It can be seen that the data can be completely mined using this method. For data mining error analysis, the three methods are compared again, and the comparison results are shown in Fig. 8. It can be seen from Fig. 8 that the mining error of GSO-based optimization MF method and Apriori-based association rule algorithm is about 0.32 and 0.35 respectively, while the mining error of weak association mining algorithm under cloud computing is less than 0.10. It can be seen that the mining error using the method studied is small.


Fig. 7. Comparison and analysis of complete data mining results of different methods

Fig. 8. Comparative analysis of data mining errors of different methods (mining error versus iterations/time for the weak association mining algorithm in cloud computing, the MF optimization method based on GSO, and the association rule algorithm based on Apriori)
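For reference, a minimal sketch of how the two indicators defined in formulas (14) and (15) might be computed; the mined counts and values below are placeholders, not the experimental results.

```python
# Sketch of the evaluation indicators: mining integrity rate (formula 14)
# and mining error (formula 15). All inputs are hypothetical.

def mining_integrity_rate(mined_count, total_count):
    """Formula (14): P = mined / total * 100%."""
    return mined_count / total_count * 100.0

def mining_error(mined_values, reference_value):
    """Formula (15): mean absolute deviation of the mined values from the reference."""
    return sum(abs(v - reference_value) for v in mined_values) / len(mined_values)

if __name__ == "__main__":
    print(mining_integrity_rate(mined_count=7, total_count=7))        # complete mining
    print(mining_error([0.92, 0.88, 0.95, 0.90], reference_value=0.9))
```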

5 Conclusion

A weak association mining algorithm for long-distance wireless mixed transmission data under cloud computing is proposed. The long-distance wireless mixed transmission data are preprocessed, and the weak association mining of the transmission data is realized through the principle of data relationship matching and the construction of the mining model. The data weak association mining under cloud computing studied in this paper can improve the transmission quality of long-distance wireless mixed data, eliminate redundant data in long-distance wireless mixed data, and use the implication degree to mine the weak associations of the data, which solves the problems existing in current data weak association mining methods and provides a favorable guarantee for the development of data mining technology. By constructing the rough membership function, the truth value is discriminated; by solving conflict problems, efficient data integration is achieved, which improves the data integration effect to a certain extent. The experimental results show that the weak association mining algorithm under cloud computing can integrate all data together, the obtained instance data set is consistent with the actual situation, and the mining error is less than 0.10, which effectively realizes the weak association mining of long-distance wireless mixed transmission data. While obtaining the above research results, due to the limitation of lost data, there are still some areas to be optimized: (1) the coverage of the mining model should be further expanded, and the main transmission data should be analyzed during the verification process; (2) a unified model transmission mechanism should be established to achieve efficient control of the model.


References

1. Yin, Y., Jia, Y., Wang, C., et al.: Research on multi-load remote wireless power transfer system. Power Electron. 55(09), 139–142 (2021)
2. Zhang, W., Dong, Q.: Fuzzy association rules mining method based on GSO optimization MF in uncertainty data. Appl. Res. Comput. 36(08), 2284–2288 (2019)
3. Wang, M., Zhu, X.: Analysis of association rules based on improved Apriori algorithm. Comput. Sci. Appl. 11(06), 1706–1716 (2021)
4. Wang, R., Zhao, L., Hu, S.: Fast estimation method for power loss of three phase unbalanced distribution network based on data correlation mining. Water Resour. Power 39(05), 202–206 (2021)
5. Wang, P., Meng, Y.: Simulation of mining frequent pattern association rules of multi-segment support data. Comput. Simul. 38(05), 282–286 (2021)
6. Xin, C., Guo, Y., Lu, X.: Association rule mining algorithm using improving treap with interpolation algorithm in large database. Appl. Res. Comput. 38(01), 88–92 (2021)
7. Wang, X.: Research on PDE-based improved method for correlation feature data mining. Modern Electron. Tech. 44(18), 111–113 (2021)
8. Jiang, F., Yuen, K.K.R., Lee, E.W.M.: Analysis of motorcycle accidents using association rule mining-based framework with parameter optimization and GIS technology. J. Safety Res. 75, 292–309 (2020)
9. Liu, S., Hu, R., Wu, J., et al.: Research on data classification and feature fusion method of cancer nuclei image based on deep learning. Int. J. Imaging Syst. Technol. 32(3), 969–981 (2022)
10. Liu, S., Liu, D., Muhammad, K., Ding, W.: Effective template update mechanism in visual tracking with background clutter. Neurocomputing 458, 615–625 (2021)

Detection Method of Large Industrial CT Data Transmission Information Anomaly Based on Association Rules

Xiafu Pan1 and Chun Zheng2(B)

1 Department of Public Safety Technology, Hainan Vocational College of Political Science and Law, Haikou 571100, China
2 Anhui Sanlian University, Hefei 230601, China
[email protected]

Abstract. In the anomaly detection of large industrial CT data transmission information, the network is unstable and vulnerable to noise interference, resulting in unstable output energy of the ray source and low detection accuracy for abnormal data transmission information. To solve this problem, a detection method for large industrial CT data transmission information anomalies based on association rules is designed. Through the association rule mining algorithm, the data transmission information of large-scale industrial CT is analyzed, and the association rules are obtained by introducing an interest threshold; the improved Apriori algorithm is adopted to improve the accuracy of association rule mining. According to the results of association rule mining, the nonlinear wavelet transform threshold denoising algorithm based on an improved threshold function is used to denoise the information data. By calculating the abnormal probability of information entropy in the data flow and sliding window, the anomaly detection of data transmission information is realized. Experimental results show that the proposed method has high detection accuracy and short average anomaly detection time.

Keywords: Association Rules · Large Industrial CT · Data Transmission Information · Wavelet Basis Function · Anomaly Detection

1 Introduction

Large-scale industrial X-CT is a nondestructive testing technology that is regarded as one of the best nondestructive testing means and has wide application prospects in aviation, spaceflight, national defense, machinery, electronics, petroleum and electric power. With the development of science and technology, people have higher requirements for nondestructive testing technology. Especially for some large workpieces such as rails and railway axles, because of their higher technological requirements and larger volume, higher demands are placed on the detection accuracy, the maximum detectable diameter, the detection speed, and so on [1]. Such trends have led to an increase in the number of detector channels and in the acquisition speed for collecting and detecting rays, and the amount of data to be processed and


transmitted to the upper computer has increased exponentially. At the same time, because users require an ever greater distance between the upper computer and the CT when using large industrial CT, it is necessary to find a way to balance the contradiction between speed and distance in industrial CT data transmission, while also considering the complexity and cost of the system [2]. The data transmission system transfers the data to the upper computer through a data bus interface such as PCI, USB or ARM+FPGA, and these transfers are often restricted by the transmission distance. However, there are few practical applications of data transmission through a network interface, which can overcome the limitation of transmission distance, and the network transmission protocol is universal and reliable. In this data transmission mode, there are hidden risks such as monitoring, interference and data forgery; at the same time, the network is unstable and vulnerable to interference. Therefore, the anomaly detection of large-scale industrial CT data transmission information is an important security issue to be solved, and it has great practical significance.

The anomaly detection of large-scale industrial CT data transmission has its particularities compared with network data anomaly detection. At present, there is little research on anomaly detection for large-scale industrial CT data transmission, and it is of practical value to study a comprehensive anomaly detection method. In the current research, foreign researchers have more experience in the research and application of this problem and have given some relatively comprehensive anomaly detection schemes. Reference [3] proposed an anomaly detection method based on SVM. Firstly, the essential concepts of the SVM classifier and the intrusion detection system are introduced. Then the ADS methods are classified, and various machine learning and artificial intelligence technologies are discussed; these technologies are combined with the SVM classifier to detect anomalies. In addition, the main capabilities and the possible limitations or advantages of the ADS methods are defined, and the studied ADS schemes are compared to illustrate their various technical details. Reference [4] proposed a multi-scale attention memory autoencoder network for anomaly detection, the MAMA network. First, in order to overcome a series of problems caused by the limited fixed receptive field of the convolution operator, a multi-scale global spatial attention block is proposed, which can be directly inserted into any network as the sampling, up-sampling and down-sampling functions. Because of its efficient feature representation capability, the network can obtain competitive results with only a few such blocks. Secondly, due to the lack of constraints in the training and reasoning process, the traditional autoencoder can only learn a fuzzy model, which may also reconstruct anomalies well. To alleviate this challenge, a hash-addressing memory module is designed, which helps distinguish abnormal situations, thus generating higher classification and reconstruction errors. In addition, the mean square error (MSE) is coupled with the Wasserstein loss to improve the distribution of the encoded data. Experiments on various data sets, including two different COVID-19 data sets and one brain MRI (RIDER) data set, prove the robustness and good generalization of the proposed MAMA network.
With the deepening of the understanding of large-scale industrial CT, domestic researchers have also carried out related research, but more research focuses on data


transmission method optimization and scheduling. However, with the development of large-scale industrial CT, anomaly detection of data transmission information has become crucial, much like a firewall is to a computer. Therefore, a number of well-known industrial enterprises have established research groups engaged in this area and have introduced some anomaly detection programs. Referring to the present research contents, this paper designs an anomaly detection method based on association rules for large-scale industrial CT data transmission. The association rule mining algorithm is used to analyze the data transmission information of large-scale industrial CT. According to the association rule mining results, the nonlinear wavelet transform threshold denoising algorithm based on the improved threshold function is used to denoise the mined data information. The anomaly detection of data transmission information is realized by calculating the anomaly probability of the data flow. The experimental results show that the proposed method can achieve high-precision detection.

2 Abnormality Detection of Large Industrial CT Data Transmission Information

2.1 Correlation Analysis

The association rule mining algorithm is used to implement the association analysis of large industrial CT data transmission information. The Apriori algorithm is a classical association rule mining algorithm that can mine meaningful association rules from massive data according to support and confidence, but its I/O load is high and its time efficiency is low. In fact, the Apriori algorithm still faces the problem of whether the mined association rules are really useful to users. Aiming at the problem that the Apriori algorithm may generate inappropriate strong association rules, this paper improves the algorithm by introducing an interest threshold to mine valuable strong association rules. The degree of interest describes the degree of correlation between itemset A and itemset B in the rule A → B. Because users in different roles pay attention to different association rules and the selected datasets have different attributes, the mathematical descriptions of interestingness take many forms. At present, correlation-based interestingness is widely used, so this research also uses correlation-based interestingness to improve the Apriori algorithm. Let Q(A) represent the probability of occurrence of event A in a transaction, Q(B) the probability of occurrence of event B in a transaction, and Q(AB) the probability of simultaneous occurrence of events A and B in a transaction. If the following holds:

$$Q(A B)=Q(A) Q(B) \tag{1}$$

then A and B are independent of each other. If instead:

$$Q(A B) \neq Q(A) Q(B) \tag{2}$$

then A and B are not independent of each other, that is, events A and B are correlated.
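A small sketch of the correlation check behind formulas (1) and (2): estimate Q(A), Q(B) and Q(AB) from a transaction database and test independence. The transactions and the tolerance are illustrative.

```python
# Sketch of the independence test of formulas (1)-(2) on a toy transaction database.

def probability(transactions, itemset):
    """Empirical probability that a transaction contains every item of the itemset."""
    itemset = set(itemset)
    return sum(itemset <= set(t) for t in transactions) / len(transactions)

def are_independent(transactions, a, b, tol=1e-9):
    """True if Q(AB) equals Q(A)·Q(B) within tolerance (formula 1); otherwise correlated."""
    q_ab = probability(transactions, set(a) | set(b))
    return abs(q_ab - probability(transactions, a) * probability(transactions, b)) <= tol

if __name__ == "__main__":
    db = [{"x", "y"}, {"x"}, {"y"}, {"x", "y", "z"}, {"z"}]
    print(are_independent(db, {"x"}, {"y"}))   # False here: Q(AB)=0.4 vs Q(A)Q(B)=0.36
```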


The interest threshold is generally 1 by default. For the mined association rule A → B, the interest degree of itemset A and itemset B is expressed as:

$$Q U(A \rightarrow B)=\frac{Q(A B)}{Q(A) Q(B)} \tag{3}$$

When QU > 1, it indicates that there is a positive correlation between itemset A and itemset B, and when A occurs, the probability of B will increase. When QU < 1, it indicates that there is a negative correlation between itemset A and itemset B, and when A occurs, the probability of B will be reduced. When QU = 1, that is, Q(AB) = Q(A)Q(B), it means that A and B are independent of each other and have no correlation [5]. The introduction of interest can filter out the association rules with no correlation or with negative correlation and further reduce the number of candidate association rules (candidate association rules here refer to the association rules that have just been generated from frequent itemsets and have not yet been compared with the minimum confidence), which ensures that the association rules passing the confidence threshold are practical.

The improved Apriori algorithm (hereinafter referred to as I-Apriori, i.e., Improved Apriori) is designed to improve the accuracy of extracting association rules. The I-Apriori algorithm is still divided into two steps: the first step finds the frequent itemsets, and the second step generates strong association rules from the frequent itemsets. Let Dk be the set of candidate k-itemsets, Gk the set of frequent k-itemsets, Smin the minimum support threshold, Cmin the minimum confidence threshold, Imin the interest threshold, and Rt the list of elements of size t that can appear on the right side of a rule. The specific operation steps of the algorithm are as follows.

(1) Generate frequent itemsets:
1. Find the frequent itemset E;
2. Generate candidates and prune;
3. Scan B to count the candidates;
4. Return the itemsets Gk in the candidate set whose support is not less than the minimum support;
5. Connection step;
6. If a subset d already exists in the (k−1)-itemsets, prune it;
7. Pruning step: delete infrequent candidates;
8. Pruning step.

(2) Generate strong association rules:
1. For each itemset Gi in the set Gk, rules are generated only if Gi is not a 1-itemset, since 1-itemsets cannot generate strong association rules;
2. Only association rules whose interest degree exceeds the interest threshold are retained;
3. If the confidence c of a rule satisfies

$$c>C_{\min } \tag{4}$$

the association rule is output.




The I-Apriori algorithm is parallelized based on Spark. The specific scheme is as follows: (1) Configure Spark. The original transaction data is stored on the distributed file system HDFS and transformed into a compressed matrix. The Spark driver reads the configuration file to generate the SparkConf object, then creates the SparkContext object to connect to the Spark cluster, scans the compression matrix on the HDFS using the textFile operator, and creates the RDD from the compression matrix data. As mentioned earlier, the Spark parallel framework compute process actually transforms the data set into a pending RDD, then performs a series of Transformation operations against the pending RDD to get a new RDD, and finally invokes the Action operation to get the result value. Because the elastic distributed data set RDD is the biggest characteristic of Spark big data framework, it is also the important reason for its high efficiency. Therefore, when computing tasks, the number of RDDs must match the computing resources allocated to the program by the Spark cluster, otherwise the Spark framework will be less computationally efficient and program concurrency will be reduced [6]. (2) Parallelization process of the I-Apriori algorithm. The implementation of I-Apriori algorithm on Spark is divided into two steps. The first step is to utilize the advantage of Spark platform with multi-nodes. In the second step, the candidate association rules are generated from the frequent itemsets. Then the association rules are filtered by the interest threshold and the confidence threshold. In the first step, the parallelization of the algorithm is realized by encapsulating the data into RDD and performing RDD operations. The key of the parallelization of the Apriori algorithm is the iterative call and action, which are solved by using the results of the previous iteration. For parallelization, each worker node performs the calculations required by the Apriori algorithm on the data in its RDD partition in a multi-threaded manner. 2.2 Denoising of Association Rules For the mining association rules, the information data is transmitted and denoised. The denoising method is a nonlinear wavelet transform threshold denoising algorithm based on improved threshold function. The selection of wavelet basis function is as follows: the denoising performance of different wavelet basis functions is tested for Doppler standard test signals with signal to noise ratio of 5dB and 15dB with different noise pollution. Try to evaluate the de-noising performance of various wavelet basis functions from two aspects: different vanishing moments (filter length) of the same wavelet basis function and different wavelet basis functions of the same vanishing moment (filter length). By analyzing the test data, we can see that under the 5dB low SNR noise pollution, the root mean square error (RMSE) can better evaluate the de-noising performance of the wavelet basis function. Under the 15dB high SNR noise pollution, Signal to noise ratio (SNR) can better evaluate the denoising performance of wavelet basis function. Therefore, in order to better test the denoising performance of different wavelet basis functions, the denoising performance of wavelet basis functions is analyzed according to different threshold functions under low and high signal-to-noise ratios and different

2.2 Denoising of Association Rules

The information data carrying the mined association rules is denoised during transmission. The denoising method is a nonlinear wavelet transform threshold denoising algorithm based on an improved threshold function.

The wavelet basis function is selected as follows. The denoising performance of different wavelet basis functions is tested on Doppler standard test signals contaminated with noise at signal-to-noise ratios of 5 dB and 15 dB. The denoising performance is evaluated from two aspects: different vanishing moments (filter lengths) of the same wavelet basis function, and different wavelet basis functions with the same vanishing moment (filter length). Analysis of the test data shows that under 5 dB low-SNR noise pollution, the root mean square error (RMSE) better evaluates the denoising performance of the wavelet basis function, whereas under 15 dB high-SNR noise pollution, the signal-to-noise ratio (SNR) better evaluates it. Therefore, the denoising performance of the wavelet basis functions is analyzed under low and high signal-to-noise ratios with different threshold functions, taking into account the different characteristics of these two quality evaluation indicators. In the case of low-SNR noise pollution, the root mean square error (RMSE) is used as the quality evaluation index to analyze the results of the soft threshold method; in the case of high-SNR noise pollution, the signal-to-noise ratio (SNR) is used as the quality evaluation index to analyze the results of the hard threshold method [7].

Comparing the test results, under low-SNR noise pollution: (1) from the perspective of a single vanishing moment, the Sym7, Coif4, Sym9, Sym13 and Sym14 wavelets have good denoising performance, with Sym7 the best and Bior3.1 the worst; (2) in terms of overall denoising effect, the Sym N wavelet family suppresses noise best, followed by the Coif N family and then the Db N family, while the Bior Nr.Nd family performs worst overall. Under high-SNR noise pollution: (1) from the perspective of a single vanishing moment, the Bior5.5, Sym15 and Sym11 wavelets have good denoising performance, with Bior5.5 the best and Bior3.1 the worst; (2) in terms of overall denoising effect, the ranking is the same as above: Sym N, then Coif N, then Db N, with Bior Nr.Nd worst. According to these results, the Sym N wavelet family is selected as the wavelet basis function.

To overcome the shortcomings of the soft and hard thresholding methods, the threshold function of the nonlinear wavelet transform thresholding method is improved by a weighted average method. The basic idea is to combine the soft and hard threshold functions through a weighted average, introducing a weighting factor η to construct a new threshold function:

\dot{R}_{f,g} = \begin{cases} (1-\eta)\,R_{f,g} + \eta\,\mathrm{sign}(R_{f,g})\,(|R_{f,g}| - p), & |R_{f,g}| \ge p \\ 0, & |R_{f,g}| < p \end{cases}    (5)

In formula (5), R_{f,g} represents the wavelet coefficient and p is the weighted average threshold. When

η = 0    (6)

the method reduces to the hard threshold method; when

η = 1    (7)

it reduces to the soft threshold method. This construction overcomes the shortcomings of the soft and hard threshold functions to some extent and makes the reconstructed signal approximate the real signal better.
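The improved threshold function of formula (5) can be written directly in code. The sketch below is a minimal NumPy version, with the weighting factor η and the threshold p passed in as parameters; their values are assumptions, not prescribed here.

```python
import numpy as np

def improved_threshold(coeffs, p, eta=0.5):
    """Weighted soft/hard threshold function of Eq. (5).

    coeffs : array of wavelet coefficients R_{f,g}
    p      : threshold
    eta    : weighting factor; eta=0 -> hard threshold, eta=1 -> soft threshold
    """
    coeffs = np.asarray(coeffs, dtype=float)
    shrunk = (1 - eta) * coeffs + eta * np.sign(coeffs) * (np.abs(coeffs) - p)
    return np.where(np.abs(coeffs) >= p, shrunk, 0.0)
```

Applied level by level to the detail coefficients of a wavelet decomposition, this interpolates between keeping a large coefficient unchanged (hard thresholding) and shrinking it by p (soft thresholding).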

In practice, there are generally two ways to determine the weighting factor η. One is to fix the value of η and obtain its optimal value through comparison and analysis, usually η = 0.5; the other is to compute η from the wavelet coefficient R_{f,g} and the threshold p according to

\eta = \dfrac{p}{|R_{f,g}| \cdot \exp\dfrac{|R_{f,g}| - p}{|R_{f,g}| + p}}    (8)

The optimal decomposition and reconstruction scale is determined as follows. Noise with a signal-to-noise ratio of 10 dB is added to the Doppler standard test signal, the nonlinear wavelet transform threshold method is used to study the wavelet denoising quality evaluation, the specific characteristics of each traditional quality evaluation index and of two composite quality evaluation indexes are analyzed, and a new composite wavelet denoising evaluation index is then built to guide the determination of the optimal decomposition and reconstruction scale. Specifically, in view of the one-sidedness of any single quality evaluation indicator, a new fusion of composite evaluation indicators is obtained by comprehensively considering four traditional quality evaluation indicators. The calculation steps are as follows:

1. Determine the parameters required for wavelet decomposition, such as the decomposition level (2–8 layers), the wavelet basis function, the threshold and the threshold function;
2. When the true value is unknown, obtain four evaluation indexes by wavelet decomposition and reconstruction under the various parameter settings, namely the signal-to-noise ratio (SNR), root mean square error (RMSE), correlation coefficient (R) and smoothness (r), and normalize them;
3. Considering the different characteristics of the four traditional single quality evaluation indicators, average the normalized root mean square error (RMSE), signal-to-noise ratio (SNR) and correlation coefficient (R) to obtain the fusion indicator U:

U = \dfrac{F_{RMSE}(N) + F_{SNR}(N) + F_{R}(N)}{3}    (9)

In formula (9), F_RMSE(N) is the mean value of the normalized root mean square error, F_SNR(N) the mean value of the normalized signal-to-noise ratio, and F_R(N) the mean value of the normalized correlation coefficient.

4. Adopt the coefficient-of-variation weighting method: calculate the coefficients of variation of the normalized fusion index U and of the smoothness g, and weight them according to their magnitudes to obtain the weights W1 and W2;
5. Combine the weighted fusion index U and the smoothness g linearly to obtain the composite index H. The smaller the value of H, the better the denoising effect, and the corresponding decomposition scale is the optimal decomposition and reconstruction scale:

H = W_1 \cdot U + W_2 \cdot g    (10)
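As a worked illustration of steps 2–5, the following sketch normalizes the four indicators across candidate decomposition levels, fuses three of them into U, derives the weights from coefficients of variation, and picks the level with the smallest H. The normalization orientation and weighting details are reasonable assumptions rather than specifications from the text.

```python
import numpy as np

def best_decomposition_level(snr, rmse, corr, smooth):
    """Each argument is an array with one value per candidate level (e.g. 2..8)."""
    def normalize(x):
        x = np.asarray(x, dtype=float)
        return (x - x.min()) / (x.max() - x.min() + 1e-12)

    # Orient all indicators so that smaller normalized values mean better denoising.
    f_snr  = 1.0 - normalize(snr)    # higher SNR is better
    f_rmse = normalize(rmse)         # lower RMSE is better
    f_r    = 1.0 - normalize(corr)   # higher correlation is better
    g      = normalize(smooth)       # smaller smoothness value assumed better

    u = (f_rmse + f_snr + f_r) / 3.0                      # Eq. (9)

    # Coefficient-of-variation weights for U and g.
    cv = np.array([u.std() / (u.mean() + 1e-12),
                   g.std() / (g.mean() + 1e-12)])
    w1, w2 = cv / cv.sum()

    h = w1 * u + w2 * g                                   # Eq. (10)
    return int(np.argmin(h))  # index of the best decomposition level
```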

The threshold estimation method is improved as follows: the 3σ criterion used in electronic measurement is applied to detect and eliminate the abrupt parts of the signal and noise, the noise standard deviation is calculated on each decomposition scale, and the threshold on each decomposition scale is then determined.

2.3 Information Anomaly Detection

An anomaly detection method based on the anomaly probability of the data stream is proposed, which improves the detection rate and reduces the false alarm rate. The algorithm first obtains the data series of the data stream over a period of time, then computes the information entropy series of the sliding window from that data series, then calculates the anomaly probability of the data values and the anomaly probability of the information entropy within the sliding window, and finally judges whether the data stream is abnormal by combining the two anomaly probabilities. The algorithm steps are as follows:

(1) Information entropy sequence calculation. Information entropy reflects the statistical distribution characteristics of the data series in the window. The data sequence in the sliding window is abstracted as a system, the information entropy of the sampled data sequence yi(k) in the sliding window is defined, and the sliding window size is V. Assume that the value range of yi(k) is:

S = \{Y_1, Y_2, \cdots, Y_i\}    (11)

In Eq. (11), Y_i refers to the value taken by the i-th element of yi(k). The sampling probability of each data value is calculated as:

\alpha_j = P(y = Y_j) = \dfrac{count(Y_j)}{\sum_{j \ge 1} count(Y_j)}    (12)

In Eq. (12), P(·) represents the sampling probability and count(Y_j) represents the number of occurrences of Y_j in the data sequence yi(k). The sliding window information entropy is calculated from the sampling probabilities:

\beta_i = \sum_{j \ge 1} \alpha_j \lg \dfrac{1}{\alpha_j}    (13)

As the window slides, the information entropy of the window data is calculated in turn, so the time series of the information entropy can be expressed as:

\beta = \{\beta_1, \beta_2, \cdots, \beta_i\}    (14)

(2) Anomaly probability calculation. The anomaly probability and the joint anomaly probability are defined so that they can be applied to the comprehensive determination of anomalies under multiple conditions, and thus to the comprehensive utilization of the time correlation and the statistical characteristics of the data stream.

The sensor data anomaly is defined as follows: in dataset χ, a data object L is considered an anomaly if the number of data objects within a circle of radius r around it is less than ω. Under each detection condition there is an independent threshold ω on the number of adjacent data objects, so it is difficult to make a comprehensive decision under multiple detection conditions; the anomaly probability and the joint anomaly probability defined above address this.

Suppose there are v data objects in dataset χ. If, for a data object L in χ, the distance between γ of the data objects and L is greater than φ, then the anomaly probability ς0 of data object L is defined as γ/v, where the distance threshold φ is taken as the standard deviation of dataset χ. The sampled data sequence yi(k) in the sliding window is expressed as Y_1, Y_2, ..., Y_i, and the size of the sliding window is set to τ. For each data value Y_i there are two cases:

1. When

d(Y_i, Y_j) \le \varphi    (15)

holds, where d(Y_i, Y_j) represents the distance between Y_i and Y_j, the number m_1 of points not adjacent to Y_i in the window remains unchanged.

2. When

d(Y_i, Y_j) > \varphi    (16)

holds, the number m_1 of points in the window not adjacent to Y_i is increased by 1, with m_1 initialized to 0.

The anomaly probability of the data value Y_i is then:

\varsigma_1 = \dfrac{m_1}{\tau}    (17)

Similarly, the information entropy sequence β in the sliding window is expressed as {β_1, β_2, ..., β_i}, where β_i is the information entropy of the sampled data sequence yi(k). For each entropy value there are two cases:

1. When

d(\beta_i, \beta_j) \le \varphi    (18)

holds, where d(β_i, β_j) is the distance between β_i and β_j, the number m_2 of points not adjacent to β_i in the window remains unchanged.

2. When

d(\beta_i, \beta_j) > \varphi    (19)

holds, the number m_2 of points in the window not adjacent to β_i is increased by 1, with m_2 initialized to 0.

The information entropy anomaly probability of data point yi(k) can then be calculated as:

\varsigma_2 = \dfrac{m_2}{\tau}    (20)

Judging data anomalies only by the time correlation or only by the statistical characteristics of the data stream is not accurate; the two aspects must be combined for a comprehensive analysis. Based on ς1 and ς2, the joint probability can be calculated. When the joint probability satisfies

P_L - \dfrac{\vartheta + \rho}{2} > \varsigma_0    (21)

where P_L refers to the joint probability, ϑ refers to the mathematical expectation of the anomaly probability in the event area of large-scale industrial CT data transmission information, and ρ refers to the mathematical expectation of the anomaly probability of large industrial CT data transmission, the network node corresponding to the data transmission information is judged abnormal.
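A compact sketch of steps (1) and (2) is given below: it computes the window information entropy of Eq. (13), the value and entropy anomaly probabilities of Eqs. (17) and (20), and the joint test of Eq. (21). The window sizes, the choice of φ as the window standard deviation, the use of the newest value as the reference point, and the way ϑ, ρ and ς0 are supplied are assumptions made for illustration.

```python
import math
from collections import Counter

def window_entropy(window):
    """Information entropy of one sliding window, Eq. (13) (base-10 logarithm)."""
    counts = Counter(window)
    n = len(window)
    return sum((c / n) * math.log10(n / c) for c in counts.values())

def std(values):
    mean = sum(values) / len(values)
    return (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5

def anomaly_probability(values, phi):
    """Fraction of points farther than phi from the newest value, Eqs. (15)-(20)."""
    m = sum(1 for v in values if abs(v - values[-1]) > phi)
    return m / len(values)

def is_stream_abnormal(samples, win=1000, step=100,
                       theta=0.1, rho=0.1, sigma0=0.5):
    """Slide a window over the stream and apply the joint test of Eq. (21)."""
    entropies = []
    for start in range(0, len(samples) - win + 1, step):
        window = samples[start:start + win]
        s1 = anomaly_probability(window, std(window))        # value anomaly probability
        entropies.append(window_entropy(window))
        s2 = anomaly_probability(entropies, std(entropies))  # entropy anomaly probability
        p_joint = (s1 + s2) / 2                              # simple joint probability (assumption)
        if p_joint - (theta + rho) / 2 > sigma0:
            return True
    return False
```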

3 Experimental Test

3.1 Experimental Environment

The experimental test is carried out for the designed association-rule-based anomaly detection method for large-scale industrial CT data transmission information. The hardware configuration used in the test is shown in Table 1.

Table 1. Hardware configuration in the test

Serial No. | Equipment name    | Specific configuration
1          | Notebook          | Portable notebook
2          | Hard disk         | SanDisk SSD, TLC cache granules, 128 GB capacity
3          | Operating system  | CentOS release 7.2
4          | Memory            | 16 GB × 2, 3200 MHz DDR4L
5          | CPU               | Intel Core Gen6 i8-1254H

The large industrial CT data transmission system used in the experiment is composed of a digital processing module, a PHY chip, and other components. A data transmission system board is connected to multiple data acquisition boards. The data transmission system collects the data on each board and sends it to the PHY chip through the MII interface; the PHY chip converts the data into differential signals and sends them to the network. Fifty network nodes are arranged in the experimental system for testing.

3.2 Test Method

Validating an anomaly detection method by manually injecting attacks into real node data is a common approach. In the test, on the basis of the experimental system network and data set, nodes in the network are selected as the targets of data attacks, and representative data attack methods are chosen for testing. During the test, discrete abnormal data and fixed abnormal data are injected by tampering with the data reported by the nodes, so as to realize the data attack on the system network, and the designed method is used to detect the abnormal data caused by the attack. Node 5 is selected as the attack target for injecting discrete abnormal data, and node 16 as the attack target for injecting fixed abnormal data. During testing, 50 different false-data injection points are randomly selected from the reported data streams of node 5 and node 16; one injection point is selected for each test to inject false data, and anomaly detection is conducted on that data segment. The detection algorithm sets the sliding window size to 1000 and the window sliding step to 100 sample data values.

3.3 Test Result

The experiment evaluates the anomaly detection accuracy and the average anomaly detection time of the designed method for the two types of injected abnormal data. The higher the detection accuracy and the shorter the average anomaly detection time, the better the detection effect of the method. In the test, the methods proposed in Reference [3] and Reference [4] are used as comparison methods.

3.3.1 Anomaly Detection Accuracy Test

For the two types of injected abnormal data, the test results of the three methods are shown in Fig. 1. According to the test data in Fig. 1, for discrete abnormal data the anomaly detection accuracy of the designed method reaches up to 99.38%, while the methods proposed in Reference [3] and Reference [4] reach up to 96.18% and 96.02%, respectively; the accuracy of the designed method is thus much higher than that of Reference [3] and Reference [4]. For fixed abnormal data, the anomaly detection accuracy of the designed method is also higher than that of the other two methods.

3.3.2 Time Test

For the two types of injected abnormal data, the average anomaly detection time results of the three methods are shown in Fig. 2. The experimental results in Fig. 2 show that the detection time of all three methods increases with the data volume. The average anomaly detection time of the proposed method stays within 300 ms, while the detection times of the methods in Reference [3] and Reference [4] range from 200 ms to 500 ms. The comparison shows that the average anomaly detection time of the proposed method is significantly lower than that of the other two methods, i.e., the proposed method has the shortest detection time.

Fig. 1. Test results of the anomaly detection accuracy of the three methods (curves: proposed method, Reference [3] method, Reference [4] method; x-axis: data volume/GB; y-axis: accuracy/%). (a) Discrete abnormal data. (b) Fixed abnormal data.

Fig. 2. Test results of the average anomaly detection time of the three methods (curves: proposed method, Reference [3] method, Reference [4] method; x-axis: data volume/GB; y-axis: average time/ms).

4 Conclusion

Large-scale industrial CT is widely used in industry. Based on the requirements of large industrial CT data transmission, an anomaly detection method for large industrial CT data transmission based on association rules is designed in this paper. Experimental results show that the proposed method has high detection accuracy and a short average detection time. Because the detection cost in the detection process is high, the application scope of this technology is limited; future research will focus on reducing the detection cost and expanding the scope of use.

References

1. Pang, G., Shen, C., Cao, L., et al.: Deep learning for anomaly detection: a review. ACM Comput. Surv. 54(2), 1–38 (2021)
2. Cheng, S.L.: High-speed network multi-mode similar data string isolated transmission simulation. Comput. Simul. 38(6), 117–120 (2021)
3. Hosseinzadeh, M., Rahmani, A.M., Vo, B., et al.: Improving security using SVM-based anomaly detection: issues and challenges. Soft. Comput. 25(4), 3195–3223 (2021)
4. Chen, Y., Zhang, H., Wang, Y., et al.: MAMA Net: multi-scale attention memory autoencoder network for anomaly detection. IEEE Trans. Med. Imaging 40(3), 1032–1041 (2021)
5. Natalino, C., Udalcovs, A., Wosinska, L., et al.: Spectrum anomaly detection for optical network monitoring using deep unsupervised learning. IEEE Commun. Lett. 25(5), 1583–1586 (2021)
6. Ata-Ur-Rehman, T.S., Farooq, H., et al.: Anomaly detection with particle filtering for online video surveillance. IEEE Access 9, 19457–19468 (2021)
7. Kotlar, M., Punt, M., Radivojevic, Z., et al.: Novel meta-features for automated machine learning model selection in anomaly detection. IEEE Access 9, 89675–89687 (2021)

Design of Intelligent Integration System for Multi-source Industrial Field Data Based on Machine Learning

Shufeng Zhuo1(B) and Yingjian Kang2

1 The Internet of Things and Artificial Intelligence College, Fujian Polytechnic of Information Technology, Fuzhou 350003, China
[email protected]
2 Beijing Polytechnic, Beijing 100016, China

Abstract. Aiming at the problem that information is scattered and difficult to manage in current integration systems, a multi-source industrial field data intelligent integration system based on machine learning is designed. A multi-source industrial field data synchronization device is designed, and middleware technology is used to integrate the field databases, realizing transparent user access to the field data sources. Machine-learning-based host technology is used to integrate the on-site data, an intelligent retrieval engine for field data is designed, and an integrated environment is provided for users' data processing. Point-to-point circuits for the data integration channel are designed, power lines are selected autonomously, and impulse noise is removed to facilitate visual data integration. Machine learning methods are used to train the weight parameters and build an integrated task scheduling model that minimizes the queues built for processing extraction and operation and maintenance tasks. The data topology is adjusted and, according to the specific needs of multi-source industrial field data intelligent integration, database connection pool technology is used to integrate the field data and check the integrity of the integrated data. The experimental results show that the integration effect of the system is good.

Keywords: Machine Learning · Multi-Source Industry · Site Data · Intelligent Integration

1 Introduction

In recent years, with the rapid development of automatic control technology and network technology, more and more on-site monitoring systems and data acquisition systems have been used in the various links of enterprise production processes. These systems have improved the automation of the production process and have also provided increasingly abundant production and economic data from the site for enterprise management [1]. However, today's on-site monitoring and data acquisition systems divide the real-time data of each link of the site into islands of real-time information, which makes it

extremely difficult for enterprises to centralize and utilize real-time information [2]. The real-time data of each link of the enterprise is part of the whole production process; if it is not centralized, it cannot effectively reflect the overall state of production. In the traditional data integration approach, instrument workers regularly read the meters at each production link and then aggregate the readings: the workload is large, the situation is not reflected in time, data synchronization is poor, and the informatization level of the enterprise production site is low, which greatly hinders the improvement of the enterprise management level. The intelligent integration systems currently available are mainly the integration system based on TRS network data radar and the integration system based on web spider software. These systems are basically similar in architecture, but their data search accuracy is low and they are expensive. On this basis, an intelligent integration system for multi-source industrial field data based on machine learning is designed.

2 The Overall Structure Design of the System

In practical work it is often found that the load suddenly increases, the load current rises, the carrying capacity of the original cable becomes insufficient, and over-current operation increases sharply. To increase capacity, cost can be saved by fully considering the operation of the multiple cables already in place at the multi-source industrial site when laying cables again [3]. However, this approach greatly increases the amount of field data, so the data must be integrated. Based on the field operation system, an intelligent integration method for multi-source industrial field data based on machine learning is proposed to realize the coupling relationships between field systems under the integration system and to ensure the efficient operation of the multi-source industrial field. The overall structure of the system is shown in Fig. 1.

As Fig. 1 shows, the multi-source industrial field data intelligent integration framework is composed of five layers: the software support layer, the integrated bus layer, the data bus layer, the service layer and the application layer. The software support layer contains the underlying communication components for multi-source industrial on-site operation; the integrated bus layer provides a standardized interaction mechanism for the internal modules, using a common object request broker architecture [4]; the data bus layer is responsible for connecting the data platform with the database access interface and the data interface; the service layer provides the platform interfaces required by each application, keeps the service layer relatively independent of the market business, and is not affected by changes in the business process; the application layer provides the application service interfaces required for the operation of the market [5]. The specific functions realized in each layer provide application services for the layer above, and each layer has a clearly fixed calling interface. As an industrial market data information release platform, the multi-source industrial field operation framework integrates the relevant real-time data to determine the coupling relationships and acts as a data-sharing link.

Fig. 1. The overall structure of the system (data bus layer with unified naming of interfaces, integrated bus layer, and support software layer; components include the grid data interface, security service, data service, data flow message, horizontal and vertical transmission interface, alarm service, graphic tools, data exchange, and SCADA).

3 System Hardware Structure Design

The research goal of the machine-learning-based intelligent integration system for multi-source industrial field data is to construct the transparent global data view that users require, supporting the global application of the various databases and flexible data sharing among the field databases while minimizing the impact on the local autonomy of the distributed field databases.

3.1 Multi-source Industrial Field Data Synchronization Device

Middleware technology is used to realize on-site database integration. The middleware is located between the on-site database systems (data layer) and the application programs (application layer). Downward it coordinates each database system, and upward it provides a unified data schema and a common data-access interface for applications accessing the integrated data, while the applications of the individual databases still complete their own tasks [6]. The data integration system is mainly composed of four parts: the concentrator, the adapters, the metadata and management configuration, and the WEB server. The system supports virtual views or view sets, unifies the data communication standards of the on-site database systems, solves the problems of data access, data extraction, data conversion, data integration and data display in the background, and realizes the user's transparent access to the on-site data sources [7]. After the user submits a query, the concentrator translates it into one or more queries on the databases; these sub-queries are sent by the relevant adapters to the background (field) databases for execution, and the concentrator then comprehensively processes the query results that each data source sends back through its adapter.

Through the design of the integration schema, the related data from multiple sources are integrated into one record, which is output and returned to the user through the same interface [8]. From the client-server perspective, the middleware encapsulates the business logic of the system and is built between the database service systems and the applications, forming a three-tier client-server structure: the field database resources constitute the data layer of the system, the middleware system provides business services for data integration and constitutes the business logic layer, and the applications constitute the presentation (transaction application) layer [9]. Realizing on-site database integration means maintaining the relative independence and autonomy of each application database system. After the unified comprehensive database is established, when the data in a distributed application database changes, the corresponding data in the comprehensive database must change accordingly; only the incremental data of the application system that has changed is sent to the comprehensive database for synchronization.

3.2 Field Data Intelligent Retrieval Engine

The retrieval engine adopts machine-learning-based host technology to integrate the field data. When the memory usage of the machine learning host exceeds its rated capacity, the application driver interrupts the data transmission with the memory host, thus providing a more stable data integration environment for users' data processing. The structure of the retrieval engine is shown in Fig. 2.

Fig. 2. Field data intelligent retrieval engine (components: network and other data sources, graphics, query parser, language parser, indexer, index file, documentation, index core tools).

It can be seen from Fig. 2 that the structure includes four parts: data search, data index construction, data storage and the user interface. Data search is mainly responsible for finding the data most relevant to the field data in the database; data index construction is mainly responsible for extracting index items from the searched data and building index tables; data storage is responsible for quickly finding the relevant documents on the network and evaluating them; the user interface is responsible for querying user information [10].

3.3 Integrated Channel Circuit Design

The point-to-point circuit of the multi-source industrial field data integration channel takes a crystal oscillator as its core and uses a balun circuit composed of capacitors and inductors, plus a whip antenna, so that it can transmit and receive wireless data. The node circuit diagram is shown in Fig. 3.

Fig. 3. Schematic diagram of the point-to-point circuit of the data integration channel (CC2530 main control chip, crystal oscillator, antenna, and output converter with W_OUT).

It can be seen from Fig. 3 that the control chip and the LCD module are powered by a 3.3 V DC supply; the power module uses three AA batteries as the power source, and the low-dropout regulator MCP1700-3.3 provides a stable 3.3 V supply. A CC2530 single-chip microcomputer is used as the main control chip, with an enhanced 8051 core, 256 kB of flash memory and rich external peripheral resources. Two crystal oscillators are connected externally, of which the 32 MHz crystal oscillator provides an accurate symbol period for the wireless receiver. The button that controls the circuit switch is a non-self-locking independent button; to eliminate jitter, the program scans the button every 10 ms. After storing the

key data, pressing the reset button clears the variable and starts a new round of transmission [11, 12]. The ST8500 programmable power line communication system-on-chip is used as the core of the modulation and demodulation module to select the power line autonomously and provide a reliable path for information reception. Coupling ports with internal impedance are designed to remove impulse noise and obtain more accurate data integration results, and the point-to-point circuits are designed to facilitate visual data integration.

3.4 Real-Time Database Construction

The real-time data system functions as the knowledge base of the data: it provides efficient data storage, completes information queries and executes real-time transactions, which are the basic units of the real-time database. Unlike ordinary commercial databases, real-time databases have the following characteristics:

(1) Time constraints. The main feature of real-time database systems is that time constraints are imposed on data objects and transactions. The constraints on the data are not only the ordinary consistency requirements of a database but also temporal consistency requirements.

(2) Transaction scheduling. In traditional database systems the goal of transaction scheduling is to improve transaction throughput, whereas real-time database systems require that as many transactions as possible be completed within their deadlines. The scheduling of real-time transactions therefore differs from that of traditional database systems, and most real-time transaction scheduling strategies focus on transaction priority [13, 14].

(3) Real-time data storage management. The real-time database is mainly responsible for storing and managing all real-time data in the system and for providing fast and accurate real-time information to the related functions, so real-time performance is its first priority. During operation it should occupy little space and be resident in memory to ensure fast reads, flexible access, and easy data sharing between functional modules.

When designing the real-time database, the following functional modules should be considered. Real-time database initialization module: creates each data object based on the data required by advanced control, uses a linked list as the storage method, and establishes a name index for each data object to improve access speed and to build the historical database [15]. Basic operation module: provides basic operations on data objects, such as obtaining the other attributes of a data object through its name or ID, or obtaining its ID through its name. Read and write data operation module: provides read and write operations on data objects, writes the field value stored in the data buffer into the field value attribute of the data object in the real-time database, and reads the current value of the data object.

Communication device read/write operation module: manages the communication devices, reads the current working state of a device, and operates the specified device. Window operation module: reads the name of a user window, operates the specified user window, and reads its current state. Alarm operation module: stores alarm information and reads the alarm limits of the data objects. Save operation module: stores the data that needs to be persisted in the SQL Server database.
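The following minimal sketch illustrates how the initialization, basic operation and read/write modules described above could be organized; the class and field names are illustrative assumptions, not the system's actual interfaces.

```python
from dataclasses import dataclass, field
from typing import Any, Dict, Optional

@dataclass
class DataObject:
    obj_id: int
    name: str
    value: Any = None                     # current field value
    alarm_limit: Optional[float] = None
    attributes: Dict[str, Any] = field(default_factory=dict)

class RealTimeDatabase:
    """Memory-resident store with a name index, in the spirit of Sect. 3.4."""

    def __init__(self):
        self._objects: Dict[int, DataObject] = {}   # storage keyed by object ID
        self._name_index: Dict[str, int] = {}       # name -> ID index

    # Initialization module: create data objects and build the name index.
    def create_object(self, obj_id, name, **attributes):
        obj = DataObject(obj_id, name, attributes=attributes)
        self._objects[obj_id] = obj
        self._name_index[name] = obj_id
        return obj

    # Basic operation module: look up objects by name or ID.
    def id_of(self, name):
        return self._name_index[name]

    def get(self, obj_id):
        return self._objects[obj_id]

    # Read/write module: write a buffered field value, read the current value.
    def write_value(self, name, value):
        self._objects[self._name_index[name]].value = value

    def read_value(self, name):
        return self._objects[self._name_index[name]].value
```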

4 Research on Key Technologies of the System

The machine learning network is based on a Bayesian probabilistic generative model, which trains and adjusts the weight parameters between the hidden layer and the visible layer so that the whole network can generate the target data with maximum probability. Stacked multilayer Boltzmann machines constitute the machine learning network. The network divides neurons into visible and hidden neurons; there are associative memory units between the upper and lower neurons, and this connection is undirected and is used to realize the associative memory function.

4.1 Integrated Task Scheduling Based on Machine Learning

The training of the machine learning network consists of two steps: unsupervised pre-training and tuning. The unsupervised pre-training step trains a restricted Boltzmann machine layer by layer, with the output of each layer serving as the input of the layer above; when the upper neurons carry labels, joint training with the labels is required. Tuning is carried out in two stages, a recognition stage and a generation stage. In the recognition stage, the network generates the output of each layer, layer by layer, from the input feature information and uses gradient descent to correct the weight parameters of each layer. In the generation stage, the basic state information consists of the top-level label annotations together with the downward weights, and the upward weights are also modified in this stage. In the feature extraction process, the input signals are represented as vectors and trained, and the highest-level associative memory unit divides the tasks according to the clues provided by the lower levels. Using a feedforward neural network on labeled data, the machine learning network can accurately adjust its classification performance in the last layer of training. Compared with directly using a feedforward neural network, this method is more efficient, because the machine learning network can be trained locally by modifying only the weight parameters, so training is fast and convergence time is short.

Based on this, an integrated task scheduling model is built, as shown in Fig. 4. An auxiliary variable is introduced to produce a loss queue; at the initial moment, this loss queue represents the difference between the current actual income of the system and the target. Adjustable parameters are introduced to control the balance between system revenue and system delay, so as to minimize the queues built for processing extraction and operation and maintenance tasks.
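The greedy layer-wise pre-training described at the start of Sect. 4.1 can be sketched with a single restricted Boltzmann machine trained by one-step contrastive divergence; stacking such layers, each fed with the hidden activations of the one below, follows the standard deep-belief-network recipe and is an assumption rather than a detail given in the text.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_rbm(data, n_hidden, epochs=10, lr=0.05, rng=np.random.default_rng(0)):
    """Train one RBM layer with CD-1; data is a (n_samples, n_visible) 0/1 matrix."""
    n_visible = data.shape[1]
    w = 0.01 * rng.standard_normal((n_visible, n_hidden))
    b_v, b_h = np.zeros(n_visible), np.zeros(n_hidden)

    for _ in range(epochs):
        v0 = data
        p_h0 = sigmoid(v0 @ w + b_h)                      # hidden given visible
        h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
        p_v1 = sigmoid(h0 @ w.T + b_v)                    # reconstruction
        p_h1 = sigmoid(p_v1 @ w + b_h)
        # contrastive-divergence updates for weights and biases
        w += lr * (v0.T @ p_h0 - p_v1.T @ p_h1) / len(data)
        b_v += lr * (v0 - p_v1).mean(axis=0)
        b_h += lr * (p_h0 - p_h1).mean(axis=0)
    return w, b_h

def pretrain_stack(data, layer_sizes):
    """Greedy layer-wise pre-training: each layer's hidden activations feed the next."""
    layers, x = [], data
    for n_hidden in layer_sizes:
        w, b_h = train_rbm(x, n_hidden)
        layers.append((w, b_h))
        x = sigmoid(x @ w + b_h)
    return layers
```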

Fig. 4. Integrated task scheduling model (field data 1 to n enter a task queue and a resource queue managed by the task scheduling server).

The resource clustering algorithm is used to cluster the resources, and scheduling task queues are generated according to the clustering results so that the allocation of the resource queues is accurate and reasonable. For each queue, the appropriate auxiliary resources are selected and the maximum value is chosen; a resource allocation vector is generated from the results obtained in the calculation process, and the task vector, the resource vector and the resource allocation vector are updated according to the generating relationships between the corresponding formulas.

4.2 Data Topology Adjustment

According to the above integrated task scheduling results, a multi-source industrial field data mapping table is formed, and intelligent data integration is completed through offline and online data adjustment.

4.2.1 Multi-source Industrial Field Data Mapping Table

The mapping table is the most important bridge between the offline and online data; it constrains the mapping of the large volume of multi-source industrial field data to a great extent and facilitates its maintenance. Based on the above analysis, the multi-source industrial field data mapping table mainly includes: a data mapping table generated in combination with the timing characteristics, a field device mapping table, an AC line mapping table, and a line mapping table. According to these mapping tables, the offline and online data are verified.

4.2.2 Offline and Online Data Verification

The node/branch power flow formula data verification tool is used to filter damaged online data. The process is as follows. Assuming that p and o belong to an equivalence relationship in the multi-source industrial site database, the p-positive region of o is denoted pos_p(o) and expressed as:

pos_p(o) = \bigcup_{o} pX_o    (1)

Based on formula (1), the multi-source industrial site data are divided into a set in o, and the importance value T(Y) of the i-th conditional attribute is calculated as:

T(Y) = \dfrac{pos_{X-1}(Y)}{pos_X(Y)}    (2)

where pos_X(Y) represents the attribute importance of the category-X multi-source industrial site data, and pos_{X-1}(Y) that of the category-(X-1) data. The larger the value of T(Y), the greater the attribute weight of the multi-source industrial site data. The weight value of the multi-source industrial site data can then be calculated by the weight formula:

O_i = \dfrac{t_X(Y) - t_{X-i}(Y)}{\sum_{i=1} \bigl(t_X(Y) - t_{X-1}(Y)\bigr)}    (3)

Among them, t_X(Y) represents the attribute weight of the category-X multi-source industrial site data, t_{X-1}(Y) that of the category-(X-1) data, and t_{X-i}(Y) that of the category-i data. The clustering distance of the multi-source industrial field data can be calculated according to formula (4):

d_{u,v} = \sqrt{\sum_{t=1} O_u (y_u - y_v)^2}    (4)

where y_u represents the dimension value of multi-source industrial site data u, and y_v that of data v. The clustered data are processed and the time series characteristics are used for comprehensive analysis, so as to build the anomaly judgment matrix of the multi-source industrial field data:

Z(x) = \begin{bmatrix} z_{11} & \cdots & z_{1n} \\ \vdots & \ddots & \vdots \\ z_{m1} & \cdots & z_{mn} \end{bmatrix}    (5)

If the above matrix is satisfied, the multi-source industrial site data are abnormal and the obtained data need to be processed. The formula for filtering the multi-source industrial site data is:

Z(t') = \dfrac{Z(x)}{\bigl[t_0 - 3\varepsilon,\; t_0 + 3\varepsilon\bigr]}    (6)

Among them, Z(t') represents the normal data set obtained after the multi-source industrial field data verification, ε represents the filter, and t0 represents the local global time. From the perspective of the whole network, there are few voltage phase angle measurements. According to the timing characteristics, the connection nodes with large parameter errors are estimated, and the parameters with small errors are input into the multi-source industrial field database, which yields accurate data and improves the quality of the offline and online data.

4.2.3 Data Topology Adjustment

The offline data topology can only be changed manually; the online data topology is changed as follows: by establishing a small branch to combine multiple computing nodes and setting the components at both ends to reflect the disconnection of the equipment, the online data topology can be changed.

4.3 Integration Process Design

According to the specific requirements of the intelligent integration of multi-source industrial field data, the relevant information is collected and processed. Based on the time series characteristics, the data integration library is constructed with the maximum membership, and the gravity center method is used for comprehensive analysis. The time series characteristics are used to build the analysis matrix:

X = \begin{bmatrix} x_{11} & \cdots & x_{1n} \\ \vdots & x_{ij} & \vdots \\ x_{n1} & \cdots & x_{nn} \end{bmatrix}    (7)

Formula (7) converts the quantitative result into a value between 0 and 1 for users to query, and data retrieval is the premise of integration. In-depth use of the advanced retrieval functions of the database increases its storage space. The binary data conversion method is used to match the extracted data. The specific matching process is as follows:

(1) Determine the convex or concave growth relationship between adjacent points through the binary sequence between them, and use the analytic hierarchy process to calculate the subjective weight vector ω:

\omega = [\omega_1, \omega_2, \cdots, \omega_n]    (8)

According to the characteristics of the offline and online data flows, the anti-entropy weight method is used to obtain the difference matrix:

G = \begin{bmatrix} x_{11}\omega_{11} & \cdots & x_{1n}\omega_{1n} \\ \vdots & \ddots & \vdots \\ x_{n1}\omega_{n1} & \cdots & x_{nn}\omega_{nn} \end{bmatrix}    (9)

According to the analysis matrix, the difference values between different data are calculated, the weight vector is set, and the data dictionary is designed reasonably.

(2) The trend proportional data reduction method is used to reduce the candidate sequences and patterns between adjacent points with a determined relationship to the same interval. Data reduction is a basic operation that converts attribute sets into target strings through insert, delete and replace operations. When two attribute sets are converted to target-string form, the distance between the two converted strings, expressed as an edit distance, is:

d_{x_i y_j}(i, j) = \min \begin{cases} d_{y_j}(i, j-1) + 1 \\ d_{x_i}(i-1, j) + 1 \\ d_{x_i y_j}(i-1, j-1) + 1 \end{cases}    (10)

In formula (10), x_i and y_j represent the string conversion results of attribute sets c_i and c_j, respectively; d_{y_j}(i, j-1) represents deleting a character from string c_j, and d_{x_i}(i-1, j) represents deleting a character from string c_i. The reduced data are regarded as independently distributed sample data for machine learning, and the attribute-set relationship between any two data items is analyzed.

(3) Calculate the sequence similarity within the same interval, so as to distinguish convex or concave growth with different change amplitudes. According to the amplitude, the set of sub-sequences, i.e., the pattern matching result, is obtained. According to the machine training results, the attribute sets of x_i and y_j are c_i and c_j. Let α be the path of c_i, β the path of c_j, h_1 a vector of data x_i derived from α, and h_2 a vector of data y_j derived from β. The similarity of the two vectors can be expressed as:

Sim_x(h_1, h_2) = \omega \times \dfrac{2 \times x(G(\alpha, \beta))}{x(\alpha) + x(\beta)}    (11)

In formula (11), x(G(α, β)) represents the length of the longest common sub-path of paths α and β; ω is the weight coefficient of the similarity; i and j index the i-th and j-th vectors, respectively. On this basis, the similarity between two documents is calculated:

Sim(x_i, y_j) = \dfrac{\sum_{i=1}^{b_1} \sum_{j=1}^{b_2} Sim_x(h_1, h_2)}{b_1 + b_2}    (12)
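The matching step of formulas (10)-(12) can be illustrated as follows. The sketch uses the usual Levenshtein recurrence (adding the zero-cost case for matching characters, which Eq. (10) leaves implicit) and a longest-common-subsequence length in place of the common sub-path length x(G(α, β)); both choices are assumptions made for illustration.

```python
def edit_distance(a, b):
    """Levenshtein distance between two strings, in the spirit of Eq. (10)."""
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i
    for j in range(len(b) + 1):
        d[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i][j - 1] + 1,        # delete from b
                          d[i - 1][j] + 1,        # delete from a
                          d[i - 1][j - 1] + cost) # match / replace
    return d[-1][-1]

def lcs_length(a, b):
    """Longest common subsequence length, used here as x(G(alpha, beta))."""
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            d[i][j] = d[i - 1][j - 1] + 1 if a[i - 1] == b[j - 1] \
                      else max(d[i - 1][j], d[i][j - 1])
    return d[-1][-1]

def vector_similarity(p1, p2, omega=1.0):
    """Eq. (11): similarity of two paths/strings p1 and p2."""
    return omega * 2.0 * lcs_length(p1, p2) / (len(p1) + len(p2))

def document_similarity(paths_x, paths_y, omega=1.0):
    """Eq. (12): aggregate similarity over all path pairs of two documents."""
    total = sum(vector_similarity(p, q, omega) for p in paths_x for q in paths_y)
    return total / (len(paths_x) + len(paths_y))
```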

In formula (12), b1 and b2 represent the numbers of x_i and y_j data items, respectively. The vectors fuse the underlying structure and content features while also reflecting the similar parts of the data, and the calculation result reflects the sequence similarity within the same interval. According to the similarity calculation results, database connection pool technology is used to integrate the field data. The connection pool is a storage pool of connection objects: the amount of information is controlled through the internal connection mechanism, and the query interface in the system structure provides the connection channel. API functions are used to connect to the on-site databases, which effectively converts the query statements for the specific database, obtains the various data of the on-site database, and generates mapping files. The cycle of each mapping file is:

t = t_1 + t_2 + t_3 + t_4 + t_5    (13)

In formula (13), t1 represents the mapping end time, t2 the network delay, t3 the timing logic setup time, t4 the skew of the mapping signal, and t5 the network delay. The minimum cycle for generating the mapping file is thereby obtained, which improves the file mapping efficiency. To fully support the personalized service mode and make the system convenient to use, the relevant specialities are combined organically and placed in different folders to form the integrated files.

4.4 Integrated Data Integrity Detection Based on Machine Learning

The integrated data are used as the machine learning input data, and the integration behavior is defined by association rules; by traversing each rule, the data packet payload is detected. Assume there are multiple collections of distinct items and a given transaction database, where each transaction T is a collection of items with a unique identifier. If an itemset X ⊂ T, then transaction T contains itemset X, and the implication X ⇒ Y is the general representation of an association rule. Association rules involve two measures, support and confidence. For a given total transaction database, the proportion of transactions that support the rule is the support of the association rule X ⇒ Y: support expresses the frequency of the rule and describes the proportion of the antecedent (and consequent) in the entire dataset. minsup denotes the minimum support, which represents the minimum statistical significance of an itemset, and an itemset whose support is greater than the minimum support is called a frequent itemset. For the known total transaction data set, the proportion of the transactions supporting itemset X that also support itemset Y is the confidence of the association rule X ⇒ Y: confidence represents the strength of the rule and describes the likelihood that the rule holds when its precondition is met. minconf denotes the minimum confidence, and an association rule whose confidence is greater than the minimum confidence is called a strong rule. In the transaction database, the association rule mining problem is to specify the minimum support value minsup and

the minimum confidence value minconf, and to find the rules whose support and confidence are both higher than these two predetermined values. Once such a rule is found, an alarm message is reported. The detection process is divided into two parts: multi-pattern matching and verification. After the detection rules are loaded into the rule set, the rule set is pre-combined, rules with the same characteristics are put into the same signature group, and integrated data integrity detection is realized through rule matching. First, the multi-pattern matching stage is entered: the data are input into the rule matching engine and recognized according to the obtained data characteristics. A message first enters the first stage of detection, the multi-pattern matching stage, in which a rule matching the message characteristics is selected to obtain the messages to be detected and the pre-verification data set. Then the verification stage is entered, in which the messages are traversed one by one; when the characteristics of a message satisfy all the constraints of a rule, alarm information is generated. The detailed detection process is as follows. Step 1: according to the time complexity of multi-pattern matching, candidate rules are extracted from the multiple feature sets. Step 2: for the multi-pattern matching rules obtained in Step 1, it is assumed that the number of multi-pattern matching rules is n. Step 3: traverse the rules one by one; if a signature is marked as a "hit", the n-1 attempts before the hit are invalid, so the position of the last signature in the signature sequence is determined and is valid. The hit result is the integrated behavior data.

As part of the integrated data is related to multi-source industrial site construction, a three-level detection mode is required to ensure that no data are missed during integrated detection. The first level of detection monitors the operating status of all multi-source industrial sites and searches for abnormal operating status information. The second level of detection responds to abnormal data: after the first level, all data are integrated, an anomaly database is built, and different attack means are determined through integration and classification. The third level of detection compares and analyzes the first two levels of detection results against the relevant abnormal operation data in the historical database and obtains all the analysis results. After that, machine learning and big data technology are used to query whether the behavior status remains within the normal indicators, and the integrity of the abnormal operation behavior data is then analyzed quantitatively and regularly through preprocessing to obtain accurate analysis results.
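The two-stage detection loop described above (multi-pattern matching followed by constraint verification) could be organized roughly as below; the rule structure, the signature grouping and the alarm format are illustrative assumptions rather than the system's actual data structures.

```python
from collections import defaultdict

class Rule:
    def __init__(self, signature, constraints):
        self.signature = signature        # pattern that triggers the match stage
        self.constraints = constraints    # list of predicates over a message

def group_by_signature(rules):
    """Pre-combine the rule set: rules sharing a signature go into one group."""
    groups = defaultdict(list)
    for rule in rules:
        groups[rule.signature].append(rule)
    return groups

def detect(messages, rules):
    """Stage 1: multi-pattern match on signatures; Stage 2: verify all constraints."""
    groups = group_by_signature(rules)
    alarms = []
    for msg in messages:
        # Stage 1: keep only rule groups whose signature occurs in the message.
        candidates = [r for sig, rs in groups.items() if sig in msg for r in rs]
        # Stage 2: a rule fires only if every constraint holds for the message.
        for rule in candidates:
            if all(check(msg) for check in rule.constraints):
                alarms.append((msg, rule.signature))
    return alarms
```

For example, a rule could carry the hypothetical signature "GET /admin" with a constraint `lambda m: len(m) > 512`; any message containing the signature and satisfying the constraint would produce an alarm.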

5 System Test

5.1 Experimental Platform

To test the performance of the machine-learning-based multi-source industrial field data intelligent integration system, the data integration integrity of the system is compared with that of the integration system based on TRS network data radar and the integration system based on web spiders. The experimental environment of the three systems is identical: the Windows host is an i7-12700F; the processor is a 13700K; the

data register is a 74HC595D; the database host is a Redis cloud database; the processing chip is an LQFP100; the microcontroller is an STC89C52RC; the timing regulator is a PZ-51 Tracker; and the simulation software is Matlab R2019a. To ensure the authenticity and reliability of the experimental results, the results are obtained by increasing the number of experiments and using cross-validation. The integrated system platform is used to monitor violations of the system security principles at the multi-source industrial sites, and its structure is shown in Fig. 5.

Fig. 5. Experimental platform (data source, analysis engine with multi-pattern matching, database, anomaly detector with exception pattern library, and response module).

It can be seen from Fig. 5 that the data source provides the monitoring data for the platform, and the analysis engine is responsible for analyzing and reviewing the data. Once abnormal integration results are found, the data are immediately transferred to the response module, which responds randomly and generates feedback information.

5.2 Experimental Indicators

The accuracy rate and the recall rate of the data integration results are used as the evaluation criteria:

P = \dfrac{Q_A}{Q_A + Q_B}    (14)

R = \dfrac{Q_A}{Q_A + Q_C}    (15)

In formulas (14) and (15), QA represents the documents of the same kind whose semantic similarity is greater than zero and which are judged to have the same attribute; QB refers to similar documents whose semantic similarity is greater than zero but which are judged to have different attributes; QC refers to documents of the same category whose semantic similarity equals zero yet which are judged to have the same attribute. The larger the results of formulas (14) and (15), the more accurate the detection results.
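For reference, the two indicators reduce to the usual precision/recall computation; the sketch below simply evaluates Eqs. (14) and (15) from the three counts (the variable names and example numbers are illustrative).

```python
def integration_metrics(q_a, q_b, q_c):
    """Accuracy P = QA/(QA+QB) and recall R = QA/(QA+QC), Eqs. (14)-(15)."""
    precision = q_a / (q_a + q_b)
    recall = q_a / (q_a + q_c)
    return precision, recall

# Example: 95 correctly matched documents, 3 false matches, 5 missed matches.
p, r = integration_metrics(95, 3, 5)   # -> (0.969..., 0.95)
```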


Fig. 6. Comparative analysis of the data integration effects of different systems (x-axis: training times/time; y-axis: integration times/time). (a) Integration system based on TRS network data radar. (b) Integration system based on web spiders. (c) Intelligent integration system based on machine learning.

5.3 Analysis of Data Integration Effect

For the analysis of the data integration effect, the integration system based on TRS network data radar, the integration system based on web spiders, and the multi-source industrial field data intelligent integration system based on machine learning are compared. The comparison results are shown in Fig. 6. It can be seen from Fig. 6 that with the integration systems based on TRS network data radar and on web spiders the data remain in a decentralized structure, whereas with the integration system based on machine learning the data integration effect is good.

5.4 Data Integration Integrity Check

First, three external interference conditions are set: buffer overflow, running-process infection, and Trojan horse. The root cause of buffer overflow is that C++ is insecure and places no limit on the references between arrays and pointers; buffer overflow vulnerabilities are very common and exist in C language development tools. Running-process infection means that one process can read and write to another process: the attack directly inserts the sbrk() function into the code, or finds an appropriate space in the process address space, then writes code there and rewrites the original data, so as to execute malicious code. A Trojan horse is a program residing in a computer that can steal passwords and copy or remove files. The experimental sets are then selected as r1 and r2, and the two experimental sets are trained to obtain the ROC curves shown in Fig. 7. It can be seen from Fig. 7 that, under the interference conditions, the accuracy of the data integration results of the machine-learning-based integration system is always greater than 0.95 when the recall rate is greater than 0.40, while the accuracy of the other two methods is lower. This shows that the machine-learning-based integration system can obtain fully integrated data.

6 Conclusion

In view of the large amount of noise and the low data integrity in the data obtained by current integration systems, which leads to serious data loss during integration, an intelligent integration system for multi-source industrial field data based on machine learning is designed to solve the problems of the current systems. Autonomously selected power lines are used to remove impulse noise and facilitate visual data integration, and multiple pattern matching rules are combined to achieve integrated data integrity detection and a high degree of data integration. The system can effectively solve the problem of data loss in the integration process, but one issue still requires further study: for frequent distributed combined network attacks, the machine learning core needs to be enhanced so that the method can adapt to the changing multi-source industrial field environment.


[Figure: precision-recall (ROC) curves comparing the TRS network, web spider, and machine learning systems]
Fig. 7. ROC curves: (a) buffer overflow; (b) running-process infection; (c) Trojan horse



Link Transmission Stability Detection Based on Deep Learning in Opportunistic Networks

Jun Ren1(B), Ruidong Wang1, Huichen Jia1, Yingchen Li1, and Pei Pei2

1 North Automatic Control Technology Institute, Taiyuan 030000, China
[email protected]
2 Department of Foreign Languages, Changchun University of Finance and Economics, Changchun 130200, China

Abstract. To solve the low-throughput and high-delay problems of traditional link transmission stability detection methods, a deep-learning-based detection method for link transmission stability in opportunistic networks is proposed. A network link blocking model is first established. Considering the impact of path delay, the network link information is analyzed to adjust the hierarchical structure, the link data is divided into data blocks, and the link model is constructed. From the link transmission data, the ground-point coordinates of the network links in the area are obtained, and the barcode state sent over the network is obtained under the constraint of the link carrying capacity. The number of packets sent by the network source during congestion is then calculated, hierarchical network features are extracted with a deep learning algorithm, the number of network layers is selected, hidden-layer nodes are set, and the network is trained according to the learning rate, yielding a classification prediction model that completes the link transmission stability detection. Experimental results show that the proposed method effectively improves the throughput of opportunistic network links and reduces the communication delay of opportunistic networks.

Keywords: Opportunistic Network · Deep Learning · Link Transmission · Stability Detection

1 Introduction
With the rapid development of network technology, opportunistic network links are emerging as a new network application and can be widely used in various fields. However, packet loss occurs easily on an opportunistic network link, and all packet loss is attributed to network congestion; therefore, when packet loss is detected, the opportunistic network link starts the corresponding congestion-avoidance mechanism [1]. In a wireless network, however, the changing location of a mobile computer also produces a high packet loss rate, and enabling the congestion-control mechanism in that case degrades the opportunistic network link, which has attracted the attention of many experts and scholars. The information age is an era in which networks are integrated with one another, and such integrated networks greatly promote all aspects of society, so improving network performance according to the characteristics of opportunistic networks is meaningful. Traditional transmission-control network links have problems such as low average throughput and large round-trip delay. The opportunistic network environment has improved significantly: traditional switching devices have gradually been replaced by modern program-controlled digital switching devices, and the opportunistic network has entered an all-fiber digital transmission mode. Opportunistic networks have good self-organization and self-configuration properties in practice, so they are widely used in various regions [2].
Reference [3] proposes a link transmission stability detection method based on a node link evaluation model. The method defines three indicators, namely data transmission order, relay link control, and transmission energy controllability; by matching node transmission power, a mobile Internet of Things link stability detection method based on data transmission order is designed, and a Poisson distribution model is used to construct a node stability detection method based on relay link control. Reference [4] proposes a link transmission stability detection method based on a virus-antibody immune game: the node area in the transmission link is divided evenly, the divided area is optimized by designing a coverage division method that combines distance and residual-energy factors to reduce the link jitter probability, an immune algorithm is introduced to build a virus-antibody immune game mechanism from the antibody characteristics between link nodes, which optimizes the clustering of nodes and links, and the data interaction between nodes and links is improved through virus-antibody training, thereby completing the detection of link transmission stability. Reference [5] proposes a link transmission stability detection method based on an intelligent path-finding mechanism; in view of the limitation that the traditional mechanism selects parameters only, a regional path transmission stability detection method based on an energy-angle stimulation mechanism is designed by comprehensively considering the residual energy of nodes, the transmission scattering angle, and other factors.
Although the above methods can detect transmission link stability, applying them directly to the detection of opportunistic network link status leads to low throughput and long delay. Therefore, a deep-learning-based detection method for opportunistic network link stability is proposed.

2 Opportunistic Network Link Transmission Stability Detection

2.1 Link Node Model
Many factors cause instability in the transmission of opportunistic network links, among which link blocking is the most important. Therefore, this paper combines big data technology to monitor the blocking of network links and judge their transmission stability. A link blocking model is built according to the operating characteristics of opportunistic network links, as shown in Fig. 1. In Fig. 1, L1, L2, ..., Ln represent the input links of the opportunistic network, C1, C2, ..., Cm represent the output links, and M represents an opportunistic network transmission node.


[Figure: input links L1 ... Ln feed an opportunistic network transmission node M, which feeds output links c1 ... cm]
Fig. 1. Opportunistic network link blocking model

Considering the blocking that arises during network link transmission as a random variable, its value is either T or I, indicating respectively that no blocking occurs or that blocking occurs. In general, when an opportunistic network link is blocked, the transmitted data traffic exceeds the link carrying capacity. With the development of information technology, people place higher requirements on the accuracy and efficiency of transmission networks, and from the perspective of both operators and consumers the transmission delay of long-distance communication links is unavoidable. The additive-increase/multiplicative-decrease control strategy used for adaptive barcode circulation in long-distance communication links is applied, and the software and monitoring of transmission delay in long-distance communication links are adjusted. Real-time monitoring of network quality allows unclear image effects to be resolved quickly and encourages upgrading of customer computer equipment and software; quality problems in queue delay and link delay can be found in time, and business personnel can consult system reports to formulate long-distance communication adjustment plans. Because network technology and multimedia technology are continuously integrated, adjusting the transmission delay of long-distance communication benefits video conferencing, remote monitoring, remote education, and network telephony, improving both home communication and working modes. The reliability, security, and efficiency of long-distance communication links are all affected by transmission delay. To avoid adverse effects, the carrying capacity of the opportunistic network must be calculated. The probability that the data volume on an input link exceeds


the link carrying capacity is:

$$G(Rn(M)) = \begin{cases} \dfrac{T(M)}{\max(T)}, & \text{when } T_i(M) \le \max(M) \\ I, & \text{when } T(M) > \max(M) \end{cases} \tag{1}$$
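To make Eq. (1) concrete, the following minimal Python sketch (with illustrative names, and the simplifying assumption that Max(T) and Max(M) denote the same capacity bound) returns the normalized load while the link is within capacity and the blocking marker I otherwise.

```python
# Illustrative sketch of the blocking indicator G(Rn(M)) in Eq. (1).
# Assumption: Max(T) and Max(M) are both taken as the link capacity bound.
def blocking_indicator(traffic, capacity, blocked_marker="I"):
    """Return T(M)/Max(T) while traffic stays within capacity, else the marker I."""
    if traffic <= capacity:
        return traffic / capacity
    return blocked_marker

print(blocking_indicator(300.0, 500.0))  # 0.6  -> no blocking
print(blocking_indicator(620.0, 500.0))  # 'I'  -> blocking occurs
```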

Controlling the transmission stability of the opportunistic network link improves the transmission performance of the network and ensures its normal operation. When controlling the stability of network transmission, the status of the opportunistic network link must be determined and the network resource transmission scheme adjusted adaptively according to the result. The principle of the multiplexing model in an opportunistic network is to determine the information output path according to the stability of the networks of both communication parties and the handling of network link interruptions; the subsequent keys and the multiplexing process follow this principle when selecting paths. Let X and Y be the two communicating terminals; many communication paths exist between them, and N1, N2, ..., Ni+1 are intermediate forwarding stations on a path. If X and Y transmit information over a single path, an attacker can obtain all of their communication by attacking any intermediate forwarding station on that path; if X and Y transmit information over multiple paths, the attacker must attack at least one station on every path to obtain even part of the communication. Since the network in the communication link is distributed across regions, the transmission of network files is random. Because of the limitations of the transmission link, files are actually released along the network transmission path, and the transmission delay consists mainly of queue delay and link delay [6]. When delay occurs during transmission, two kinds of path delay appear: queue delay, which exists at the device outlet of each network fulcrum, and link delay, which exists at the data compression port. Therefore, under serious network congestion, the transmission link in the communication path is delayed, which seriously affects the reliability and synchronization of the network link. The impact of path delay is shown in Fig. 2.
According to the analysis of the influence of path delay and the communication distance, the link data of the opportunistic network is divided into four levels n1-n4. The network link information adjustment hierarchy is shown in Fig. 3. Here, n1 is mainly used for data calculation, with an accuracy of up to 80%; n2 is mainly used for data synchronization, uploading information over a real-time synchronization link; n3 is mainly used for data compression, compressing and transmitting data once a certain amount has accumulated in the database so as to preserve image quality; n4 is mainly used for link detection, checking during the inner loop of the network link whether files are completely preserved.
An opportunistic network model is built in the big data context to ensure data availability. The graph is divided into several data blocks; k data blocks are selected to form data groups, and forged edges are added to the k data blocks in each group to form k small link-node graphs. One module is selected from each group of data blocks to form a subgraph, and edges are added to form the opportunistic network.


[Figure: a transport compressed package passes through the IP network system and data storage, where queue delay and link delay make up the path delay]
Fig. 2. Schematic diagram of the influence of path delay

The specific implementation steps are as follows:
(1) Segmentation: divide the graph randomly into M data blocks according to the link-node parameters, select the sparser part of the graph as the cutting point for data segmentation to ensure data availability, count the numbers, and choose a probability with which to compute the number of edges deleted in each iteration. Although edges are deleted continuously during this process, the nodes of each block differ, so edges can be added back to the data blocks according to the original graph, yielding M block subgraphs.
(2) Grouping: select K data blocks in sequence to form data groups.
(3) Processing the small link-node graphs: for the K data blocks in different data groups, select nodes so that the same degree contains the same number of nodes, and pair them into link nodes according to the degree correspondence.
(4) Adding edges: according to the node correspondence, add fake edges to form the opportunistic network.
The steps for establishing the link node model are: obtain the original graph → obtain M subgraphs → select K subgraphs to form a group → obtain K subgraphs of disconnected link nodes → obtain the link node model. Following these steps yields the link node model.
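A rough Python sketch of steps (1)-(4) is given below; the block count M, the group size K, and the equal-degree pairing rule used to place the fake edges are illustrative assumptions rather than the authors' exact procedure.

```python
# Hedged sketch of the link-node-model construction in steps (1)-(4).
import random
import networkx as nx

def build_link_node_model(graph, m_blocks=4, k_group=2, seed=0):
    rng = random.Random(seed)
    nodes = list(graph.nodes())
    rng.shuffle(nodes)

    # (1) Segmentation: split the nodes into M blocks and keep the induced subgraphs.
    blocks = [nodes[i::m_blocks] for i in range(m_blocks)]
    subgraphs = [graph.subgraph(b).copy() for b in blocks]

    # (2) Grouping: take K subgraphs in sequence as one data group.
    group = subgraphs[:k_group]
    model = nx.compose_all(group)

    # (3)+(4) Pair nodes across the group by degree order and add fake edges between them.
    ordered = [sorted(g.nodes(), key=g.degree) for g in group]
    base = ordered[0]
    for other in ordered[1:]:
        for a, b in zip(base, other):
            model.add_edge(a, b, fake=True)
    return model

demo = build_link_node_model(nx.erdos_renyi_graph(20, 0.2, seed=1))
print(demo.number_of_nodes(), demo.number_of_edges())
```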

[Figure: four levels n1 (data calculation), n2 (data synchronization), n3 (data compression), n4 (link detection)]
Fig. 3. Network link information adjustment hierarchy

Depending on the transmission requirements of opportunistic network links, the hardware used for long-distance communication must be upgraded. Using a dual-frequency network link as the upgraded source for long-distance communication ensures good transmission of two-way signals: information is transmitted to the CPU of the device, and the signal is transmitted to the link.

The different links and different types of data are divided into two paths: one is the queue delay and the other is the link delay. The speed of correcting data is upgraded and the mathematical formulas are improved according to the different data, so as to achieve smooth and efficient network transmission [7].

2.2 Network Transmission Sending Barcode Status Detection
First, according to the link transmission data, the ground-point coordinates of the network links in the area are obtained:

$$\begin{cases} x = D x_{s1} + n_1 x_1 \\ y = y_{s1} + (n_1 y_1 + d + n_2 y_2)/\,2G(Rn(M)) \\ z = D z_{s1} + n_1 z_1 \end{cases} \tag{2}$$

Here x, y, and z represent the horizontal-axis, vertical-axis, and spatial coordinates of the network-link ground point in the area; xs1, ys1, and zs1 represent the position coordinates of the front and rear cameras; x1, y1, z1, and n1 represent the image-space auxiliary coordinates of the front and rear


cameras, and D represents the photographic baseline component. After the ground-point coordinates of the network link in the area are determined, the result is also affected by the angle between the optical axes of the front and rear cameras and the forward-looking direction. Assuming that this included angle is 25°, the positioning accuracy of the point probe at the front end of the geometric link is calculated after other relevant factors are eliminated [8]. Every component of the link network may leak private information and therefore needs protection. The sensitive information to be protected covers four aspects: data attribute values, existence, re-identification, and graph structure. In the big data context, sensitive attribute values are usually anonymized during transmission, but security risks remain, so all four kinds of information must be protected. In the process of controlling opportunistic network stability, the state of the network link is judged, the transmission time of network data packets and the relative arrival times of the packets are counted, and the congestion probability of the current data is predicted; the state model then controls the transmission stability of the opportunistic network link. The specific steps are as follows. Assume that R represents the moment at which the receiving end of the s-th packet arrives and δ represents the time interval between two consecutive packets at the receiver. The network transmission capability judgment formula is:

$$\ast = \frac{s \times R^{\ast} \cdot F(n)}{\delta^{\ast}(x + y + z)} \tag{3}$$

where F(n) represents the normal link state and F(i) represents the transmission capacity of the current network packet. A threshold is then set to judge the different packet-loss situations of the network: assuming that s represents the network packet transmission time and ϕ represents the packet loss caused by a sudden transmission error, the congestion probability of the current data is predicted by Eq. (4):

$$P = \ast \cdot \frac{S \cdot \delta}{Ack} \oplus mak(i, t) \tag{4}$$

In the formula, wi represents the ratio of the total throughput to the congestion packet-loss rate, Ack represents the interval of the latest packet-loss rate, and mak(i, t) represents the transmission delay difference between adjacent data packets. The flow at any link exit E in the opportunistic network equals the flow entering the control node and the flow leaving the control node. If there are m nodes connected to the central node s whose network flow reaches node s, and n nodes whose network flow leaves node s, then:

$$\sum_{i=1}^{j} h_m \cdot SX_i^h = P \sum_{i=1}^{j} h_n \cdot NX_i^h \tag{5}$$

The variable for the total number of time slots is defined as:

$$X_{ij} = \sum_{t \in T} \tag{6}$$


In the formula, $X_{ij}$ is the total number of time slots allocated to link $e_i$ in the scheduling period $T$; this variable satisfies

$$0 \le X_{ij} \le T \tag{7}$$

Because the bearing capacity of the links in the opportunistic network is limited, the link throughput cannot exceed the link capacity, that is:

$$\sum_{i=1}^{N} h_i \cdot X_i^h \le \sum_{i=1}^{N} C \cdot \delta \tag{8}$$

In the formula, $C$ is the capacity of link $e_i$ and $\delta$ is the link utilization.
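The capacity constraint of Eq. (8) can be checked mechanically. The short sketch below is only an illustration with made-up variable names, treating each h_i·X_i^h as the traffic allocated to link e_i and C·δ as the usable capacity per link.

```python
# Illustrative check of the link-capacity constraint in Eq. (8).
def capacity_constraint_holds(allocated, capacity, utilization):
    """allocated[i] ~ h_i * X_i^h for link e_i; capacity and utilization ~ C and delta."""
    usable_total = capacity * utilization * len(allocated)  # right-hand side of Eq. (8)
    return sum(allocated) <= usable_total

print(capacity_constraint_holds([120.0, 90.0, 150.0], capacity=200.0, utilization=0.8))  # True
```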

With high scheduling efficiency and low energy consumption as the optimization objectives, and combining all the above constraints, the following optimization model can be established:

$$\begin{cases} \sum\limits_{i=1}^{j} h_m \cdot SX_i^h = \sum\limits_{i=1}^{j} h_n \cdot NX_i^h \\ M = T \cdot y, \quad 0 \le X_{ij} \le T \\ \sum\limits_{i=1}^{N} h_i \cdot X_i^h \le \sum\limits_{i=1}^{N} C \cdot \delta \end{cases} \tag{9}$$

In the formula, $y$ is the average throughput. Transmission on the opportunistic network link generally uses the control strategy of the signaling network link to adaptively circulate the transmitted barcode. The control-strategy algorithm is driven mainly by the measured network signal strength: if the network is in a good state, the circulation rate of barcode data is increased slowly until network usage approaches saturation; if the network is in a blocked state, the circulation rate of barcode data is decreased rapidly and the barcode flow is throttled to minimize network usage until the network becomes idle. The specific adjustment strategy is: in the good state, an addition factor m represents the increase in barcode transmission; in the blocked state, a multiplication factor n represents the decrease. Expressed mathematically, with a(x) the current barcode rate sent by network transmission:

$$a(x+1) = \max\{n \cdot a(x),\; a_{\min}\} \tag{10}$$

$$a(x+1) = \min\{a(x) + m,\; a_{\max}\} \tag{11}$$
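Equations (10) and (11) describe an additive-increase/multiplicative-decrease adjustment of the barcode circulation rate. The following Python sketch is one plausible reading of that rule; the increase step m, decrease factor n, and the rate bounds are chosen arbitrarily for illustration.

```python
# Sketch of the additive-increase / multiplicative-decrease rate update in Eqs. (10)-(11).
def next_rate(rate, network_good, m=10.0, n=0.5, a_min=10.0, a_max=500.0):
    if network_good:
        return min(rate + m, a_max)   # Eq. (11): additive increase, capped at a_max
    return max(n * rate, a_min)       # Eq. (10): multiplicative decrease, floored at a_min

r = 100.0
for good in [True, True, False, True]:
    r = next_rate(r, good)
    print(r)   # 110.0, 120.0, 60.0, 70.0
```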

In general, the maximum and minimum values of the code stream are set in the formula for analysis and comparison. At the maximum barcode stream rate, the picture transmitted over the network is the clearest, and exceeding this maximum brings no further improvement in image clarity, so rates above the maximum do not produce a clearer image;


at the minimum barcode stream rate, the picture transmitted over the network has the lowest quality and the lowest resolution, and below this minimum the image can no longer be viewed directly, so transmission on the communication path is terminated when the rate falls below the minimum [9]. The barcode used during transmission can thus be updated according to changes in the real-time status of the network.

2.3 Link Backpack Box Simulation
The specific steps for assigning read, write, and service permission values for transmission security are as follows:
(1) An unauthorized network user obtains no permission, so the harm to the host is zero, i.e., Null = 0;
(2) For an unauthorized user to gain read, write, and service rights, the host must be irrecoverably paralyzed; the harm to the host is then greatest, i.e., readroot = 1, writeroot = 1, service = 0.5;
(3) Quantify the remaining permission values: xa−b indicates the degree of harm to the host when an unauthorized network user obtains this permission, consistent with the readroot value of the read permission, with xa−b ∈ (0, 1);
(4) Calculate the average of the different permission values;
(5) Round the calculated average to an integer for use in subsequent designs.
Considering the priority of information and data, different data transmission queues are designed in the link network. Suppose that during transmission a data endpoint A obtains the right to use the transmission channel and can freely send information to data endpoint B. When endpoint B receives the information sent by endpoint A, it calculates a forwarding value based on its own remaining energy and on the distances from A and B to node C in the network. The forwarding value is calculated as:

$$S_B = \frac{d_A - d_B}{D} + \frac{e_B - e_{nb}}{e_{nb}} \tag{12}$$

In the formula: dA represents the distance from endpoint A of the link network data to the sink node C; dB represents the distance from endpoint B to the sink node C; D is the distance from endpoint A to endpoint B; eB is the energy of endpoint B when no data is being transmitted; enb is the energy of the nodes around endpoint B. All data that satisfies the formula is backup data, and data that does not satisfy it enters sleep after transmission. Selecting backup data improves the reliability of sensitive data transmission, keeps sensitive data safe on the link during transmission, and prevents it from being stolen. Given the routing requirements of the opportunistic network, an optimal scheduling scheme for data information interaction must be found and the maximum throughput that the link can


withstand must be calculated, so that the scheduling scheme meets the transmission needs of all nodes in the opportunistic network topology and the shortest scheduling period is obtained. The link scheduling optimization problem of the multi-channel opportunistic network can then be expressed as:

$$\min = \frac{S_B - T}{\sum_i X_i^h} \tag{13}$$

In the formula, $T$ is the scheduling period.

$$X_{ij}^T = \begin{cases} 1, \\ 0, \end{cases} \tag{14}$$

In the formula, $X_{ij}^T$ is the total number of time slots allocated to link $e_i$ in the scheduling period. The goal of multi-channel opportunistic network link scheduling optimization is to maximize the throughput of each opportunistic network link, improve link scheduling efficiency, and reduce scheduling energy consumption. In the process of controlling the transmission stability of the opportunistic network link, based on the obtained packet-loss-rate judgment threshold, the weighted average rate sampling of each congestion period is dynamically adjusted according to the type of packet loss and the duration of the congestion period, and the parameters are fed back to the sender, which effectively controls the transmission stability of the opportunistic network link. The specific steps are as follows. Assume a stable network state in which the network is in the congestion-avoidance phase; W represents the maximum value of the network window in the congestion period, the duration of the congestion period represented by k does not change, p represents the weighted average number of congestion periods, and R represents the sum of the duration of the congestion period and the packet-loss event. The number of packets sent by the network source during the congestion period is calculated as:

$$T = \sum_{R=0}^{W/2} \left( \frac{W}{2} + k \cdot \frac{P}{R} \cdot \frac{8(p+R)}{3W} \right) - S_B \tag{15}$$

The change of the opportunistic network environment can be inferred from the change in the duration of the congestion period. The network receiver maintains the durations of the historical n congestion periods and compares the duration of the current congestion period with the weighted average duration of those n periods. If the current congestion period has not yet ended and its duration is already 1.5 times the weighted average duration of the n historical congestion periods, it can be judged that the degree of network congestion has eased, with the condition n = 8 satisfied. When the duration reaches 2 times the weighted average duration of the n historical congestion periods, the degree of network congestion is easing continuously [10]. On the contrary, at the end of the current congestion period, if


the duration is less than 0.5 times the weighted average of the durations of the n historical periods, it is judged that the congestion degree has increased, again with the condition n = 8 satisfied.
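The comparison of the current congestion-period duration against the average of the last n = 8 recorded periods, with the 1.5x, 2x, and 0.5x thresholds described above, can be sketched as follows; the use of a plain (unweighted) average stands in for the weighted mean, which the paper does not spell out.

```python
# Sketch of the congestion-period trend test with n = 8 historical periods.
from collections import deque

HISTORY = deque(maxlen=8)  # durations of the last n congestion periods

def congestion_trend(current_duration):
    """Classify the current congestion period against the historical average duration."""
    if not HISTORY:
        return "no history"
    avg = sum(HISTORY) / len(HISTORY)  # assumption: plain average as the weighted mean
    if current_duration > 2.0 * avg:
        return "congestion easing strongly"
    if current_duration > 1.5 * avg:
        return "congestion easing"
    if current_duration < 0.5 * avg:
        return "congestion worsening"
    return "no clear change"

for d in [1.0, 1.1, 0.9, 1.0]:
    HISTORY.append(d)
print(congestion_trend(2.1))  # easing strongly relative to the ~1.0 s history
```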

According to the calculation results of the positioning accuracy of the front-end point probes of the geometric link, the operating requirements of the opportunistic network link are grasped, and the link simulation backpack is designed accordingly. The block diagram of this device is shown in Fig. 4.

[Figure: an optical coupling splitter and optical couplers route the optical signal of the traditional channel and the optical signal of the transmission channel (1310 and 1500 interference)]
Fig. 4. Link simulation backpack overall structure

2.4 Building a Multi-Layer Deep Learning Detection Model
Deep learning is an interdisciplinary field spanning pattern recognition, neural networks, signal processing, artificial intelligence, and other research areas, and it builds on machine learning. Deep learning uses multi-layer, non-linear transformations to extract features from data and abstract the data, and it can model complex relationships among data. The main idea of deep learning is "layer-by-layer initialization". Assuming a system Z with hierarchical characteristics Z1, Z2, ..., Zn, whose input and output are I and O respectively, the learning process of deep learning is described by:

$$I \Rightarrow Z_1 \Rightarrow Z_2 \Rightarrow \cdots \Rightarrow Z_n \Rightarrow O \tag{16}$$

By adjusting the parameters, the input value I can be processed to obtain an output O as close to I as possible, so that the hierarchical features of the input I are obtained and accurate data detection is achieved. In this study, the network-link similarity index is used as the basic sample, and the network data is trained through the constructed detection model to detect link transmission stability. The target network link prediction weight matrix is defined as:

$$Q = \sum_{i=1}^{n} f^{\,n-i} \cdot k \tag{17}$$


In the formula: f is an adjustable parameter within its range; n represents the total number of network snapshots; k represents the similarity index matrix obtained from the hierarchical features Z1, Z2, ..., Zn. The weights are sorted according to the matrix nodes, and the sample space of the prediction model is generated.
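Equation (17) accumulates the per-snapshot similarity matrices with an exponentially decaying weight f. A small numpy sketch of that accumulation is shown below; the matrix shapes, the value of f, and the treatment of k as one similarity matrix per snapshot are illustrative assumptions.

```python
# Sketch of the prediction weight matrix Q = sum_i f^(n-i) * k from Eq. (17).
import numpy as np

def prediction_weight_matrix(similarity_matrices, f=0.8):
    n = len(similarity_matrices)
    q = np.zeros_like(similarity_matrices[0], dtype=float)
    for i, k in enumerate(similarity_matrices, start=1):
        q += (f ** (n - i)) * k   # older snapshots receive smaller weights
    return q

snapshots = [np.random.rand(5, 5) for _ in range(4)]
print(prediction_weight_matrix(snapshots).shape)  # (5, 5)
```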

Considering the stacking of multi-layer conditional restricted Boltzmann machines, a multi-layer conditional deep belief network model for classification is designed, as shown in Fig. 5.

[Figure: hierarchical features Z1, Z2, ..., Zn pass through the deep learning process and are classified into the outputs Q(y=0|z) and Q(y=1|z)]
Fig. 5. Deep learning multi-layer classification prediction model

To improve the feature-extraction effect of the conditional deep belief network model, the parameters of the network must be set, such as the number of network layers, the number of hidden-layer nodes, and the number of past time series used in the model. Unlike the traditional method, and taking the computational time complexity into account, the model in this paper stacks no more than 5 layers. Like the number of network layers, the number of hidden-layer nodes is one of the important conditions affecting the classification and detection performance of the model. The number of hidden-layer nodes is selected with the following formula:

$$P = \sqrt{a + b} + \gamma \tag{18}$$

In the formula: a represents the dimension of the input value I; b represents the number of nodes of the output O; γ represents an integer in the range [1, 10]. The deep learning algorithm determines the number of hidden-layer nodes for the model according to this formula. Finally, considering the relationship between the gradient of a single layer and the learning rate, the classification prediction model is obtained by training with the learning rate. The learning-rate formula is:

$$q_c(t) = q(t)\left(1 + \log\left(1 + \frac{1}{\left\| T_c^{(t)} \right\|_2}\right)\right) \tag{19}$$


In the formula: t represents the number of iterations; c represents the number of network layers; q(t) represents the global learning rate; Tc(t) represents the gradient of the current iteration of the current layer. When the gradient of the current layer is large, the learning rate is almost the same as the global learning rate. As the gradient becomes smaller and drops toward the low-curvature region, the influence of the gradient value on the learning rate gradually increases, which accelerates network training so that the model reaches a stable state faster and link transmission stability detection is realized. Through the above calculations, the information structure level of the network link is adjusted under the influence of path delay and the link model is built; according to the ground coordinates of the link, the packets in the congestion period are calculated under the link carrying capacity; finally, the deep learning algorithm is used to extract the hierarchical features of the network and build the classification prediction model, which yields the link transmission stability detection results.
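The two parameter rules of Eqs. (18) and (19) — the hidden-node count and the per-layer learning rate scaled by the gradient norm — can be sketched in a few lines of Python; the numeric inputs below are placeholders, not values from the paper.

```python
# Sketch of the hidden-node rule (Eq. 18) and the layer-wise learning rate (Eq. 19).
import math

def hidden_nodes(input_dim, output_dim, gamma=5):
    """P = sqrt(a + b) + gamma, with gamma an integer in [1, 10]."""
    return int(round(math.sqrt(input_dim + output_dim))) + gamma

def layer_learning_rate(global_rate, gradient):
    """q_c(t) = q(t) * (1 + log(1 + 1/||T_c(t)||_2))."""
    norm = math.sqrt(sum(g * g for g in gradient))
    return global_rate * (1.0 + math.log(1.0 + 1.0 / norm))

print(hidden_nodes(64, 2))                      # e.g. 13 hidden nodes
print(layer_learning_rate(0.01, [0.2, -0.1]))   # rate grows as the gradient shrinks
```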

3 Analysis of Experimental Results
To verify the effect of the proposed deep-learning-based opportunistic network link stability detection method in practical applications, the opportunistic network of a certain region is selected as the experimental object, the movement speed of the sensor nodes is set to 5 m/s, 10 m/s, 15 m/s, 20 m/s, and 25 m/s, and the same content is transmitted over the opportunistic network link. The proposed detection method, the method of reference [3], and the method of reference [4] are used to detect the stability of the link, and the completeness of the link detection of the three methods is compared. To facilitate verification, the hardware and software environment used in the experiment is set up as shown in Table 1.

Table 1. Experimental condition settings

Name | Describe | Remarks
Hardware equipment | CPU | Pentium(R) Dual-Core [email protected] GHz
Hardware equipment | Memory | 5 GB
Hardware equipment | Hard disk | 35 GB
Software network link | Windows 8 | -
Programming environment | Visual Studio 2016 | -
Code generation tool | Code Warrior IDE | -
Integrated development environment | Visual Studio 2016 | -


The transmission stability control experiments are carried out using the method in this paper, the method of reference [3], and the method of reference [4]. Under the same data volume, the transmission times of the three methods are compared, and the results are shown in Fig. 6.

[Figure: transmission time (s) versus number of experiments (1-5) for the method in this paper and the methods of references [3] and [4]]
Fig. 6. Time-consuming comparison results of transmission

As Fig. 6 shows, the transmission times of the two comparison methods fluctuate as the number of experiments increases, but they are always higher than that of the method in this paper. The transmission time of the method in this paper rises only slightly as the number of experiments increases, and the final transmission time does not exceed 20 s, so the transmission rate of the designed method is faster. The method in this paper has a shorter transmission time because it carries out link transmission directly under the constraint of the link bearing capacity and can transmit as much data as possible up to the maximum capacity, which reduces the data transmission time. Simulation experiments can be carried out on the network topology. In the topology, S2-D2, S3-D3, and S4-D4 are data streams sent in the UDP background. Under the same topology, the background streams are kept unchanged and the bit error rate is set to 0. Taking the throughput as the experimental index, the method of reference [3], the method of reference [4], and the method in this paper are compared. The experimental results are shown in Fig. 7. As the figure shows, when the bit error rate is 0, the average throughput of this method is greater than that of the two comparison methods, and the throughput of this method is relatively stable while the throughput of the two comparison methods fluctuates strongly, which shows that the opportunistic network transmission link detected by this method can maintain


[Figure: average throughput (kB) versus time (s) for the method in this paper and the methods of references [3] and [4]]
Fig. 7. Average throughput comparison results and analysis

a stable throughput level, and the transmission performance of the link is relatively reliable. This is because the method in this paper transmits data under the constraint of the maximum link capacity and can therefore transmit more data per unit time, which improves the data-transmission throughput. The round-trip delays of the method of reference [3], the method of reference [4], and the method in this paper are then compared experimentally. The experimental results are shown in Table 2.

Table 2. Round-trip delay comparison results and analysis

Time/s | Reference [3] method (ms) | Reference [4] method (ms) | Method in this paper (ms)
10 | 0 | 0 | 0
20 | 25 | 30 | 20
30 | 40 | 35 | 30
40 | 60 | 50 | 45
50 | 65 | 60 | 50
60 | 75 | 70 | 55
70 | 75 | 70 | 55

To clearly see the change in round-trip delay, Table 2 compares the network stability of this method with that of the two traditional methods. As the transmission time increases, there is a significant difference in the round-trip delays of the three methods, but the round-trip delay of this method is always lower than that of the two comparison methods. This is because the method in this paper calculates all the data sent by the network sources during the cycle and adopts a fast transmission strategy for these data, which reduces the round-trip delay.


The experiment is completed according to the above steps, the data detected by each method is recorded, and the data generated during the experiment is compiled into the comparison shown in Table 3.

Table 3. Overall transmission stability test results

Moving speed of sensor node (m/s) | Actual value (MB) | Method in this paper (MB) | Reference [3] method (MB) | Reference [4] method (MB)
5 | 450 | 452 | 321 | 411
10 | 450 | 449 | 312 | 409
15 | 450 | 451 | 330 | 380
20 | 450 | 448 | 324 | 391
25 | 450 | 450 | 318 | 372

According to the data in Table 3, the amount of link data detected by the method in this paper is significantly higher than that of the two comparison methods, and it differs from the actual amount of transmitted link data by only 2-3 MB, which meets the requirements of link transmission data. The traditional detection methods detect a smaller amount of data in the opportunistic network link, so their detection results may have low accuracy. The experiments therefore show that the proposed deep-learning-based opportunistic network link stability detection method effectively improves the completeness of link stability detection in practical applications, and its results are more consistent with the actual operation of the link. The method achieves high transmission stability because it performs multiple rounds of iterative training with the deep learning algorithm and trains the network according to the learning rate, which improves the stability of network transmission.

4 Conclusion
Through research on the detection of opportunistic network link stability, a new detection method combined with a deep learning algorithm is proposed, and its practical application effect is demonstrated experimentally. The method is not affected by other related factors in the network environment, which ensures the accuracy of the detection results. Applying the method to actual opportunistic network supervision can provide a basis for judging the stable operation of the network.

References

1. Chen, J., Gao, Y., Wu, Z., et al.: Homodyne coherent microwave photonic transmission link with 100.8 km high dynamic range. Acta Optica Sinica 42(5), 17–23 (2022)


2. Zhao, S., Wei, W., Xiaorong, Z., et al.: Research on concurrent transmission control of heterogeneous wireless links based on adaptive network coding. J. Electron. Inf. Technol. 44(8), 2777–2784 (2022)
3. Tang, J., Tian, B., Chen, H.: Data transmission stability scheme of mobile internet of things based on node link evaluation model. J. Electron. Measur. Instrum. 34(10), 194–201 (2020)
4. Xu, F., Wang, J.: Link stabilization algorithm for WSN based on virus-antibody immune game. Comput. Eng. 46(4), 206–212+235 (2020)
5. Cheng, L.: WSN transmission path stabilization algorithm based on intelligent routing mechanism. Comput. Measur. Control 30(1), 215–220 (2022)
6. Hao, L., Xiaoli, H., Qian, H., et al.: Research and practice on fault location technology based on open optical transport network. Telecommun. Sci. 38(7), 57–62 (2022)
7. Zhang, M., Zhang, J.: Design of wireless communication link data storage system with distributed fusion. Microcomput. Appl. 37(5), 106–109+112 (2021)
8. Xi, L., Ying, N., Ru-heng, X., et al.: DDPG optimization based on dynamic inverse of aircraft attitude control. Comput. Simul. 37(7), 37–43 (2020)
9. Wei-long, W., Yong-jun, L., Shang-hong, Z., et al.: Power allocation based on two-stage pareto optimization in satellite downlink. Acta Electron. Sin. 49(6), 1101–1107 (2021)
10. Qian, C., Zheng, K., Liu, X.: Delay sensitive adaptive frame aggregation scheme for uplink transmission in IEEE 802.11ax. J. Chin. Comput. Syst. 43(7), 1529–1534 (2022)

Intelligent Mining Method of New Media Art Image Features Based on Multi-scale Rule Set

Ya Xu1(B) and Yanmei Sun2

1 Modern College of Northwest University, Xi’an 710111, China
[email protected]
2 Yantai Vocational College, Yantai 264001, China

Abstract. Feature mining of new media art images has long suffered from problems such as low coverage, which lead to incomplete mined data. This paper therefore designs an intelligent feature-mining method for new media art images based on multi-scale rule sets. After the noise in the new media art image is removed by a variable-window filtering algorithm based on local pixel distribution rules, the image is segmented, and a multi-scale association-rule mining algorithm is designed to mine the image features. The performance of the designed method is tested, and the results show that it achieves high coverage, high accuracy, and a low average support-estimation error.

Keywords: Multi-Scale Rule Set · New Media Art Image · Filtering Control Strategy · Feature Intelligent Mining

1 Introduction
In a broad sense, new media art refers to art that, since the 20th century, has used media other than traditional media such as painting and sculpture, including photography, film, neon tubes, and electro-mechanical installations. With the invention of the Apple computer and the progress and popularization of computer programming technology [1], computer vision, network technology, and interactive technology have gradually entered artistic practice, giving new media large-scale artistic applications in computer image modification, digital video and editing, video synthesis, Flash animation, and other technologies, and bringing new media art a revolutionary breakthrough in language technology and image aesthetics. In the late 20th century, new media art in the narrow sense actually refers to video art; the word "video" covers television, video recording, and video images [2, 3]. With this mixed usage, feature mining of new media art images becomes more difficult. Against this background, an intelligent feature-mining method for new media art images is studied.
Although many researchers study image data mining technology, research on image feature mining within overall image mining is still at an early stage. Mu Xiaodong, Bai Kun, You Xuanang, and other scholars [4] proposed a remote sensing image feature extraction and classification


method based on comparative learning method, which can fully mine high-level semantic features in remote sensing images without using data labels. Xu Yangyang, Gu Xin, Wang Annie and other scholars [5] designed an algorithm model for data fusion mining of multiple image sources in the medical field and applied it to the teaching of imaging in colleges and universities. The above methods have some problems in application, so an intelligent image feature mining method based on multi-scale rule set is proposed. On the basis of denoising and segmentation of new media art images, multi-scale rule sets are used to realize feature mining of new media art images. Experimental results show that the proposed method has high mining performance.

2 Intelligent Mining of New Media Art Image Features

2.1 New Media Art Image Denoising
Noise detection is divided into two steps. First, distinguish whether a pixel is non-noise or possibly contaminated by noise; processing only the noise points greatly reduces the execution time of the filtering algorithm. Non-noise detection at this stage determines whether a pixel is likely to be contaminated by impulse noise by analyzing its gray-value characteristics against the global distribution of image gray values. In an image containing impulse salt-and-pepper noise, the gray value of the noise signal appears as a maximum or minimum, i.e., when formula (1) holds:

$$\xi \approx \rho_{\max} \quad \text{or} \quad \xi \approx \rho_{\min} \tag{1}$$

In formula (1), ξ represents the gray value of the noise signal, ρmax is the maximum gray value of the image, and ρmin is the minimum. If the gray value of a pixel is close to the maximum or minimum of the image, the current pixel is most likely a noise point; otherwise it is a signal point. Then, among the pixels that may be polluted by noise, noise signals and non-noise signals are further distinguished. A pixel at an extreme value cannot immediately be declared a noise point; it may also be a detail or edge point of the image, which is important for the integrity of the image information. This stage mainly analyzes whether the information around a suspected-noise pixel is correlated [6]. If there is no correlation with the surrounding information, the pixel is very likely a noise signal; if a large patch of suspected noise surrounds it, it may be banded detail information. General filter noise detection and filtering [7, 8] start from the second row and second column of the image and cannot process the pixels in the surrounding rows and columns. As the above analysis shows, this algorithm depends to some extent on the elements to the left of and above the window, and to reduce the propagation of noise signals during filtering, the rows and columns around the edges of the image must be processed. Therefore, the algorithm performs virtual edge processing before image processing, that is, it adjusts the number of rows as in formula (2):

$$\alpha' = \alpha + 2 \tag{2}$$


In formula (2), α′ is the number of rows after adjustment and α is the original number of rows. The number of columns is adjusted as in formula (3):

$$\beta' = \beta + 2 \tag{3}$$

In formula (3), β′ is the number of columns after adjustment and β is the original number of columns. Specifically, the average value of the four vertices of the original image is used as the pixel value of the added rows and columns. A small filter window (such as 3 × 3) removes the noise signal well and damages image details little when the noise density is small, but is insufficient in a high-noise-density environment; a large filter window (such as 7 × 7 or 9 × 9) has good denoising ability but cannot protect image details, causing blur and loss of detail. To balance the contradiction between detail protection and denoising ability, the adaptive median filtering algorithm is improved on the basis of the proposed filtering method based on local pixel-related information extraction rules, and the filter window size is selected adaptively: a 3 × 3 window is used when the noise density is small, and a 5 × 5 window when it is large. The filter window traverses the image, and the noise density Aj is defined as formula (4):

$$A_j = \frac{t_j}{m \times m} \times 100\% \tag{4}$$

In formula (4), tj is the number of noise points in the current window and m × m is the filter window size, where m is odd and initially set to 3.
(1) When Aj < 50%: keep the filtering window unchanged and perform the median filtering operation in the window of size m × m.
(2) When Aj ≥ 50%: increase the size of the filtering window to m = m + 1 and then perform the sorting and median operation.
(3) To protect image details and improve denoising ability, 3 ≤ m ≤ 5 in actual use, and m is re-initialized after each operation.
(4) When m = 5 and still Aj ≥ 50%, keep and output the original value of the center pixel, and proceed to the next step.
(5) To further improve the denoising ability of the filter at high noise density, all sorting and median operations are carried out only over the gray values of non-noise points in the current window, that is, the median is taken among all gray values f(x, y) ∈ S.
Based on the above analysis and discussion, in order to protect more image details and improve the filtering ability in high-noise environments, the flow of the variable-window filtering algorithm based on local pixel distribution rules is as follows:
Step 1: Input the image. The input image polluted by salt-and-pepper noise has size X × Y, and the gray value of a pixel is recorded as f(x, y).
Step 2: Edge processing. The average of the pixel values of the four corners of the image is used to add virtual edges to the image.
Step 3: Noise detection. A 3 × 3 window traverses the image, and all points are detected and judged according to the noise detection strategy; pixels are classified into the noise set N and the non-noise set S. All points with f(x, y) ∈ S are output without processing and keep their original values; the following steps operate on points with f(x, y) ∈ N.
Step 4: Rule extraction. Define the decision rules. If the pixel information near the noise point has a certain correlation, replace the noise signal according to the decision rules; if not, the correlation is not significant and the algorithm proceeds to the next step.
Step 5: Adaptive variable-window median filtering. Select an appropriate filtering window according to the density of noise points in the window, sort the gray values of the noise points, and take the median.
Step 6: Output the repaired image. Repeat Steps 3 to 5 until the image noise is filtered out, and output the repaired image.
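A compressed Python/numpy sketch of the variable-window idea — compute the local noise density of Eq. (4), enlarge the window from 3 × 3 to 5 × 5 when the density reaches 50%, and take the median over the non-noise pixels — is shown below. The noise test (a pixel equal to the global extremes), the edge padding, and all names are simplifications of the full procedure above rather than a faithful implementation.

```python
# Simplified sketch of the variable-window median filter (Steps 1-6 and Eq. (4)).
import numpy as np

def filter_pixel(img, x, y):
    noise = (img == img.max()) | (img == img.min())   # suspected impulse noise
    if not noise[x, y]:
        return img[x, y]                              # non-noise pixels pass through
    for half in (1, 2):                               # 3x3 window first, then 5x5
        win = img[x - half:x + half + 1, y - half:y + half + 1]
        w_noise = noise[x - half:x + half + 1, y - half:y + half + 1]
        density = w_noise.mean()                      # Eq. (4): t_j / (m*m)
        if density < 0.5 or half == 2:
            clean = win[~w_noise]                     # median over non-noise values only
            return np.median(clean) if clean.size else img[x, y]

img = np.pad(np.random.randint(0, 256, (8, 8)), 2, mode="edge").astype(float)
print(filter_pixel(img, 4, 4))
```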


2.2 New Media Art Image Segmentation
An image segmentation algorithm based on the DC-Unet network is designed to segment new media art images. The DC-Unet image segmentation model uses a self-encoder structure divided into four stages: encoding, feature fusion, decoding, and pixel classification. DC-Unet was originally inspired by the characteristics of dilated (hole) convolution. The output of image segmentation is an end-to-end pixel-level classification label, so the output and the input must have the same size. After multiple convolution and pooling operations in the encoding stage, the output becomes smaller and smaller; therefore, in the decoding stage, up-sampling must be used to restore the original input size. Up-sampling generally uses the transposed convolution operation, and the earlier pooling operations let each pixel see information from a large receptive field. Therefore, there are two keys in the classical fully convolutional neural network model for image segmentation: one is to reduce the image size by pooling, the other is to restore the image size by up-sampling. Semantic information is extracted while the size changes, but the scaling inevitably loses a lot of information, so a reasonable assumption can be made: if an operation can be designed that keeps the image size while still providing a large receptive field and more information without pooling, a network using this operation will show a better effect. VGG-like convolutional blocks are used in the encoding phase, followed by a series of dilated convolutions. Each dilated convolution has a different dilation size and collects information from blocks of different sizes in the image, and superimposing them yields richer combined information, which benefits the subsequent training. Since image segmentation is a model in which the input size equals the output size, up-sampling is carried out three times using transposed convolution. At the same time, the input of each transposed-convolution layer is combined with the output of the down-sampled corresponding position in the


network. This operation is called a skip connection. Through skip connections, the low-level features extracted in the early encoding stage are combined with the high-level features extracted in the decoding stage to form a richer description of the features, which helps the later pixel classification. The final classification does not use a traditional fully connected layer but a convolutional layer: first, this reduces the network parameters; second, it supports image input of any size; third, it achieves the same effect as a fully connected layer. The final loss function uses the mainstream softmax cross-entropy, which performs well in multi-class classification. The softmax function is shown in formula (5):

$$\operatorname{softmax}(M) = \frac{e^{M_l}}{\sum_{i \in n} e^{M_i}} \tag{5}$$

In formula (5), M is the input data, Ml is the l-th input, and Mi is the i-th input.
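The softmax used in the pixel-classification loss (Eq. (5)) can be written in a few lines; the shift by the maximum value in the sketch below is a standard numerical-stability detail, not something specified in the paper.

```python
# Numerically stable softmax corresponding to Eq. (5).
import numpy as np

def softmax(m):
    shifted = m - np.max(m)          # stability shift; does not change the result
    exp = np.exp(shifted)
    return exp / exp.sum()

print(softmax(np.array([2.0, 1.0, 0.1])))  # probabilities summing to 1
```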

2.3 Feature Mining
Based on the multi-scale rule set, a multi-scale association-rule mining algorithm is designed to mine the features of new media art images. The multi-scale data mining process is divided into five stages: data preprocessing, data multi-scaling, the multi-scale data mining core, multi-scale representation of knowledge, and evaluation and use of knowledge. The obvious difference from the traditional data mining process is the added data multi-scaling step, which conceptually transforms the preprocessed data into representations at multiple scales and thus prepares for multi-scale data mining and analysis. The multi-scale data mining core then carries out the mining and analysis according to the different mining tasks and multi-scale correlation theory. The multi-scale data mining process is introduced below in stages.
Phase 1: Data preprocessing. The data preprocessing process is shown in Fig. 1.


Fig. 1. Data preprocessing process

After the target data scale of the data set has been defined, appropriate multi-scale data mining methods can be selected according to the mining task requirements.

Phase 3: Multi-scale data mining kernel. A scale-up association rule mining algorithm, SU-ARMA, is designed. It uses the association rules contained in the descendant-scale data sets to derive the association rules implied in the ancestor-scale data set, instead of mining the ancestor-scale data set directly [10]. The key of the algorithm is to use the Jaccard similarity coefficient to describe the similarity between the frequent itemsets contained in each descendant-scale data set, to use this similarity to approximate the similarity between the descendant-scale data sets themselves, and thereby to estimate the support of itemsets in the ancestor-scale data set, finally obtaining the frequent itemsets implied in the ancestor-scale data set.


The Jaccard similarity coefficient is shown in Formula (6):

J = |Φ ∩ θ| / |Φ ∪ θ|    (6)

In Formula (6), Φ refers to the data set of each descendant scale and θ refers to the set of frequent itemsets. The scale-down association rule mining algorithm mainly uses the basic idea of the inverse distance weighting method from spatial interpolation, in which domain knowledge describing the relationship between the upper- and lower-scale data sets plays a crucial role. Interpolation is an important method in numerical analysis that aims to establish a complete mathematical model of the research object, and it is widely used in geography and in graphics and image processing. The principle of image interpolation in graphics and image processing is similar to that of spatial interpolation in geography: adjacent pixels with known gray values are used to calculate pixels with unknown gray values, so as to improve image resolution and quality. The scale-down association rule mining algorithm draws on the inverse distance weighting method of spatial interpolation, which assigns weights by powers of the reciprocal distance. Let Y be a series of observation points in the region, as shown in Formula (7):

Y = {y_1, y_2, ..., y_m}    (7)

In Formula (7), y_m refers to the m-th observation point. Let Q be the corresponding set of observations, as shown in Formula (8):

Q = {q(y_1), q(y_2), ..., q(y_m)}    (8)

In Formula (8), q(y_m) refers to the m-th observation value. The value q(y_0) at the point y_0 to be interpolated can be estimated by a linear combination, as shown in Formula (9):

q(y_0) = Σ_{j=1}^{m} ν_j q(y_j)    (9)

In Formula (9), ν_j represents the weight of the j-th observation value. At this point, the mining of new media art image features is completed. If the scale of the target data set is larger than that of the current data set, the scale-up mining method is selected; if it is smaller, the scale-down mining method is selected.

Phase 4: Multi-scale representation of knowledge. The mining results obtained from the multi-scale data mining kernel have multi-scale characteristics. The task of this stage is to present this knowledge in an orderly and logical way and to show its multi-scale characteristics as fully as possible. Current knowledge representation methods include natural language representation, formal logic representation, frame representation, etc., all of which have a substantial theoretical basis. At this stage, a knowledge representation method suitable for the specific mining task can be chosen according to the advantages and disadvantages of the different methods, combined with multi-scale theory, to comprehensively display the knowledge of multiple data scales in the mining results.


Compared with traditional knowledge representation, the representation at this stage is more hierarchical, more expressive, and makes it easier for users to understand and analyze data at multiple scales.

Phase 5: Evaluation and use. Finally, the multi-scale knowledge is handed over to experts, who judge its availability and effectiveness based on domain knowledge and methods, and form a result document that guides users in making multi-scale decisions. Based on the above analysis, the new media art image feature intelligent mining model designed in this paper on the basis of the multi-scale rule set is summarized in Fig. 2.

Fig. 2. Design algorithm model
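Before turning to the experimental test, the two estimation steps at the heart of the mining kernel — the Jaccard similarity between frequent itemsets in Formula (6) and the weighted linear combination in Formula (9) — can be sketched in a few lines of Python. The function names, sample itemsets and the way the similarity is reused as a weight are illustrative assumptions rather than the exact SU-ARMA procedure.

```python
def jaccard(itemsets_a, itemsets_b):
    """Formula (6): Jaccard similarity between two sets of frequent itemsets."""
    a, b = set(itemsets_a), set(itemsets_b)
    return len(a & b) / len(a | b)

def idw_estimate(observations, weights):
    """Formula (9): estimate q(y0) as a weighted linear combination
    of the known observations q(y1)..q(ym)."""
    return sum(w * q for w, q in zip(weights, observations))

# Frequent itemsets of two descendant-scale data sets (toy example).
scale_a = {("red", "poster"), ("blue", "collage"), ("red", "collage")}
scale_b = {("red", "poster"), ("blue", "collage"), ("green", "mural")}
similarity = jaccard(scale_a, scale_b)

# Supports observed at descendant scales, combined with illustrative weights.
support_estimate = idw_estimate([0.42, 0.37], [similarity, 1 - similarity])
print(round(similarity, 2), round(support_estimate, 3))
```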

3 Experimental Test

3.1 Experimental Data Set

The new media art image feature intelligent mining method designed on the basis of the multi-scale rule set is tested through experiments. The experimental data sets are three movie resource sets, as shown in Table 1. The designed method is used to intelligently mine the image features of the experimental data sets, and the two methods mentioned in the introduction are used as comparison methods, denoted Method 1 and Method 2 respectively.


Table 1. Details of the experimental data set

Serial number   Project           Data
Data set 1      Movie type        Comedy
                Number of films   102
                Data set size     1524 GB
Data set 2      Movie type        Romantic love
                Number of films   59
                Data set size     853 GB
Data set 3      Movie type        Science fiction
                Number of films   62
                Data set size     923 GB

3.2 Experimental Environment

The experiments run on a Lenovo M7300 desktop computer with a Pentium 3.40 GHz CPU, 4 GB of memory and the Windows 7 operating system. The experimental data are stored in an Oracle 10g database system, and PL/SQL tools are used to extract, clean and transform the data in the Oracle database. The open-source Octave is selected to implement the scale-up and scale-down association rule mining algorithms and to complete the statistics and analysis of the experimental results.

3.3 Evaluation Criteria for Experimental Results

To verify the correctness of the designed method, 2100 GB of data were selected from the three movie resource sets for testing and divided into 9 groups, numbered from 500 GB to 2100 GB at 200 GB intervals, to test the coverage, accuracy and average support estimation error. Coverage represents the proportion of mining results that cover the real frequent itemsets in the target scale data set; accuracy reflects the influence of false positive and false negative itemsets on the correctness of the experimental results; the average support estimation error is the mean of the support estimation errors of the frequent itemsets in the algorithm's results.

3.4 Experimental Results

The coverage test results of the three methods are shown in Table 2. According to Table 2, the coverage of all three methods decreases somewhat as the data volume increases. However, the maximum coverage of the designed method reaches 97.25%, and its overall coverage remains above 90%. Compared with the two comparison methods, the designed method performs better, indicating that it has good coverage performance.

Table 2. Coverage Test Results

Serial number   Data volume (GB)   Coverage (%)
                                   Design method   Method 1   Method 2
1               500                97.25           92.30      89.63
2               700                95.63           91.27      88.74
3               900                94.32           89.62      88.21
4               1100               93.57           88.51      87.14
5               1300               93.01           88.02      87.02
6               1500               92.84           87.62      86.41
7               1700               92.36           87.41      86.20
8               1900               92.01           86.92      85.96
9               2100               91.58           85.25      85.41

Fig. 3. Accuracy Test Results (accuracy vs. data volume in GB for the design method, Method 1 and Method 2)

The accuracy test results of the three methods are shown in Fig. 3. They show that the accuracy of the design method reaches 0.965 at the highest and 0.895 at the lowest, remaining above 0.89 overall. This is far higher than the accuracy of the two comparison methods, which proves that the design method achieves high accuracy in intelligent image feature mining.


The average support estimation error test results of the three methods are shown in Fig. 4.

Fig. 4. Test Results of Average Support Estimation Error (average support estimation error vs. data volume in GB for the design method, Method 1 and Method 2)

Figure 4 shows that when the design method is used for intelligent image feature mining, its maximum average support estimation error is only 0.34, while the errors of the two comparison methods are consistently higher. The overall average support estimation error of the design method is therefore low, which proves that its mining performance is good.

4 Conclusion

In the age of big data, data is wealth and knowledge is treasure, and attention to and research on data mining will continue to rise. The new media art image feature intelligent mining method designed on the basis of the multi-scale rule set constructs a multi-scale data mining process framework and designs scale-up and scale-down extrapolation methods for multi-scale association rule mining, achieving high coverage and accuracy and a low average support estimation error. However, in both theory and method, multi-scale data mining still has many directions that need further study and expansion, which is the focus of our next work.


References

1. Qiu, B., Cui, S., Sun, M., et al.: Research on multi core parallel scalable functional programming based on Linux. Comput. Simul. 39(4), 223–226, 432 (2022)
2. Queier, J.F., Jung, M., Matsumoto, T., et al.: Video animation: emergence of content-agnostic information processing by a robot using active inference, visual attention, working memory, and planning. Neural Comput. 33(9), 2353–2407 (2021)
3. Liu, H., Lu, M., Ma, Z., et al.: Neural video coding using multiscale motion compensation and spatiotemporal context model. IEEE Trans. Circuits Syst. Video Technol. 31(8), 3182–3196 (2021)
4. Mu, X., Bai, K., You, X., et al.: Remote sensing image feature extraction and classification based on contrastive learning method. Opt. Precis. Eng. 29(9), 2222–2234 (2021)
5. Xu, Y., Gu, X., Wang, A., et al.: Research on the teaching of multi image source data and intelligent mining algorithm for renal head and neck tumor. Mod. Sci. Instrum. 38(3), 272–276 (2021)
6. Kowalczyk, M., Ciarach, P., Przewlocka-Rus, D., et al.: Real-time FPGA implementation of parallel connected component labelling for a 4K video stream. J. Sig. Process. Syst. Sig. Image Video Technol. 93(5), 481–498 (2021)
7. Konyaev, P.A.: Digital image processing for real-time correction of atmospheric turbulent distortions. Atmos. Oceanic Opt. 35(3), 197–201 (2022)
8. Xu, J., Xie, H., Liu, C., et al.: Hip landmark detection with dependency mining in ultrasound image. IEEE Trans. Med. Imaging 40(12), 3762–3774 (2021)
9. Acosta-Mendoza, N., Carrasco-Ochoa, J.A., Martínez-Trinidad, J.F., et al.: Mining clique frequent approximate subgraphs from multi-graph collections. Appl. Intell. 50(3), 878–892 (2020)
10. Bai, L.: Multi-scale data mining algorithm based on the method of scale division. J. Ningxia Normal Univ. 41(7), 65–72 (2020)

Data Security Sharing Method of Opportunistic Network Routing Nodes Based on Knowledge Graph and Big Data

Xucheng Wan and Yan Zhao(B)

Ningbo City College of Vocational Technology, Ningbo 315199, China
[email protected]

Abstract. The opportunistic network is one of the main network types for data sharing applications today. Because of the completely self-organized and distributed characteristics of its structure, the data sharing process carries considerable security risk. Therefore, a data security sharing method for opportunistic network routing nodes based on knowledge graph and big data is studied. The opportunistic network routing protocol is analyzed, the data sharing mode of the routing nodes is expounded, the opportunistic network is represented with a knowledge graph, the influence degree parameters of the routing nodes are calculated, and a routing node influence propagation model is built on this basis. Routing node data publishing/subscribing rules are then formulated, and a routing node data security sharing architecture is built in combination with secure multi-party computation over big data, so as to realize the secure sharing of opportunistic network routing node data. The experimental results show that after the method is applied, the minimum shared data packet loss rate reaches 4% and the maximum data sharing safety factor reaches 0.98, which fully confirms that the method achieves better data security sharing.

Keywords: Knowledge Graph · Big Data · Opportunistic Network · Routing Node · Data Security · Data Sharing

1 Introduction

The opportunistic network integrates multiple concepts such as the delay-tolerant network, the self-organizing network and the social network. It uses the encounter opportunities in people's daily lives to transmit and share messages, so as to achieve efficient networking and message delivery in harsh network environments. With the increasing popularity of inexpensive short-range handheld communication devices (such as mobile phones, iPads and tablets) in recent years, opportunistic networks have received more and more attention. An opportunistic network is a self-organizing, delay-tolerant and interruption-tolerant network in which links are in an intermittently connected state for long periods, and it uses the encounter opportunities brought by node movement to realize message transmission. The opportunistic network has the characteristics


of a people-centered social network: it explores and exploits the social relationships and movement patterns between people to carry out efficient message sharing and transmission, eliminating the communication barriers caused by node mobility in traditional networks. Opportunistic networks treat each movement of a node as a new transmission opportunity. Precisely because opportunistic networks handle disconnection, delay and node movement with different design concepts and processing methods than traditional wireless networks, they can serve a wider range of applications [1]. In addition, opportunistic networks share some properties of MANETs owing to their completely self-organizing and distributed architecture. In the MANET architecture, data transmission between mobile nodes relies on the end-to-end routing path established by the AODV or DSR routing algorithm. However, in an actual self-organizing network environment, problems such as frequent high-speed node movement, extremely sparse density, and signal and battery attenuation leave a large number of nodes disconnected, resulting in long-term intermittent network connectivity, and a complete and effective end-to-end communication path cannot be established between nodes. Unlike MANETs, opportunistic networks do not assume a complete end-to-end path between nodes. Nodes use only local information to calculate and select an appropriate next-hop route, expecting to pass messages to the destination node through the movement and data forwarding of multiple nodes. Furthermore, opportunistic networks do not need to acquire topology information for the entire network, so they have better availability and adaptability in harsh environments. Especially nowadays, the popularity of a large number of low-cost handheld devices with various communication modules makes it possible for people to transmit and share information by forming networks themselves through various encounter opportunities. In an opportunistic network, nodes transmit messages in a "store-carry-forward" manner, and no complete communication link is needed between the source node and the destination node. Mobile nodes use local knowledge for route selection without acquiring any network topology information, so the routing problem of traditional wireless mobile networks evolves into a simple forwarding-node selection strategy problem in opportunistic networks. Nevertheless, routing remains one of the core problems in opportunistic networks. For an opportunistic network, data transmission depends on the encounter opportunities brought by node movement to reach the destination node in a multi-hop manner. How to choose the most suitable forwarding target and the most suitable forwarding timing has therefore become the core issue of the data transmission mechanism in opportunistic networks, and it is also the key to the security of data sharing. Owing to the uncertainty of the forwarding target in opportunistic networks, the sharing security of routing nodes is affected by many factors, such as the security of route selection and the security of node movement.
With the expansion of the application scope of opportunistic networks, the data sharing security problem has gradually emerged; it has attracted widespread public attention and has become one of the main obstacles restricting the development and application of opportunistic networks. Therefore, a data security sharing method for opportunistic network routing nodes based on knowledge graph and big data is proposed.


It is hoped that, through the application of knowledge graph and big data technology, the security of data sharing among routing nodes in opportunistic networks will be improved, providing sufficient support for the sustainable development of opportunistic networks.

2 Research on Data Security Sharing Method of Routing Nodes

2.1 Analysis of Opportunistic Network Routing Protocols

To improve the security of data sharing among opportunistic network routing nodes, the first step is to analyze the opportunistic network routing protocol and lay a solid foundation for the construction of the subsequent routing node influence propagation model. The opportunistic network routing pattern is shown in Fig. 1.

Fig. 1. Schematic diagram of opportunistic network routing mode

As shown in Fig. 1, node P sends a message to node Q at time T1, but end-to-end transmission is impossible because they are not in the same connected domain. Node P therefore first compares the encounter probabilities between its neighbor nodes within communication range and the destination node Q, and forwards the message to node 2, which has the larger encounter probability. At time T2, node 2 meets node 5 and finds that node 5 has a higher probability of encountering the destination node Q than itself, so node 2 forwards the message to node 5. At time T3, node 5 happens to meet node Q, and the message is finally delivered to the destination node. According to the forwarding strategy, opportunistic network routing can be divided into four categories: dissemination-based routing, utility-based routing, hybrid dissemination-utility routing, and active-mobility-based routing. Dissemination-based routing copies or encodes messages so that multiple copies are transmitted in parallel along multiple paths in the network, improving the efficiency of message transmission. Utility-based routing adopts a single-copy, single-path approach: it evaluates node or network-state information and selects an appropriate forwarding node.


The corresponding routing evaluation function uses encounter prediction, context information, link status, etc. as parameters to calculate the probability that the target node will finally forward the message successfully. Hybrid dissemination-utility routing effectively integrates the characteristics of the two routing types above. In active-mobility-based routing, high-performance nodes deployed in a specific area of the network provide routing services for other nodes through active movement, realizing interactive communication between nodes in the network; compared with passively waiting for communication opportunities as in ordinary wireless networks, this routing mode has better transmission performance [2]. The above process completes the analysis of the opportunistic network routing protocol, expounds the data sharing mode of the opportunistic network routing nodes, and provides support for the construction of the subsequent routing node influence propagation model.

2.2 Routing Node Influence Propagation Model Construction

Based on the above analysis of the opportunistic network routing protocol, the opportunistic network is represented with a knowledge graph, and the influence degree of the routing nodes is analyzed. Representing the opportunistic network in the form of a knowledge graph can effectively simplify the research process. The main parameters describing the influence degree of routing nodes are degree, betweenness and the PageRank value. The degree represents the local importance of a routing node, defined as the number of all routing nodes connected to the current routing node:

α_i = Σ_i L(i)    (1)

In Formula (1), α_i represents the degree of routing node i; L(i) represents a routing node connected to the current routing node i; Σ_i represents the sum over all routing nodes connected to the current routing node, and the summed result is the degree of the routing node. It can be seen that the degree represents the influence of a routing node within its local scope. However, the disadvantage of using degree to indicate the influence of routing nodes is that it only considers the simplest structural information, the number of connections, and does not analyze the structural characteristics of the opportunistic network in depth. It is therefore somewhat deficient in expressing the importance of routing nodes [3]. A more widely used measure of routing node importance is betweenness, which considers structural properties more deeply than degree. To a certain extent, it reflects the distance relationship between a routing node and the other routing nodes in the network: when the betweenness value of a routing node is large, its average distance to other routing nodes is small; when the betweenness value is small, the average distance is large. The betweenness value expresses propagation distance in the analysis of routing node importance, and is given by:

β_i = Σ_{j,k} χ_jk(i) / χ_jk    (2)


In Formula (2), β_i represents the betweenness of routing node i; χ_jk represents the number of shortest paths between any two routing nodes j and k in the opportunistic network; χ_jk(i) represents the number of those paths that pass through node i. The ratio, summed over node pairs, defines the final betweenness. In opportunistic network analysis, the most famous algorithm is PageRank, which improves on the degree index and ranks nodes through their link structure, finally obtaining an importance ranking of each routing node. In general, the PageRank algorithm regards a routing node as more important when the nodes linking to it are themselves relatively important. The PageRank algorithm is shown in Fig. 2.

Fig. 2. PageRank algorithm
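As a small illustration of these influence-degree parameters, the degree and betweenness of Formulas (1) and (2), together with a PageRank value of the kind defined in Formula (3) below, can be computed on a toy contact graph. The use of the networkx library, the damping value and the example edges are assumptions for illustration, not part of the paper.

```python
import networkx as nx

# Toy opportunistic-network contact graph; edges are encounter links.
G = nx.Graph()
G.add_edges_from([("P", 2), (2, 5), (5, "Q"), (2, 3), (3, "Q")])

# Formula (1): degree, the number of routing nodes connected to node i.
degree = dict(G.degree())

# Formula (2): betweenness, the share of shortest paths passing through i.
betweenness = nx.betweenness_centrality(G, normalized=False)

# Formula (3)-style PageRank value with an assumed damping factor delta = 0.40.
pagerank = nx.pagerank(G, alpha=0.40)

for i in G.nodes:
    print(i, degree[i], round(betweenness[i], 2), round(pagerank[i], 3))
```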

In the PageRank algorithm, the direction of each routing node link is in effect an assignment of weight and a vote of importance. The PageRank value of a routing node is expressed as:

PR(i) = δ · Σ (β_i / α_i) + (1 − δ) · (1/n)    (3)

In Formula (3), PR(i) represents the PageRank value of routing node i; δ represents the damping factor, introduced because a routing node is not always visited with 100% probability along link directions, so a certain probability is needed to represent node jumps; n represents the total number of routing nodes in the network. It can be seen from Formula (3) that the link-pointing and voting behavior of the PageRank algorithm mainly applies to directed graphs; when dealing with undirected graphs, the edges between routing nodes provide less information, which is a shortcoming of the algorithm. Since each of the above influence-degree parameters of routing nodes has its own shortcomings, they need to be integrated in application. On this basis, a routing node influence propagation model is constructed, as shown in Fig. 3.


Fig. 3. Schematic diagram of routing node influence propagation model

As can be seen from Fig. 3, the knowledge-graph-based routing node influence propagation model first needs to describe the influence of routing nodes, that is, which routing nodes exert the more important influence in the overall opportunistic network [4]. After the important routing nodes have been identified, the overall opportunistic network is divided into communities, and the modeling and analysis of the propagation model are then refined from macro and micro perspectives. Finally, the propagation effect of the global opportunistic network is described, including basic information about the propagation process such as the propagation time, the propagation scope, and the final propagation state. The above process completes the construction of the routing node influence propagation model and supports the formulation of the subsequent routing node data publishing/subscription rules.

2.3 Routing Node Data Publishing/Subscription Rules Formulation

Based on the routing node influence propagation model constructed above, the routing node data publishing/subscription rules are formulated in preparation for the final construction and implementation of the data security sharing architecture. Since the opportunistic network transmits messages in a "store-carry-forward" mode, mining influential routing nodes in the network is of great significance for improving the efficiency of message transmission and reducing communication consumption. Based on the idea of entropy centrality, this research first proposes a local centrality measure, and then proposes a global entropy centrality measure for selecting publish/subscribe routing nodes from a global perspective [5].


Given an opportunistic network G(V_n, E_n), V_n represents the set of routing nodes and E_n represents the set of connection attributes between routing nodes. If the data distribution routing node at the most central position in the opportunistic network is to be mined, e_i,j ∈ E_n represents whether there is a connection between routing nodes. Because people's movement behavior has certain social attributes, the number of encounters between people and their acquaintances is significantly higher than the number of encounters with strangers, and the probability of meeting an acquaintance again in the future is also higher than that of encountering a stranger again. If the most stable data distribution routing node in the opportunistic network is to be mined, e_i,j ∈ E_n represents the number of encounters between routing nodes. This section first studies the local centrality of routing nodes. If the measured routing node occupies the best local central position, it needs to have the most neighbor nodes; if it has the most stable local centrality, three factors need to be considered: the most neighbor nodes, the largest number of encounters, and the most evenly distributed number of encounters with neighbor nodes. To meet these requirements, information entropy is used to define the connection distribution characteristics of routing node i with other routing nodes, and the connection distribution entropy of node i is expressed as:

Z(i) = − Σ_j P(e_i,j) log₂ P(e_i,j)    (4)

In Formula (4), Z(i) represents the connection distribution entropy of routing node i and P(e_i,j) represents the encounter probability between nodes i and j. From Formula (4), when the routing node occupies the best local central position, a larger entropy value means more neighboring routing nodes; when the routing node has the most stable local centrality, if two routing nodes have the same number of neighbors, the one with the larger entropy value has the more evenly distributed encounter counts, and if two routing nodes have the same encounter probabilities, the one with the larger entropy value has more neighbors [6]. Considering that the number of encounters is also an important indicator of the local centrality of routing nodes, this section defines the local centrality of a routing node based on the connection distribution entropy as:

K(i) = Σ_j e_i,j · Z(i)    (5)

In Formula (5), K(i) represents the local centrality of the routing node. The global entropy centrality of a routing node is then calculated as:

γ(i) = K(i) + Σ_{j∈τ_1(i)} K(j) + Σ_{j∈τ_2(i)} K(j) + ··· + Σ_{j∈τ_M(i)} K(j)    (6)

In Formula (6), γ(i) represents the global entropy centrality of the routing node and τ_M(i) represents the set of nodes at a distance of M hops from node i. If the most central node in the network is to be mined, the shortest path between nodes is the shortest path based on transmission delay; if the most stable data distribution node in the network is to be mined,


the shortest path between nodes is the shortest path based on social connections. M is the maximum number of hops, and its specific value can be set according to actual application requirements. Since the topology of an opportunistic network changes dynamically, human social connections are more permanent and stable than the network topology, and mining the social connections between nodes is of great significance for dividing the opportunistic network into a more stable community structure [7]. The social connection index is used to detect the social connection between routing nodes, calculated as:

λ(i, j) = (N(i, j))^{ε(i,j)},  ε(i, j) = |τ(i) ∩ τ(j)| / min{|τ(i)|, |τ(j)|}    (7)

In Formula (7), λ(i, j) represents the social connection index of routing nodes; N(i, j) represents the number of encounters between routing nodes i and j; ε(i, j) represents the similarity between routing nodes i and j; and τ(i) and τ(j) represent the neighbor sets of routing nodes i and j. Based on the global entropy centrality measure and the social connection index, a publish/subscribe system is constructed for the opportunistic network. First, the network is divided into communities based on the social connection index, with different communities divided according to the closeness of the relationships between nodes. For the selection of proxy nodes, the global entropy centrality measure is used to select the node with the largest global entropy centrality in each community as the proxy node, which helps the other nodes in the community publish or subscribe to event messages. Within a community, nodes submit the event messages they need to publish or subscribe to the proxy node; between communities, the interaction between proxy nodes realizes the sharing of published event messages across the network. The routing node data publishing/subscribing rules are shown in Fig. 4: blue circles represent proxy nodes, and white circles represent neighbor routing nodes. Based on the publish/subscribe rules constructed above, a routing algorithm is proposed. Each node obtains the member information of its own community according to the community division algorithm based on social attributes, and selects the proxy node of the community according to the global entropy centrality measure. If a node in the network does not belong to any community but still needs to publish or subscribe to event messages, it is assigned to a default community. Each node in a community records the proxy node of the community to which it belongs, and proxy nodes dynamically maintain information about the proxy nodes of other communities in the network [8]. Let the proxy node of the community that node i belongs to be a. The detailed process of node i subscribing through proxy node a is as follows: when node i meets a node j, if node j is the proxy node a, the subscription message is sent to node j; if node j is not the proxy node a but its local centrality K(j) is greater than the local centrality K(i) of node i, the subscription message is forwarded to the relay node j. The detailed process of node i publishing a message to proxy node a is as follows: when node i meets a node j, if node j is the proxy node a, the message to be published is sent to node j; if node j is not the proxy node a but the local centrality K(j) of node j is greater than the local centrality K(i) of node i, the message to be published is forwarded to the relay node j.


Fig. 4. Schematic diagram of routing node data publishing/subscription rules

The detailed process of sharing published messages between proxy nodes is as follows: when proxy node a receives a message that needs to be published and meets a node k of another community, if node k is the proxy node of that community, the message to be published is forwarded to node k; if node k is not a proxy node of another community but the global entropy centrality γ(k) of node k is greater than the global entropy centrality γ(a) of node a, the message to be published is sent to k, which continues to spread it to the other proxy nodes. The detailed process of a proxy node publishing messages into its community is as follows: when proxy node a receives a message published by another community, and the message is one subscribed to by members of its community, it spreads the message within the community. Since the topology of the opportunistic network changes dynamically, when a new node joins a community, the newly joined node sends a joining notification to the proxy node; if the proxy node of a community changes, the old proxy node transfers the community's subscription list of event messages to the new proxy node, and the new proxy node sends a proxy-change notification to the nodes in the community and to the proxy nodes of the other communities. The above process completes the formulation of the routing node data publishing/subscription rules and expounds the detailed publishing/subscription process, laying a solid foundation for the realization of the final data security sharing.
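A compact sketch of the entropy-based measures and the forwarding rule described above — the connection distribution entropy of Formula (4), the local centrality of Formula (5), the social connection index of Formula (7), and the decision of whether to hand a message to an encountered node — might look as follows; the encounter table and function names are illustrative assumptions, not the paper's code.

```python
import math

# Toy encounter table: encounters[i][j] = number of times nodes i and j met.
encounters = {
    "A": {"B": 4, "C": 1, "D": 3},
    "B": {"A": 4, "C": 2},
    "C": {"A": 1, "B": 2, "D": 5},
    "D": {"A": 3, "C": 5},
}

def connection_entropy(i):
    """Formula (4): entropy of node i's encounter-count distribution."""
    total = sum(encounters[i].values())
    return -sum((n / total) * math.log2(n / total) for n in encounters[i].values())

def local_centrality(i):
    """Formula (5): total encounters of i weighted by its connection entropy."""
    return sum(encounters[i].values()) * connection_entropy(i)

def social_connection(i, j):
    """Formula (7): social connection index lambda(i, j) = N(i, j)^epsilon(i, j)."""
    shared = set(encounters[i]) & set(encounters[j])
    eps = len(shared) / min(len(encounters[i]), len(encounters[j]))
    return encounters[i].get(j, 0) ** eps

def forward_message(i, j, proxy):
    """Publish/subscribe forwarding rule: hand the message to j if j is the
    community proxy, or if j is locally more central than the carrier i."""
    return j == proxy or local_centrality(j) > local_centrality(i)

print({n: round(local_centrality(n), 2) for n in encounters})
print(round(social_connection("A", "C"), 2), forward_message("A", "C", proxy="D"))
```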


2.4 Construction and Implementation of Data Security Sharing Architecture

Based on the routing node data publishing/subscription rules formulated above, and combined with secure multi-party computation over big data, a routing node data security sharing architecture is built so as to realize the safe sharing of routing node data in opportunistic networks. The architecture constructed in this research introduces secure multi-party computation over big data into data sharing; an excellent sharing strategy can effectively protect data privacy. The main task of the architecture is to atomize the secure multi-party computation operations, which makes data invocation more convenient and allows the integrated control module to invoke them and provide interfaces for upper-layer applications [9]. The architecture is designed as a layered structure with three layers: the bottom layer is the data provider layer, the middle layer is the core computing layer, and the upper layer is the application layer. The data provider layer receives the data input by the routing nodes; the computing layer atomizes the various protocols and algorithms of secure multi-party computation and then uniformly controls these atomic operations to provide interfaces to the upper-layer applications; the application layer shows the data sharing process between routing nodes. Although the routing nodes in the secure multi-party computation are all equal, a semi-centralized node is introduced to simplify the computation because of the large amount of data involved. Unlike an ordinary central node, it does not need to be trusted: the information it receives is not the real information but only part of the steps of the secure multi-party computation, so private information cannot be derived from it. In addition, the internal structure of each routing node is basically the same; because of their equal status, they all complete the same functional steps in the secure multi-party computation to obtain the final result. The interior of a routing node is divided into a three-layer structure, and the most important part, the core of the secure multi-party computation data sharing model, is the intermediate computing layer. This layer contains the protocols and algorithms for secure multi-party computation: it contains the secret-sharing operation elements and completes secure multi-party computation by means of polynomial interpolation; it also includes a secure matrix product operation element, which completes secure two-party matrix product calculation by implementing a secure two-party protocol; and it further includes a secure comparison protocol, a simple secure summation protocol, and an oblivious transfer protocol. The computing layer turns these secure multi-party computation protocol processes into atomic operations and manages them through a unified management module that provides interfaces to the upper layer. In addition, this computing layer includes the most important auxiliary module in the whole computing system: the communication module [10–12]. This module is an asynchronous communication module implemented with Python Twisted. Any secure multi-party computation operation


is inseparable from this auxiliary module, and all interactions between nodes are completed through this module. The basic hierarchical structure of a single routing node is shown in Fig. 5.

Fig. 5. Schematic diagram of the basic hierarchical structure of a single routing node

Secure multi-party computation is mainly used to compute, among multiple parties, the security of the data sharing process. The calculation formula is:

η = (Σ_{i,j=1}^{n} ω_ij Y_ij) / (3ζ₀)
μ = 4υ* · Σ_{i,j=1}^{n} Y_ij    (8)
ϑ = 9κ̂ · Σ_{i,j=1}^{n} ω_ij Y_ij

In Formula (8), η, μ and ϑ represent the security values corresponding to the data provider, the routing node and the data receiver during the data sharing process; ω_ij represents the weight coefficient corresponding to Y_ij; Y_ij represents the path between routing nodes i and j; ζ₀ represents the auxiliary calculation parameter; υ* represents the number of routing nodes; κ̂ represents the amount of data required by the data receiver. To facilitate the research, the secure multi-party calculation results are fused as:

ξ = ψ₁η + ψ₂μ + ψ₃ϑ    (9)

In Formula (9), ξ represents the security value of the routing node data sharing process, and ψ₁, ψ₂ and ψ₃ represent the weight coefficients corresponding to η, μ and ϑ.
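A minimal numerical sketch of the security values in Formula (8), their fusion in Formula (9), and the threshold-based path selection rule given next in Formula (10) might look as follows; the fusion weights, the threshold value and the toy matrices are assumptions for illustration only.

```python
import numpy as np

def sharing_security(Y, w, zeta0, upsilon, kappa, psi=(0.4, 0.3, 0.3)):
    """Formulas (8)-(9): security values for the data provider (eta), the
    routing nodes (mu) and the data receiver (vartheta), fused into xi.
    Y[i, j] is the path between routing nodes i and j, w[i, j] its weight;
    zeta0, upsilon and kappa follow the notation of Formula (8)."""
    weighted = float(np.sum(w * Y))
    eta = weighted / (3.0 * zeta0)
    mu = 4.0 * upsilon * float(np.sum(Y))
    vartheta = 9.0 * kappa * weighted
    xi = psi[0] * eta + psi[1] * mu + psi[2] * vartheta
    return eta, mu, vartheta, xi

# Formula (10): keep a candidate sharing path only if xi clears the threshold.
Y = np.array([[0.0, 0.6], [0.6, 0.0]])
w = np.array([[0.0, 0.5], [0.5, 0.0]])
eta, mu, vartheta, xi = sharing_security(Y, w, zeta0=0.3, upsilon=2, kappa=0.1)
sigma_threshold = 1.0  # assumed value of the security threshold sigma'
print("choose" if xi >= sigma_threshold else "delete", round(xi, 3))
```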


Based on the calculation result of Formula (9), the data sharing path selection rule is formulated as:

choose, if ξ ≥ σ′;  delete, if ξ < σ′    (10)

3 Experiment and Result Analysis 3.1 Experiment Preparation Stage In order to verify the application performance of the proposed method, an opportunistic network in a certain area is selected as the experimental object. Since the routing nodes in the opportunistic network appear to be in a moving state, its structure cannot be displayed.MATLAB software was selected as the experimental platform, the hardware was configured as 3.20GHz CPU, 4.00GB memory, the software was configured as Windows7SP1 PC, and the operating environment was Visual Studio2010. Build the experimental environment on the MATLAB platform. The proposed method applies several location parameters, which all affect the security of data sharing. Therefore, it is necessary to determine the optimal value of the parameters before the experiment. The experimental parameters are damping factor δ and auxiliary calculation parameter ζ0 . The relationship between the damping factor δ and the calculation accuracy of the PageRank value i of the influence degree of routing nodes is obtained through experiments, as shown in Table 1. As shown in the data in Table 1, when the damping factor δ is 0.40, the calculation accuracy of the PageRank value i of the routing node influence degree reaches the maximum value of 98%. Therefore, the optimal value of damping factor δ is determined to be 0.40. The relationship between the auxiliary calculation parameter ζ0 obtained through experiments and the accuracy of secure multi-party calculation is shown in Fig. 6. As shown in the data in Fig. 6, when the value of the auxiliary calculation parameter ζ0 is 0.3, the accuracy of the secure multi-party calculation reaches the maximum value of 90%. Therefore, it is determined that the optimal value of the auxiliary calculation parameter ζ0 is 0.3. The above process provides convenience for the subsequent experiments of data security sharing of opportunistic network routing nodes.

Data Security Sharing Method of Opportunistic Network Routing Nodes


Table 1. The relationship between damping factor and calculation accuracy of the PageRank value

δ       PR(i)   δ       PR(i)
0.05    85%     0.55    90%
0.10    75%     0.60    75%
0.15    71%     0.65    84%
0.20    68%     0.70    86%
0.25    85%     0.75    77%
0.30    80%     0.80    86%
0.35    71%     0.85    89%
0.40    98%     0.90    61%
0.45    92%     0.95    54%
0.50    92%     1.00    50%

Fig. 6. Relationship between the auxiliary calculation parameter ζ0 and the secure multi-party calculation accuracy (accuracy in % vs. parameter value)

3.2 Analysis of Experimental Results

To clearly show the application performance of the proposed method, the shared data packet loss rate and the data sharing safety factor are selected as evaluation indicators, calculated as:

f = (R_d / R_total) × 100%,  g = ∀sign(ξ)/ϕ₀    (11)


In Formula (11), f and g represent the shared data packet loss rate and the data sharing safety factor; R_d and R_total represent the amount of lost data and the total amount of shared data; ϕ₀ represents the data sharing security conversion factor. Based on Formula (11), the shared data packet loss rate obtained through experiments is shown in Fig. 7.

Fig. 7. Shared data packet loss rate data graph (packet loss rate in % vs. test condition number for the proposed method and the maximum limit)

As shown in Fig. 7, the shared data packet loss rate obtained with the proposed method is below the maximum limit, with a minimum value of 4%. The methods of references [1] and [2] are used for comparison. The data sharing safety factors obtained through experiments are shown in Table 2. As shown in Table 2, the data sharing safety factors obtained by the proposed method are all larger than the minimum limit, with a maximum value of 0.98, while the safety factors of the two literature methods are below 0.70 and partly below the minimum limit. These experimental results show that with the proposed method the shared data packet loss rate stays below the maximum limit and the data sharing safety factor stays above the minimum limit, which fully confirms the effectiveness and feasibility of the proposed method. This is because the method establishes the routing node influence propagation model, formulates the routing node data publishing/subscription rules, and builds the routing node data security sharing architecture in combination with secure multi-party computation over big data, thereby realizing the secure sharing of routing node data in opportunistic networks.


Table 2. Data sharing safety factor table

Experimental condition number   Suggested method   Method of literature [1]   Method of literature [2]   Minimum limit
1                               0.84               0.46                       0.44                       0.45
2                               0.76               0.55                       0.55                       0.56
3                               0.89               0.62                       0.61                       0.62
4                               0.90               0.61                       0.60                       0.61
5                               0.91               0.58                       0.63                       0.59
6                               0.92               0.50                       0.51                       0.50
7                               0.90               0.64                       0.62                       0.60
8                               0.84               0.67                       0.65                       0.67
9                               0.86               0.62                       0.64                       0.63
10                              0.98               0.69                       0.68                       0.70

4 Conclusion

This research introduces knowledge graph theory and big data technology and proposes a new method for the secure sharing of opportunistic network routing node data, which also provides some help for the application and development of opportunistic networks.

References

1. Fatima, S., Ahmad, S.: Secure and effective key management using secret sharing schemes in cloud computing. Int. J. e-Collaboration 16(1), 1–15 (2020)
2. Yang, J., Wen, J., Jiang, B., et al.: Blockchain-based sharing and tamper-proof framework of big data networking. IEEE Network 34(4), 62–67 (2020)
3. Hassija, V., Chamola, V., Garg, S., et al.: A blockchain-based framework for lightweight data sharing and energy trading in V2G network. IEEE Trans. Veh. Technol. 69(6), 5799–5812 (2020)
4. Dang, Q., Ma, H., Liu, Z., et al.: Secure and efficient client-side data deduplication with public auditing in cloud storage. Int. J. Network Secur. 22(3), 462–475 (2020)
5. Chen, Y., Hu, B., Yu, H., et al.: A threshold proxy re-encryption scheme for secure IoT data sharing based on blockchain. Electronics 10(19), 2359 (2021)
6. Liu, Q., Zhang, W., Ding, S., et al.: Novel secure group data exchange protocol in smart home with physical layer network coding. Sensors 20(4), 1138 (2020)
7. Yue-bo, L., Wei-jie, Z.: Implementation of dynamic clustering scheduling algorithm for social network data. Comput. Simul. 38(1), 269–272 (2021)
8. Zhang, Z., Ren, X.: Data security sharing method based on CP-ABE and blockchain. J. Intell. Fuzzy Syst. Appl. Eng. Technol. 2, 40 (2021)
9. Tan, H.-C., Soh, K.L., Wong, W.P., Tseng, M.-L.: Enhancing supply chain resilience by counteracting the achilles heel of information sharing. J. Enterp. Inf. Manage. 35(3), 817–846 (2022). https://doi.org/10.1108/JEIM-09-2020-0363
10. Jibb, L., Amoako, E., Heisey, M., et al.: Data handling practices and commercial features of apps related to children: a scoping review of content analyses. Arch. Dis. Child. 7, 107 (2022)
11. Sharma, N., Anand, A., Singh, A.K.: Bio-signal data sharing security through watermarking: a technical survey. Computing 103(9), 1883–1917 (2021)
12. Singh, C., Sunitha, C.A.: Chaotic and Paillier secure image data sharing based on blockchain and cloud security. Expert Syst. Appl. 198, 116874 (2022)

Security Awareness Method of Opportunistic Network Routing Protocol Based on Deep Learning and Knowledge Graph

Yan Zhao(B) and Xucheng Wan

Ningbo City College of Vocational Technology, Ningbo 315199, China
[email protected]

Abstract. In opportunistic networks, the routing protocol operates with low security, so a security awareness method for the opportunistic network routing protocol based on deep learning and knowledge graph is designed. A data acquisition platform is designed to implement data collection for the opportunistic network routing protocol; the platform is divided into four functional modules: data acquisition, data analysis, human–computer interaction interface and system management. A routing protocol knowledge graph is built through the steps of ontology structure construction, entity extraction, knowledge reasoning and knowledge graph storage. An opportunistic network routing protocol intrusion detection method based on DCAEs is designed to realize the security awareness of the opportunistic network routing protocol. The test results show that the security perception accuracy of this method stabilizes at 0.96 after running for a period of time, and the overall security perception accuracy is relatively high.

Keywords: Deep Learning · Knowledge Graph · Opportunistic Networks · D2RQ Tools · Routing Protocol Security Awareness

1 Introduction

The development wave of industrial automation promotes the development of new and high technologies such as information sensing, data communication and data processing. With these technologies applied in wildlife migration tracking and many other fields, social development has gradually entered the Internet of Things information era. To meet the Internet of Things requirements of ubiquitous interconnection and comprehensive perception, intelligent devices need to be interconnected, so networking technology between devices has increasingly become the focus of Internet of Things research. In urban intelligent transportation, vehicles equipped with intelligent devices use short-range wireless communication to form an in-vehicle self-organizing network, which realizes the mutual transmission of traffic and road condition information between vehicles, improves travel efficiency and ensures the safety of urban traffic. In marine environment monitoring, by installing sensing equipment with wireless communication capabilities


on ocean buoys, a marine wireless sensor network is formed and used to collect information on parameters such as seawater temperature, oxygen content and pH, so as to monitor sea state information and provide information assurance for marine safety work. However, in practical applications, ad hoc networks often face problems such as sparse node distribution and drastic changes in network topology, which make network connectivity impossible to guarantee. Traditional mobile ad hoc network communication protocols, such as AODV and DSR, are therefore no longer suitable for these complex scenarios: their application requires at least one end-to-end link ensuring complete connectivity between any pair of nodes in the network, a condition difficult to satisfy in an actual ad hoc network, so network transmission performance is difficult to guarantee [1].


2 Opportunistic Network Routing Protocol Security Awareness

2.1 Opportunistic Network Routing Protocol Data Collection

A data acquisition platform is designed to implement data collection for the opportunistic network routing protocol. The platform can be divided into four functional modules: data acquisition, data analysis, human–computer interaction interface and system management [2]. In the data acquisition module, the acquisition equipment is a self-designed and developed PIC controller with 4 analog input ports and 10 digital input ports, which can collect analog and digital data on multiple channels, and 14 digital output ports. Through these output ports, the upper-level software can control the underlying equipment and realize real-time closed-loop data acquisition. The controller transmits the collected data through its network port using a self-defined data format. Using a self-defined transmission format helps ensure the security of data transmission and, more importantly, makes later functional expansion convenient. Data in communication are generally transmitted in the form of data packets, each referred to as a frame of data, and the protocol must ensure that frames are transmitted correctly. A frame of data is generally composed of a frame header, address information, data type, data length, data block and check code. The frame header is used to judge whether a data packet has been lost during transmission; it should be a byte sequence that appears as rarely as possible within a transmission, to minimize the chance of ordinary data being misjudged as a frame header and causing transmission errors [3]. The address information is mainly used in multi-computer communication to distinguish different devices; here multi-channel acquisition is realized with a single controller and multiple controllers are not involved, so this part is omitted from the transmission format design. The data type, data length and data block are the main parts of the transmission: the data length identifies the number of valid data items contained in the frame so that the data can easily be parsed. The check code verifies whether data have been lost or corrupted during transmission; since the controller transmits data over TCP, which already uses mechanisms such as acknowledgment and retransmission to ensure accuracy, the format design omits error checking to improve transmission efficiency. The custom transmission format of the data acquisition device consists of a frame header, an identification code, a data length, a port number and the data. The frame header is two bytes, with 0x4111 marking the start of each data frame. The identification code is one byte and indicates the type of the port. The data length is one byte and indicates the number of valid data items that follow. The port number is one byte; the different ports of the controller are distinguished by the port number, and the upper-level software obtains the information of the specified port.
There are 4 analog input ports with port numbers 1–4, 10 digital input ports with port numbers 01–10, and 14 digital output ports with port numbers 01–14.


The data field, two bytes, carries the value collected by the controller or the value to be written to an output port. Decimals are handled by scaling them to integers; this loses some precision but meets the vast majority of needs and simplifies the transmission of decimal data. The data acquisition module implements communication with the acquisition device: it receives the collected data uploaded by the device, parses the data according to the corresponding transmission protocol, and then displays and stores them. Accordingly, two communication methods are implemented in the module: serial communication based on the Modbus-RTU protocol and network communication based on the TCP/IP protocol. At the same time, to accommodate developers or users who may require additional communication methods later, related function interfaces are designed so that implementations of other communication methods can be added easily, which ensures the extensibility of the software [4]. The communication part is organized by a ServerManager class and includes a TcpServer sub-module, a UdpServer sub-module, a Modbus server sub-module, and a sub-module for user-defined communication methods. ServerManager is the communication service management class, responsible for managing the implementations of the various communication methods, including adding, registering and invoking them. It receives the user's parameter configuration for the data acquisition device and invokes the relevant communication method according to the user's choice. It also loads and registers communication methods implemented by the user, providing unified management and invocation of all communication methods, so that user-defined communication methods can be added easily. For the TCP/IP network communication mode, communication servers based on the TCP and UDP protocols are designed and implemented. They meet the various communication-related needs, follow the flexible and extensible design idea mentioned above, and allow users to add custom data processing procedures. The socket communication function is divided into two parts: a server class, whose main function is to establish communication with the underlying data acquisition device and perform the basic network communication operations, and a request processing class, which determines how the data uploaded by the acquisition device are processed [5]. By separating communication establishment from request processing, subsequent development only needs to focus on the business logic, that is, on processing the acquired data; this separation supports the extensible development of the whole communication function. The server is implemented in an object-oriented way, with different classes constructed for different functional requirements, including BaseServer, TcpServer, UdpServer, ThreadTcpServer and ThreadUdpServer.
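To make the custom frame layout described above concrete, here is a minimal parsing sketch in Python. The exact field order (frame header 0x41 0x11, identification code, data length, port number, two-byte data word) and the big-endian byte order are assumptions based on the description in this subsection, not the platform's published implementation.

import struct

FRAME_HEADER = b"\x41\x11"  # 0x4111 marks the start of each frame

def parse_frame(buf):
    """Parse one custom frame: header(2) | identification code(1) | data length(1) | port(1) | data(2)."""
    if len(buf) < 7 or buf[:2] != FRAME_HEADER:
        return None                               # incomplete frame or header mismatch
    ident, length, port = buf[2], buf[3], buf[4]
    if len(buf) < 5 + length:
        return None                               # payload not fully received yet
    (value,) = struct.unpack(">H", buf[5:7])      # two-byte data word, big-endian assumed
    return {"ident": ident, "length": length, "port": port, "value": value}

In practice the data would arrive as a TCP byte stream, so a real parser would scan for the header and buffer incomplete frames.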


The BaseServer class is the base class of all server classes. All operations that may be needed in the network communication process are declared in this class but not implemented; the implementations are provided in the subclasses. TcpServer and UdpServer both inherit from BaseServer and implement the functions declared in the base class according to the characteristics of the TCP and UDP protocols respectively. ThreadTcpServer and ThreadUdpServer are implemented by inheriting from TcpServer and UdpServer respectively; they handle multi-client communication with multiple threads. When one of these two classes is used to establish a communication server, a new thread is created whenever a connection arrives, and the data transmission between the software and the data acquisition device is completed in that thread [6]. In the whole design, BaseServer is the key class, and the other classes are realized by overriding its functions according to their own functional characteristics. Its main members are as follows:
HandleRequest: the request processing class object. The server receives this object and calls its related functions to parse and process the data; it is passed in when an instance of a BaseServer subclass is initialized.
address: the host IP and port to be bound when socket communication is established; it is also passed in as a parameter when an instance of a subclass is initialized.
__init__(): the initialization function, responsible for initializing the class instance object.
get_client(): receives a socket request and returns the new socket object and the client address used to communicate with the client.
handle_error(): when data processing in the request processing class raises an exception, this function handles the exception.
_handle_request(): processes a single request. It calls get_client() to obtain the socket object, instantiates the HandleRequest class, and calls its related methods to process the data.
run(): after this function is called, it starts a loop that keeps receiving the data transmitted by the client and calls the relevant processing functions of the request processing class to parse and process the data. IO multiplexing is used here: select is used to monitor the network IO, and when data arrive on a connection the processing function is called.
Another important part of the communication module is the request processing class, which determines how the data uploaded by the data acquisition device are parsed. A user may upload collected data in a self-defined format, so a corresponding parsing function is required on the software side; the user can override the request processing class, and the software loads and runs it to realize these functions.
When implementing these functions, users need to use the RequestHandle class as the base class and rewrite the relevant class member functions to ensure that they can be successfully called.
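The following is a minimal Python sketch of this separation between communication establishment and request processing, written against the class and method names described here; the concrete signatures, the example frame check and the usage line are illustrative assumptions rather than the platform's actual code.

import select
import socket

class RequestHandle:
    """Base request-processing class: subclasses override execute() to parse uploaded data."""
    def __init__(self, sock, client_address):
        self.sock = sock                      # socket created for this connection
        self.client_address = client_address
        self.init()                           # optional extra initialization
        self.execute()                        # must be overridden: how to parse the data
        self.stop()                           # release occupied resources afterwards

    def init(self):
        pass

    def execute(self):
        raise NotImplementedError

    def stop(self):
        pass

class FrameHandle(RequestHandle):
    """Example subclass: check the 0x4111 frame header and print port number and payload."""
    def execute(self):
        data = self.sock.recv(1024)
        if len(data) >= 7 and data[:2] == b"\x41\x11":
            print("port", data[4], "payload", data[5:7])

class BaseServer:
    """Minimal TCP server: establishes communication and hands each request to HandleRequest."""
    def __init__(self, address, HandleRequest):
        self.address = address                # (host IP, port) to bind
        self.HandleRequest = HandleRequest
        self.listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.listener.bind(address)
        self.listener.listen(5)

    def get_client(self):
        return self.listener.accept()         # new socket object and client address

    def handle_error(self, exc):
        print("request failed:", exc)         # exceptions raised while processing a request

    def _handle_request(self):
        sock, client_address = self.get_client()
        try:
            self.HandleRequest(sock, client_address)
        except Exception as exc:
            self.handle_error(exc)
        finally:
            sock.close()

    def run(self):
        # IO multiplexing: select monitors the listening socket, and processing is
        # triggered whenever a connection arrives.
        while True:
            readable, _, _ = select.select([self.listener], [], [], 1.0)
            if readable:
                self._handle_request()

# Usage sketch: BaseServer(("0.0.0.0", 9000), FrameHandle).run()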


The class member functions are designed as follows:
sock: the new socket variable created after the server and the client establish a connection.
client_address: the client IP address.
__init__(): the initialization function of the class. It assigns the member variables and calls the execute() function, so when this class is used it is enough to create an instance of the RequestHandle subclass in the server.
init(): called before execute(); it carries out any initialization needed before the request is processed. If the user needs other initialization work, this function can be overridden.
execute(): a function that must be overridden by subclasses; the correct parsing of the data is implemented here.
stop(): releases the occupied resources after the request has been processed.
The serial communication based on the Modbus-RTU protocol is realized by designing a communication module based on the Modbus protocol, which establishes a serial-port connection between the software and devices that use Modbus as their communication protocol. Depending on the device interface, the Modbus protocol comes in two forms: Modbus-RTU for serial ports and Modbus-TCP for network ports. The platform provides implementations for both forms, so that a data acquisition device using the Modbus protocol can be connected to the software whether it has a serial-port or a network-port interface; the two implementations are ModbusRtuServer and ModbusTcpServer [7]. The main function of the data analysis module is to provide data analysis, numerical processing and data visualization functions, enabling users to quickly build their own algorithms, process and analyze the collected data, and visualize them. The data calling and storage functions are listed in Table 1.

Table 1. Data calling and storage functions

Serial number | Function | Function name
1 | Store data in a table | SaveTable()
2 | Read data from a table | ReadTable()
3 | Store data in CSV text format | Save_csv()
4 | Read data in CSV text format | Read_csv()
5 | Real-time data acquisition | GetRealTimeData()

Communication functions include close(), write(), read(), Connect(), and init(). Matrix functions include init_(), multiply(), dot(), inverse(), transpose(), rank(), eig_value(), eig_vector().


Data preprocessing functions include outlier processing, data noise smoothing, data normalization, interpolation, fitting and filtering functions. Data visualization functions include visualization based on Matplotlib, visualization based on PyQtGraph, and 3D drawing based on PyQtDataVisualization. The human-computer interaction interface module realizes the various interactive interfaces of the platform, including the parameter configuration interface for data acquisition and device communication, the code editing interface used by the user to edit programs, and the relevant interfaces for data visualization. The system management module includes the other auxiliary functions of the platform, such as platform function management, loading and registration of user-defined functions, and exception handling.

2.2 Building a Routing Protocol Knowledge Graph

The construction of the routing protocol knowledge graph consists of building the ontology structure, entity extraction, knowledge reasoning, and knowledge graph storage. An ontology is a way for computers to describe things in the world; through an ontology, a consensus on information structure can be shared among software agents and domain knowledge can be reused. The knowledge graph describes and defines knowledge, and the scope of the knowledge it describes, through the ontology. The ontology of the routing protocol knowledge graph is constructed as follows: first, consider the domain and scope of the ontology involved in the routing protocol; then determine the relevant professional terms; then define the classes involved and their inheritance relationships; and finally define the attributes and related relationships. Knowledge extraction can be divided into three scenarios according to the form of the knowledge data: extraction from structured data, from semi-structured data and from unstructured data. For the routing protocol knowledge, the main task is extraction from relational databases. There are two standards for this, Direct Mapping and R2RML. Direct Mapping maps table names directly to class names and columns to attributes; this method is not flexible enough to map the knowledge data in the relational database onto the previously defined ontology structure, so the R2RML standard is mainly used, and the D2RQ tool completes the knowledge extraction from the relational database. R2RML is a description language that maps relational databases to RDF-structured knowledge. It can take a table, a view or an SQL query as input and map it logically into triple maps; these triple maps are RDF triples, which finally constitute the knowledge graph. Knowledge is extracted from the relational database through the D2RQ software. D2RQ is a system that accesses a relational database as an independent RDF graph database: there is no need to convert the relational database into RDF data, and SPARQL can be used to query the non-RDF database. The specific process is as follows: first, access the PostgreSQL database through D2RQ to generate a mapping file, which is the key to D2RQ accessing the relational database in RDF form.


However, the automatically generated mapping file does not meet the requirements directly: it needs to be modified according to the ontology structure previously defined with Protégé, and the modified mapping file is then used to generate RDF data through D2RQ. Knowledge reasoning is a way of complementing the knowledge graph: it uses the explicit knowledge in the knowledge graph to infer undiscovered tacit knowledge, thereby expanding the graph. Relatively little relational data is extracted from the relational database through D2RQ, so the relational data needs to be expanded by means of knowledge reasoning. There are many methods of knowledge reasoning over knowledge graphs, including reasoning based on description rules, reasoning based on graph structure and statistical rule mining, reasoning based on knowledge graph representation learning, and methods based on logical probability. Here the reasoning is mainly carried out on the basis of description rules, and the relationships between entity data in the existing knowledge graph are expanded by such rules. The ontology structure defined with Protégé is in OWL format. The OWL ontology language itself provides logical reasoning, but it is relatively simple: it can only perform reasoning such as parent class, subclass and inverse relationships, that is, it only supports inference over predefined ontology axioms. Reasoning based on description rules, by contrast, allows rules to be customized for specific scenarios, either formulated independently or combined with the ontology reasoning provided by the OWL language. The Jena tool is mainly used for knowledge reasoning. The process is as follows: first, define inference rules according to Jena's grammar and the expansion requirements of the knowledge graph, and save the rules as a file to be called by Jena; then start Jena, initialize Jena's data structures and inference engine; then run the inference to obtain the expanded relational data; and finally write all the data into a new knowledge graph data file. After knowledge extraction and knowledge reasoning, the routing protocol knowledge graph is obtained. Typically, knowledge graphs model and represent knowledge in the form of graphs, and they are usually stored as graph data in graph databases or RDF storage systems. An RDF storage system uses the RDF format as its data format; common RDF storage systems include Virtuoso and RDF4J. However, compared with graph databases, RDF storage systems have disadvantages such as complex deployment and inactive communities, which make them less suitable as a knowledge graph store, so a graph database is used here. Graph databases are specially optimized for large-scale graph-structured data. The basic model of a general graph database is the property graph, consisting of nodes, edges and attributes, which matches the knowledge graph constructed by this system well. Common graph databases include Neo4j, JanusGraph, ArangoDB and TigerGraph. At present Neo4j is the most popular: its community is active, its ecosystem is mature, and it has a dedicated, easy-to-use Cypher query language, so Neo4j is used as the repository for the knowledge graph.
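The pipeline above relies on D2RQ and Jena, which are Java tools. Purely as an illustration in Python, the RDF file exported after extraction and reasoning could be sanity-checked with rdflib before being imported into Neo4j; the file name and the predicate URI in the query below are hypothetical.

from rdflib import Graph

g = Graph()
g.parse("routing_protocol_kg.rdf")        # RDF file exported by D2RQ (hypothetical name)

print(len(g), "triples loaded")           # quick size check of the extracted graph
for s, p, o in list(g)[:5]:               # show a few (subject, predicate, object) triples
    print(s, p, o)

# A simple SPARQL query over the graph; the predicate URI is a placeholder.
results = g.query("""
    SELECT ?node ?neighbor WHERE {
        ?node <http://example.org/routing#forwardsTo> ?neighbor .
    } LIMIT 10
""")
for row in results:
    print(row.node, row.neighbor)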
The basic process of knowledge graph storage is as follows: first install Neo4j; since the RDF data needs to be stored in Neo4j, the main operation relies on the semantics toolkit.


The toolkit is downloaded into Neo4j's plugins directory, Neo4j is started, and the import command is run to load the data.

2.3 Security Perception

An opportunistic network routing protocol intrusion detection method based on DCAEs (dilated convolutional autoencoders) is designed to realize the security awareness of opportunistic network routing protocols. First, a new dilated convolutional autoencoder model is designed. It realizes a convolutional autoencoder without pooling operations by means of transposed convolution, learns hidden-layer features in an unsupervised way by minimizing the error between the input and its reconstruction, and learns more global features without loss of information through dilated convolutions. Then the training process of the intrusion detection model based on the dilated convolutional autoencoder is elaborated; it includes data preprocessing, unsupervised pre-training and supervised fine-tuning. The structure of the designed dilated convolutional autoencoder is similar to that of the classical autoencoder. Since the routing protocol knowledge graph data used to train the neural network is text information and a pooling operation would lose some of it, no pooling layer is added after the convolutional layers in the DCAEs; instead, dilated convolution replaces the ordinary convolution operation, giving a larger receptive field without increasing the model parameters, which is more advantageous than pooling. The input of the dilated convolutional autoencoder is mapped into a feature map by the feature function:

a^{b} = g(c * Q^{b} + d^{b})    (1)

In formula (1), c is a two-dimensional numerical matrix converted from a one-dimensional numerical vector; Q^{b} is the weight matrix of the b-th feature map a^{b}; d^{b} is the bias vector of the b-th feature map a^{b}; g(·) is the ReLU activation function; and * denotes the dilated convolution operation. The ReLU activation function is a non-saturating function: when the input is less than 0 the output is 0 and the neuron is suppressed; when the input is greater than 0 the output is proportional to the input. The ReLU function has the following advantages: compared with the sigmoid and tanh functions, using ReLU with stochastic gradient descent significantly speeds up convergence; the calculation is simple, requiring no exponential or reciprocal operations; it alleviates the vanishing-gradient phenomenon and enables deeper networks to be trained; and because ReLU suppresses inputs below 0 it yields sparse activations, which strengthens the sparse expression ability of the neural network. Its disadvantage is that when a neuron receives a large gradient during training, its gradient may become 0 after the parameter update and the neuron may never activate again; if the learning rate is set too high, as much as 40% of the network's neurons may stop activating, so a reasonable learning rate needs to be set in practical applications.


Next, the feature map of the hidden layer is mapped to the reconstruction of the input of the dilated convolutional autoencoder by a transposed convolution operation:

\tilde{c} = g\Bigl( \sum_{b \in B} a^{b} * Q^{b} + d^{b} \Bigr)    (2)

In formula (2), B represents the set of feature maps, and \tilde{c} has the same shape as c. Dilated convolution, also known as hole convolution, introduces a dilation rate hyperparameter in the convolutional layer, that is, there are gaps between filter elements. Compared with the ordinary convolution operation, dilated convolution provides a larger receptive field at the same computational cost. The dilated convolution is illustrated in Fig. 1.

Fig. 1. Dilated convolution diagram

Transposed convolution is also known as fractionally strided convolution. Calling it deconvolution is actually wrong: the mathematical operations of transposed convolution and deconvolution are different. Deconvolution is the mathematical inverse of the convolution operation, whereas transposed convolution is just a conventional convolution; in image tasks it can increase the image size to reconstruct the original spatial resolution. Transposed convolution is a transformation in the opposite direction of the convolution operation: it converts the shape of the output of a convolution back to the shape of the original convolution input while maintaining an effective connection pattern, so it can be used in the decoding layer of a convolutional autoencoder or to map a feature map to a higher-dimensional space. The optimization goal of the dilated convolutional autoencoder is to reduce the difference between the input c and the reconstruction \tilde{c} so that the loss function reaches a minimum; the loss function is chosen as the mean square error:

Y(c, \tilde{c}) = \frac{1}{m} \sum_{j=1}^{m} (c_{j} - \tilde{c}_{j})^{2}    (3)


In formula (3), m represents the number of feature maps, c_{j} refers to the j-th input matrix c, and \tilde{c}_{j} refers to the j-th reconstruction \tilde{c}. A deep neural network can be constructed by stacking multiple dilated convolutional autoencoders, a process similar to stacking ordinary autoencoders: the input of the next convolutional autoencoder is the output of the hidden layer of the previous one, and stacking dilated convolutional autoencoders is an unsupervised, layer-by-layer greedy training process. The training of the intrusion detection model is divided into three stages: data preprocessing, unsupervised pre-training and supervised fine-tuning. First, in the data preprocessing stage, the raw network traffic in Libpcap file format is converted into numerical vectors by a session-based preprocessing module; these numerical vectors are the training samples in the dataset. Second, in unsupervised pre-training, the dilated convolutional autoencoder learns important hierarchical feature representations from a large number of unlabeled training samples. Finally, supervised fine-tuning uses the backpropagation algorithm and a small number of labeled samples to further optimize the features learned during pre-training and thus the model parameters. The network structure used in fine-tuning is the same as that of the final test process: a convolutional neural network without pooling layers, in which the ordinary convolution operation is replaced by dilated convolution. To facilitate the dilated convolution operation, the original training samples are reshaped into image form. The convolutional autoencoder has only one convolutional layer, and the fine-tuning process uses an early stopping mechanism to avoid overfitting. The fully connected layer before the Softmax classifier represents the last learned abstract features, and the Softmax classifier takes the output of the fully connected layer as its input to perform the classification task. The network intrusion detection model based on DCAEs can handle different kinds of raw network traffic, and unsupervised pre-training does not require a large amount of labeled data, so the model has good adaptability and flexibility. Dilated convolutional autoencoders combine the ideas of self-taught learning and representation learning. Self-taught learning is similar to unsupervised feature learning; the difference is that the distribution of the unlabeled data used is not necessarily the same as that of the labeled data. For image classification, for example, there may be many more types of unlabeled images than labeled ones, yet after training the neural network with the unlabeled data the labeled data can still be classified correctly. Representation learning, also known as feature learning, automatically learns useful feature representations from raw data, replacing the traditional machine learning practice of constructing features manually through feature engineering.
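As a minimal illustration of formulas (1)–(3) and of the unsupervised pre-training stage, the following PyTorch sketch builds a one-layer dilated convolutional autoencoder without pooling; the channel counts, kernel size, dilation rate and optimizer settings are illustrative assumptions and not the parameters used in this paper.

import torch
import torch.nn as nn

class DilatedConvAutoencoder(nn.Module):
    """One-layer dilated convolutional autoencoder without pooling, cf. formulas (1)-(3)."""
    def __init__(self, in_channels=1, feature_maps=32, kernel_size=3, dilation=2):
        super().__init__()
        padding = dilation * (kernel_size - 1) // 2   # keeps the spatial size unchanged
        # Encoder: dilated convolution followed by ReLU, as in formula (1)
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, feature_maps, kernel_size,
                      padding=padding, dilation=dilation),
            nn.ReLU(inplace=True),
        )
        # Decoder: transposed convolution followed by ReLU, as in formula (2)
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(feature_maps, in_channels, kernel_size,
                               padding=padding, dilation=dilation),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        a = self.encoder(x)       # hidden feature maps a^b
        x_rec = self.decoder(a)   # reconstruction of the input
        return a, x_rec

def pretrain(model, loader, epochs=10, lr=1e-3, device="cpu"):
    """Unsupervised pre-training: minimise the mean square reconstruction error, formula (3)."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    mse = nn.MSELoss()
    model.to(device).train()
    for _ in range(epochs):
        for x, _ in loader:       # labels are ignored during pre-training
            x = x.to(device)
            _, x_rec = model(x)
            loss = mse(x_rec, x)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model

Stacking several such autoencoders and then attaching a fully connected layer with a Softmax classifier for supervised fine-tuning would follow the three-stage training process described above.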
The advantages of dilated convolutional autoencoders are as follows: the dilated convolutions give the model a larger receptive field, so it can learn more global information without increasing the computational cost; no pooling operation is used, so the input data suffers no information loss; and the unsupervised pre-training process only requires a large amount of unlabeled data, so, given the scarcity of labeled data, the dilated convolutional autoencoder is more suitable for practical applications.


In addition, the dilated convolutional autoencoder has fewer training parameters than a fully connected neural network, making it more efficient and time-saving than other fully connected unsupervised deep learning methods; and since its activation function is the ReLU function there is no vanishing-gradient problem, so dilated convolutional autoencoders do not have the depth restrictions of fully connected neural networks. Generally, the deeper the neural network, the stronger its expressiveness or learning ability, which enables dilated convolutional autoencoders to process high-dimensional data and to build deeper deep learning architectures.

3 Experimental Tests

3.1 Experimental Method Design

The perception performance of the designed security awareness method for opportunistic network routing protocols, based on deep learning and knowledge graphs, is tested through experiments. The number of experiment iterations is set to 20. The opportunistic network is first simulated. The chosen opportunistic network is a mobile social network, which organizes everyone's mobile phones and the surrounding WLAN hotspots into an opportunistic network for Internet access and global communication. When a communicator is far away from a WLAN access point and cannot communicate with it directly, the information can be forwarded by other mobile phones in the network hop by hop until it reaches the WLAN access point; after the information is obtained from the Internet, it is returned to the communicator in the same way. This makes full use of the storage, computing and bandwidth resources of each mobile phone and, to a certain extent, reduces the burden on the 5G network. The simulation parameters are set as shown in Table 2.

Table 2. Opportunistic network simulation parameter settings

Serial number | Simulation parameter | Unit | Setting
1 | Packet size | MB | 5
2 | Packet lifetime | min | 80
3 | Packet generation interval | s | 30
4 | Node transmission rate | Mbps | 6
5 | Communication radius | m | 20
6 | Node cache | MB | 50
7 | Number of nodes | – | 120
8 | Simulation area | m² | 3200 × 5300
9 | Simulation time | h | 30


The routing protocol is simulated on the opportunistic network simulation platform ONE 1.6.0. The selected routing protocol is the Epidemic routing protocol. Its basic idea is that every message in the network can generate copies without limit: when a node moves through the network and encounters a neighbor that has not yet received a copy of a message, it sends that neighbor a copy; the neighbor that receives the copy in turn generates copies and sends them to the other nodes it encounters that do not yet hold the message, and so on, until a copy of the message is delivered to the target node. The Epidemic routing protocol is essentially a flooding algorithm. In theory, every non-isolated node in the network has the opportunity to receive a copy of every message, which maximizes packet transmission; when the traffic in the network is small, it can improve the delivery success rate, reduce the transmission delay and effectively improve the quality of network communication. However, as the number of nodes or the amount of network information increases, this transmission method consumes a large amount of network communication resources, causes network congestion and greatly reduces the success rate of message delivery to the destination. The security perception of the routing protocol is carried out by the designed method, and its security perception accuracy is tested.

3.2 Security Perception Accuracy Test

The methods of literature [1] and literature [2] are used as comparison methods, the security perception accuracy of the three methods is tested, and the test results are shown in Fig. 2.

Fig. 2. Security perception accuracy test results (security perception accuracy, 0.84–1.00, plotted against routing protocol running time, 2000–8000 s, for the design method and the methods of literature [1] and literature [2])


According to the test results in Fig. 2, the security perception accuracy of the design method fluctuates during the initial operation of the routing protocol and stabilizes at about 0.96 after the protocol has run for a period of time, while the security perception accuracy of the two literature methods remains lower. Compared with the two literature methods, the overall security perception accuracy of the designed method is higher. This is because the design method builds a data acquisition platform, constructs a knowledge graph of the routing protocol, and designs a new dilated convolutional autoencoder model.

4 Conclusion

As a new type of wireless network, the opportunistic network has notable characteristics such as intermittent link connectivity, high information transmission delay and limited node resources. The security of its routing protocol is also low, so security awareness of the routing protocol is necessary. In this paper, a security perception method for opportunistic network routing protocols based on deep learning and knowledge graphs is designed. A data acquisition platform is constructed from four functional modules, data acquisition, data analysis, human-computer interaction interface and system management; a knowledge graph of the routing protocol is built; and a new dilated convolutional autoencoder model is designed to achieve relatively accurate security perception. In future work, the security perception of opportunistic network routing protocols will be explored in more depth to provide strong theoretical support for intrusion detection on opportunistic network routing protocols.

References

1. Nuruzzaman, M.T., Feng, H.W.: Beaconless geographical routing protocol for a heterogeneous MSN. IEEE Trans. Mobile Comput. 21, 2332–2343 (2020)
2. Song, H., Liu, L., Pudlewski, S.M., et al.: Random network coding enabled routing protocol in unmanned aerial vehicle networks. IEEE Trans. Wirel. Commun. 19, 8382–8395 (2020)
3. Velusamy, D., Pugalendhi, G., Ramasamy, K.: A cross-layer trust evaluation protocol for secured routing in communication network of smart grid. IEEE J. Sel. Areas Commun. 38(1), 193–204 (2020)
4. Fatemidokht, H., Rafsanjani, M.K., Gupta, B.B., Hsu, C.-H.: Efficient and secure routing protocol based on artificial intelligence algorithms with UAV-assisted for vehicular ad hoc networks in intelligent transportation systems. IEEE Trans. Intell. Transp. Syst. 22(7), 4757–4769 (2021). https://doi.org/10.1109/TITS.2020.3041746
5. Yue-bo, L., Wei-jie, Z.: Implementation of dynamic clustering scheduling algorithm for social network data. Comput. Simul. 38(1), 269–272 (2021)
6. Chen, C., Liu, L., Qiu, T., et al.: Routing with traffic awareness and link preference in internet of vehicles. IEEE Trans. Intell. Transp. Syst. 23, 200–214 (2020)
7. Boudouaia, M.A., Ali-Pacha, A., Abouaissa, A., et al.: Security against rank attack in RPL protocol. IEEE Network 34(4), 133–139 (2020)

Research on Pedestrian Intrusion Detection Method in Coal Mine Based on Deep Learning

Haidi Yuan1(B) and Wenjing Liu2

1 Anhui Sanlian University, Hefei 230601, China

[email protected]
2 Student Affairs Office, Fuyang Normal University, Fuyang 236000, China

Abstract. Due to the complex background environment in coal mines, the timeliness and accuracy of pedestrian intrusion detection are low. In order to improve the accuracy and efficiency of pedestrian detection in complex coal mines, a deep learning-based pedestrian intrusion detection method for coal mines is studied. A pedestrian intrusion detection model for the coal mine is built: the surveillance video images of pedestrians in the mine are grayscaled, denoised and illumination-equalized, the images are preprocessed with a nonlinear transformation method, gradient descriptors are obtained by gradient calculation to form HOG features, texture features are obtained with the LBP operator, and these features are used as the input of a detection model constructed with the restricted Boltzmann machine in deep learning, realizing pedestrian intrusion detection in coal mines. The experimental results show that with the proposed method the average accuracy rate is higher, reaching more than 90%, and the FPS value is larger, reaching more than 40 fps, indicating that the method has higher detection accuracy and faster detection speed.

Keywords: Deep Learning · Restricted Boltzmann Machine · Underground Coal Mine · Pedestrian Intrusion · Intrusion Detection

1 Introduction

Coal mining is a high-risk industry, and its production safety has always been highly valued by society. With the development of digital technology and the continuous advancement of smart mine policies, deep learning technology has great potential in coal mine safety protection. Combining computer vision with the coal mining industry has important research value and social significance for improving work efficiency, improving the production environment and ensuring production safety. Pedestrian detection, a subtask of object detection, aims to identify the precise location of people in video images using computer vision techniques [1]. Pedestrian intrusion detection is a technology that judges, from input pictures or video frames, whether there are intruding pedestrians and outputs their location information; it is an important branch of target detection.


At present, in the pedestrian intrusion monitoring systems used in coal mines, accurate target detection and identification are the key to the monitoring technology and the basis for subsequent target tracking and behavior analysis. In terms of intrusion detection, reference [2] proposed a moving target detection method for video images based on improved background subtraction. The background model is reconstructed using an image-block mean method based on a Gaussian mixture model (GMM); in the target detection stage, mathematical morphology is combined with a wavelet semi-soft threshold function to denoise the detected moving targets, and in the background update stage an adaptive background update method is used. Reference [3] proposed a UAV moving target detection method combining a single Gaussian model with the optical flow method: an improved single Gaussian model is used to model the background of the image captured by the action camera, the multiple Gaussian models of the previous frame are fused to perform motion compensation, the obtained foreground image is used as a mask to extract feature points for optical flow tracking, and the motion trajectories of the sparse feature points are hierarchically clustered. Reference [4] proposed a deep learning target recognition simulation study incorporating the inter-frame difference method: within the deep learning framework, the inter-frame difference method is integrated into the recognition process to supplement and enhance the candidate-frame segmentation image, and the candidate frames are screened by the NMS algorithm. Reference [5] proposed a dynamic pedestrian intrusion detection method based on PIDNet, designing a PID task with feature sharing, a module for feature clipping and a branch network for feature compression, and establishing a benchmark data set to evaluate the method. Reference [6] proposed a pedestrian detection method based on roadside light detection and ranging: to improve the real-time performance of detection, an ROI-selected octree is introduced and improved to filter the background in each frame, improving the clustering speed, and detection is completed with an adaptive-radius Euclidean clustering search. The above methods realize the detection and recognition of moving objects to a certain extent. However, in the mine environment the detection and recognition rate for moving objects in video images is low and the detection effect is poor, which in turn affects subsequent behavior analysis. In recent years, with the improvement of hardware, deep learning technology has developed rapidly. Deep learning is more robust and has achieved great results in computer vision fields such as image classification and target detection, and a large number of detection algorithms based on deep learning have appeared. In this context, a deep learning-based pedestrian intrusion detection method is proposed.

2 Pedestrian Intrusion Detection Model in Coal Mine

Because coal mining is a high-risk industry, a large number of surveillance cameras are installed at the entrances, exits and underground tunnels of mines; however, these video resources have not yet been used effectively. Video images in the mine suffer from a complex environment, dim light and strong noise interference. In addition, the underground cameras are installed at a high position, so the pedestrians captured in the surveillance video tend to be small.


They also suffer from low resolution, scale change and pedestrian overlap. Owing to the special underground environment, the underground images contain the distortion, multi-scale, occlusion and illumination problems common to target detection and pedestrian detection, so underground pedestrian detection has high research value and significance. Pedestrian detection is a typical target detection problem in the field of computer vision. Specifically, a classification model is learned from a training sample set by a classification algorithm, and the model is then used to detect and classify test samples. It mainly includes two steps, target positioning and target classification: target positioning determines the location of pedestrians, and target classification determines whether a candidate is a pedestrian target. The framework of pedestrian detection is shown in Fig. 1.

Fig. 1. Framework for pedestrian detection (training branch: sample pretreatment → feature extraction → classifier training → classifier model; detection branch: image preprocessing → window selection → feature extraction → classifier discrimination → detection result)

In the training phase, a training sample set is first given, in which the positive samples are pedestrian targets and the negative samples are collected from background images. A specific feature extraction algorithm is then applied to the sample set to convert it from image space into feature space, and a machine learning algorithm is trained to obtain a classification model that determines the sample category. In the detection process, candidate detection windows are obtained through a sliding-window strategy, the same feature extraction algorithm converts them into feature space, and the trained classification model screens them. Feature selection and extraction is therefore one of the key factors determining detection performance.
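The detection phase just described can be sketched as a simple sliding-window loop; the window size, stride and score threshold below are illustrative assumptions, and classifier and extract_features stand in for the trained model and the HOG/LBP feature extraction discussed later in this section.

def sliding_window_detect(image, classifier, extract_features,
                          window=(128, 64), stride=8, threshold=0.5):
    """Detection-phase sketch: slide a fixed-size window over a NumPy image, extract
    features for each window and keep the windows that the classifier accepts."""
    h, w = image.shape[:2]
    win_h, win_w = window
    detections = []
    for y in range(0, h - win_h + 1, stride):
        for x in range(0, w - win_w + 1, stride):
            patch = image[y:y + win_h, x:x + win_w]
            score = classifier(extract_features(patch))   # placeholders for model and features
            if score >= threshold:
                detections.append((x, y, win_w, win_h, score))
    return detections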


Improving the feature extraction algorithm so that the features have stronger representation and discrimination ability is therefore an important means of improving detection performance.

2.1 Preprocessing of Pedestrian Images in Coal Mines

The underground environment of coal mines often contains noise and dust, which degrades the quality of the captured images, and camera shake can also make the images unclear and blurred, which in turn affects detection and recognition. Therefore, image preprocessing is required before pedestrian target detection [7]. The preprocessing stage selects the frames in the video that are unclear or of poor quality and applies a series of preliminary operations so that they meet the requirements of subsequent feature extraction, after which target detection is performed.
(1) Grayscale conversion. Each color image is composed of three basic colors, red, green and blue, denoted R, G and B. Each component has 256 values, ranging from 0 to 255, so about 16.7 million colors can be combined. Graying the image means setting R = G = B, so the gray value has 256 levels, 0–255, representing 256 shades. After a color image is grayed, the range of pixel values is greatly reduced, which saves a great deal of work in subsequent image processing and effectively improves processing speed. The component method selects one of the R, G or B values as the gray value, chosen according to actual use:

F_1(x, y) = R(x, y),   F_2(x, y) = G(x, y),   F_3(x, y) = B(x, y)    (1)

where F_i(x, y), i = 1, 2, 3 is the gray value at the pixel point (x, y).
(2) Image denoising. At present, video images are processed digitally by computer. In the process of image acquisition or conversion, interference factors such as random noise are introduced, which makes detection more difficult and affects subsequent processing. Image denoising, also called smoothing, can be carried out in the frequency domain or in the spatial domain. Frequency-domain processing requires a Fourier transform of the image, analysis of its spectral components and an inverse transform to obtain the result; this requires a large amount of calculation and is difficult to reconcile with real-time requirements [8]. Here, the neighborhood averaging method is used for denoising. Neighborhood averaging, also known as mean filtering, processes the local spatial domain. Its expression is:

D(x, y) = \frac{1}{N} \sum_{(x, y) \in S} d(x, y)    (2)

In the formula, d (x, y) is the original image, S is the predetermined neighborhood, N is the number of pixels, and D(x, y) is the processed pixel value.
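As an illustration of formulas (1) and (2), the following NumPy sketch applies the component-method grayscale conversion and a neighborhood-averaging (mean) filter; the 3 × 3 window size is an illustrative choice.

import numpy as np

def to_gray_component(img_rgb, component=0):
    """Component-method grayscale, formula (1): use one of the R, G, B channels as the gray value."""
    return img_rgb[:, :, component].astype(np.float32)

def mean_filter(gray, k=3):
    """Neighborhood-averaging denoising, formula (2): average over a k x k window."""
    pad = k // 2
    padded = np.pad(gray, pad, mode="edge")
    out = np.zeros_like(gray, dtype=np.float32)
    h, w = gray.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out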


(3) Illumination equalization. The underground environment of coal mines is complex, and many images are affected: the background color may be similar to the target color, and the light sources are mostly localized, so the captured image contains extremely bright and extremely dark regions. When feature points are extracted, the very low signal-to-noise ratio in extremely dark regions means that almost no effective feature points can be detected, which causes trouble for the image registration step. In an image with uneven illumination, the overall gray value is low and image analysis and recognition become more difficult. The light in the mine is relatively dim, so lighting equipment is installed to facilitate underground work; the intensity of these lights is high and concentrated, which affects the detection of moving targets and increases the false detection rate. Illumination equalization therefore has to be performed on the image. The key problem in preprocessing pedestrian images under uneven illumination is how to remove the influence of the illumination factor and then, to a certain extent, enhance the details in the dark regions of the original image. Histogram equalization enhances an overall dark image well, but it is not ideal for images with a large overall grayscale range and highlight regions. According to homomorphic filtering and related algorithm theory, the original image can be decomposed into an illumination component and a detail component, but both approaches have shortcomings in the decomposition process. Two-dimensional empirical mode decomposition is a completely data-driven process that works in the spatial domain and overcomes these shortcomings: it decomposes the image into a series of components corresponding to different frequencies and a residual component corresponding to the overall trend of the image. Since the illumination component corresponds to low-frequency information and the detail component to high-frequency information, the obtained residual component can be used as an approximate estimate of the illumination component. After the illumination component is removed, the remaining IMF components are overall dark because the illumination factor has been removed; histogram equalization can then be applied to them, followed by appropriate illumination compensation, to obtain an underground image with uniformly distributed illumination and enhanced detail. For the image quality problems caused by uneven illumination of underground pedestrian images, the algorithm in this paper is as follows:
Step 1: First determine whether the input original pedestrian image is an image with uneven illumination.
Step 2: Use the two-dimensional empirical mode decomposition method to extract the illumination component and detail component of the original, unevenly illuminated underground pedestrian image.
Step 3: Enhance the detail component of the pedestrian image with the histogram equalization algorithm to obtain a detail-enhanced image. The process is as follows:
1) List the gray levels A_p, p = 0, 1, ..., 255, of the original image, where A_p is the p-th gray level.


2) Count the number of pixels M_p at each gray level, where M_p is the number of pixels with gray value A_p.
3) Calculate the histogram:

B(A_p) = \frac{M_p}{n}    (3)

where n is the total number of image pixels.
4) Calculate the cumulative histogram:

C_p = \sum_{p=1}^{255} B(A_p)    (4)

5) Determine the mapping relationship between A_p and C_p, and count the corresponding pixels M_p.
6) Calculate a new histogram B(C_p). According to B(C_p), calculate the equalized mapping value corresponding to each gray-level pixel in the area to be equalized, and replace the pixel value before equalization with this mapping value, so as to obtain a new gray-level-equalized and clearly enhanced image.
Step 4: Determine whether illumination compensation needs to be performed on the detail-enhanced image. If necessary, adjust the grayscale range of the light intensity to obtain a suitable illumination compensation component.
Step 5: Superimpose the detail-enhanced component that needs illumination compensation and the modified illumination component to obtain the enhanced pedestrian image. If illumination compensation is not required, the detail-enhanced image is the final result.

2.2 Image Feature Extraction

In reality, the environment in which pedestrians appear is complex and changeable and is easily disturbed by factors such as posture, clothing, lighting and occlusion, so the data are nonlinear and noisy, which makes the detection task challenging. How to select features that are more robust in different scenarios, describe pedestrians better, and discriminate pedestrians from the background to the greatest degree is the key issue in improving detection performance. Commonly used feature descriptors include simple low-level features and complex high-level features. The simple low-level features are the edge contour information, color change information and structural texture information of the image [9]. Edge contour information mainly refers to regions where the brightness changes sharply; it effectively describes the contours in the image and provides a great deal of useful information about pedestrian targets. Structural texture information refers to patterns that change regularly within a certain area; it describes the essential characteristics of the image and characterizes the pedestrian target well. Color change information is a basic element of the image, and the appearance of the target can be described by color changes. The advantage of simple low-level features is that each feature is simple and fast to compute.


Their disadvantage is that the image information they contain is too limited and their discrimination ability is poor, so it is difficult to achieve high detection accuracy with them alone. Based on this, two types of features are extracted here: texture features and HOG features.
(1) HOG features. HOG cascades multiple gradient direction histograms that describe local edge information, so that the overall edge structure of the target can be characterized and the shape and appearance of the target can be described effectively. The HOG feature is a statistic of local information, so even when the target undergoes small deformations it retains strong anti-interference ability and good robustness, and it is very effective for representing the overall structure of a pedestrian. The specific extraction process is as follows: first, the image is divided by pixels into small contiguous regions called cells, and several cells are combined into blocks. The gradient directions of all pixels in each cell are counted, and the cells within the same block are contrast-normalized to reduce the influence of illumination. The final HOG descriptor is the concatenation of the gradient histogram vectors of all overlapping blocks. The calculation proceeds as follows:
1) Standardize the color space. To make the extracted features insensitive to lighting, the image is preprocessed with a nonlinear transformation to enhance its contrast and thus reduce the influence of illumination. This step is not particularly critical for the HOG computation, because block normalization also reduces the influence of illumination well.
2) Gradient calculation. Before the gradient directions of all pixels are counted, the gradient magnitude of each pixel must be computed. The gradient direction reflects the local edge and structural information of the image, which is very effective for detection. The gradient is determined by the horizontal and vertical differences at the pixel, and the final gradient descriptor is obtained with a first-order symmetric template. Let L_x(x, y) and L_y(x, y) denote the horizontal and vertical gradients at pixel (x, y):

L_x(x, y) = J(x + 1, y) - J(x - 1, y),   L_y(x, y) = J(x, y + 1) - J(x, y - 1)    (5)

Then the gradient magnitude and gradient direction at pixel (x, y) can be expressed as follows.
Gradient magnitude:

L(x, y) = \sqrt{L_x(x, y)^2 + L_y(x, y)^2}    (6)

Gradient direction:

V = \arctan \frac{L_y(x, y)}{L_x(x, y)}    (7)
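A small NumPy sketch of the gradient computation in formulas (5)–(7) is given below; using arctan2 rather than arctan is an implementation choice that avoids division by zero and is not prescribed by the paper.

import numpy as np

def gradient_magnitude_direction(J):
    """Per-pixel gradients with the first-order symmetric template [-1, 0, 1],
    then magnitude and direction as in formulas (5)-(7)."""
    J = J.astype(np.float32)
    Lx = np.zeros_like(J)
    Ly = np.zeros_like(J)
    Lx[:, 1:-1] = J[:, 2:] - J[:, :-2]    # horizontal difference, formula (5)
    Ly[1:-1, :] = J[2:, :] - J[:-2, :]    # vertical difference, formula (5)
    L = np.sqrt(Lx ** 2 + Ly ** 2)        # gradient magnitude, formula (6)
    V = np.arctan2(Ly, Lx)                # gradient direction, formula (7)
    return L, V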


It can be seen from formula (5) that the gradient of each component is calculated with the first-order symmetric template [-1, 0, 1], and the final gradient magnitude and direction are then calculated with formulas (6) and (7).
3) Gradient direction histogram construction. The image is divided into multiple non-intersecting small regions (cells), and adjacent 2 × 2 cells form a block. The gradient information in each cell is counted, and 9 projection channels (bins) are usually generated according to the gradient direction: the gradient magnitudes are weighted and projected into these 9 bins, which divide the gradient direction evenly into 9 direction blocks over 360 degrees. That is, 0–180 degrees is divided into 9 blocks, and 180–360 degrees is mapped onto the same 9 direction blocks diagonally; if the gradient direction at a pixel is between 60 and 80 degrees, for example, it is mapped to the fourth bin. The weight of the projection is determined by the gradient magnitude of the pixel. By counting all the pixels in a cell, a 9-dimensional feature vector is obtained for that cell, and concatenating the vectors of all cells in a block yields a 2 × 2 × 9 = 36-dimensional feature vector.
4) Block normalization. To further reduce the influence of light and shadow, the feature vector in each block is locally normalized, which gives a better detection effect and makes HOG extraction more robust. Commonly used normalization methods are L2-norm, L1-norm and L1-sqrt; the L2-norm is mainly used in pedestrian detection.
5) HOG feature generation. The gradient direction histograms obtained in all blocks are concatenated to form the HOG descriptor of the whole image. Blocks overlap, so the features of the same cell are counted several times. Although this introduces redundant computation, it cannot be simplified away, because overlapping cell features better reflect the context of pedestrians, represent the pedestrian target better and achieve better detection results.
(2) Texture features. Texture is the regular arrangement of pixels on the surface of objects in nature; countless texture primitives are distributed regularly over the surface of things according to their internal structure or natural laws. From the tactile point of view, textures are usually coarse-grained and have a certain roughness, but from a scientific point of view even smooth marble or clouds flowing in the air have texture characteristics on their surfaces, which simply cannot be seen because of the limits of the human visual range. Researchers have done a great deal of work on texture and have found that it is very stable, robust to external lighting and noise, and invariant to the movement and rotation of the original image. Its disadvantage is that texture is a surface pixel-arrangement feature; as the information used to uniquely identify an image it is not convincing enough to identify the image category quickly and efficiently, which affects recognition accuracy. For this reason, researchers often combine several kinds of image features.


(2) Texture features

Texture is the regular arrangement of pixels on the surface of objects in nature: countless texture primitives are distributed over the surface of things according to their internal structure or natural laws. From a tactile point of view, textures are mostly coarse-grained and have a certain roughness, but from a scientific point of view even smooth marble and clouds drifting in the air have texture characteristics on their surfaces, which simply cannot be seen because of the limits of the human eye. Researchers have studied texture extensively and found that it is stable, robust to external lighting and noise, and unchanged by translation and rotation of the original image. Its disadvantage is that texture is only a surface pixel-arrangement feature; as information used to uniquely identify an image it is not discriminative enough to identify the image category quickly and reliably, which affects recognition accuracy. Therefore, researchers often combine several image features so that their respective advantages complement each other, making the extracted features more representative and easing subsequent work.

The LBP operator is another powerful tool for extracting texture information and is widely used in many fields of machine vision. Its core idea is to first collect the neighborhood information of each local pixel and then summarize the gray-level information of all pixels to obtain the texture information of the entire image. The advantages of extracting texture features with the LBP operator lie in its strong stability: in most cases it is essentially unaffected by external light and noise, it has strong rotation invariance, and its computation is simple, so it has achieved good results in many machine-vision applications. This paper improves the encoding of the original LBP mode by adding regular features of the image texture space to the calculation of LBP code values, which strengthens the representation ability of the texture features and further improves image recognition accuracy.

The texture feature of the whole image is obtained by accumulating and counting the LBP values of many local regions, so a slight difference in the features of each small local region can have a large impact on subsequent analysis and processing. The original LBP operator is defined on a 3*3 square kernel: the central pixel value of the kernel is taken as the threshold of the local area, and the 8 surrounding pixel values are its neighborhood values. The LBP value of the local area is calculated by comparing the gray level of each neighborhood value with the threshold. The calculation process is as follows:

Step 1: Divide the converted grayscale image into multiple square local areas to facilitate the calculation of the LBP value. Each local area can be regarded as a 3*3 square kernel, giving the pixel values of 9 positions;

Step 2: Take the central pixel value as the threshold and compare it in turn with the 8 neighboring pixel values; neighbors not smaller than the threshold are coded as 1 and the others as 0, finally giving an 8-bit binary sequence (for example, 01111100);

Step 3: According to the weight corresponding to each binary bit, convert the binary sequence to a decimal number, which is the feature value of the central pixel of the local area.

The texture feature of the entire pedestrian image is the aggregate of the LBP feature values of all local areas: the number of occurrences of each distinct LBP value is counted, and these statistics are processed and analyzed to obtain the texture information of the image.
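A minimal sketch of the original (unimproved) 3*3 LBP code described in Steps 1–3 follows; the random grayscale image and the clockwise bit ordering are illustrative assumptions, and the paper's proposed improvement with texture-space regular features is not included.

```python
import numpy as np

def lbp_code(patch):
    """Original 3*3 LBP: threshold the 8 neighbors against the center and weight the bits."""
    center = patch[1, 1]
    # Clockwise neighbor order starting at the top-left corner (an illustrative convention).
    neighbors = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                 patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    bits = [1 if v >= center else 0 for v in neighbors]          # Step 2: 8-bit binary sequence
    return sum(b << (7 - i) for i, b in enumerate(bits))         # Step 3: binary -> decimal

def lbp_histogram(gray):
    """Texture feature of the whole image: count the occurrences of each LBP value (Step 1 + statistics)."""
    hist = np.zeros(256, dtype=np.int64)
    for y in range(1, gray.shape[0] - 1):
        for x in range(1, gray.shape[1] - 1):
            hist[lbp_code(gray[y - 1:y + 2, x - 1:x + 2])] += 1
    return hist

gray = (np.random.rand(32, 32) * 255).astype(np.uint8)
print(lbp_histogram(gray).sum())   # number of interior pixels = 30 * 30
```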


2.3 Intrusion Detection Based on RBM

Deep learning lies at the intersection of neural networks, artificial intelligence, graphical modeling, optimization, pattern recognition, signal processing and other research fields. It is a multi-level machine learning approach based on representation learning that models complex relationships in data. An observation can be represented in many ways, for example pixels represented by a matrix of intensity values, and some representations make learning tasks easier for algorithms; the goal of representation learning is to seek better representations and to build better models to learn them. There are many deep learning structures, most of which are branches of a few original structures. Since they are not evaluated on the same datasets and have different applicability and conditions, it is not always possible to compare the performance of multiple architectures [10]. Deep learning is a rapidly developing field, with new systems, structures and algorithms emerging constantly.

A deep network is a network with at least one hidden layer. Like shallow networks, deep networks can model complex nonlinear systems, but the multi-level structure provides a higher level of abstraction and thereby improves the expressiveness of the model. The Restricted Boltzmann Machine (RBM) is a two-layer bipartite undirected graphical model, a special structure of the Boltzmann Machine (BM), and is usually used as the basic unit for building deep structures. It consists of a set of binary hidden units g and binary or real-valued visible units q. There are connections only between the visible layer and the hidden layer; there are no connections among the visible units or among the hidden units. In an RBM, let the pixels correspond to the visible units q (n nodes) and the extracted features correspond to the hidden units g (m nodes); the system (q, g) formed by the visible and hidden layers has the energy:

R(q, g) = -\sum_{i=1}^{n} \alpha_i q_i - \sum_{j=1}^{m} \beta_j g_j - \sum_{i=1}^{n} \sum_{j=1}^{m} q_i w_{ij} g_j    (8)

Among them, α_i and β_j are the biases of the visible layer and the hidden layer, respectively, and w_{ij} is the weight between the visible layer and the hidden layer. It can be seen directly from the energy formula that, given the visible layer, the hidden units are independent of each other, and, given the hidden layer, the visible units are independent of each other. In particular, a binary unit is an independent Bernoulli random variable, while a real-valued visible unit is a Gaussian variable with diagonal covariance; usually, the expected value of a unit is used as its activation value. The joint probability distribution over a visible vector and a hidden vector is obtained by exponentiating and normalizing the energy function:

G(q, g) = \frac{e^{-R(q, g)}}{H}    (9)

Among them, H is a normalization constant obtained by summing e^{-R(q, g)} over all pairs of visible and hidden vectors. The probability that the network assigns to a visible vector is obtained by summing over all hidden vectors:

G(q) = \frac{1}{H} \sum_{g} e^{-R(q, g)}    (10)


Since the units within each layer are conditionally independent given the other layer, we have:

G(g|q) = \prod_{j=1}^{m} G(g_j|q),   G(q|g) = \prod_{i=1}^{n} G(q_i|g)    (11)

When the state of the visible layer q is given, the probability that the binary state of hidden unit g_j is 1 is G(g_j = 1|q):

G(g_j = 1|q) = f\left(\beta_j + \sum_{i=1}^{n} q_i w_{ij}\right)    (12)

When the state of the hidden layer g is given, the probability that the binary state of visible unit q_i is 1 is G(q_i = 1|g):

G(q_i = 1|g) = f\left(\alpha_i + \sum_{j=1}^{m} g_j w_{ij}\right)    (13)

where f(·) is the activation function. It can be seen from formulas (12) and (13) that the learned weights and biases directly determine the conditional distributions G(g|q) and G(q|g), and indirectly determine the joint distribution G(q, g) and the marginal distributions G(q) and G(g). Sampling from the joint distribution is difficult, but it can be approximated by "alternating Gibbs sampling": start from a random image, update all feature units in parallel with formula (12) and then all pixel units with formula (13), and alternate the two steps. After Gibbs sampling has run long enough, the network reaches "thermal equilibrium"; the states of the pixel and feature detectors keep changing, but the probability of finding the system in any particular binary configuration no longer changes.

An RBM network has only an input (visible) layer and an output (hidden) layer. When training an RBM, the number of visible units is generally determined by the original input data, while the number of hidden units must be chosen. The training process of the RBM and the optimization of its weights are as follows:

Step 1: Randomly initialize the weight matrix w, the visible-layer bias vector α and the hidden-layer bias vector β of the network.

Step 2: Assign the original input data to the visible units and propagate the visible input matrix q forward. According to formula (12) and the G(g|q) of formula (11), the activation probability G(g|q) of the hidden output matrix g is computed from the visible layer. The node-by-node product of the input matrix q and the activation probability of g gives the forward-propagation (positive) statistics.

Step 3: The G(g|q) output in Step 2 is a probability value for g; it is randomly binarized into a binary variable.

Step 4: Use the binarized g from Step 3 to propagate in the reverse direction. According to formula (13) and the G(q|g) of formula (11), the activation probability of the visible matrix q is calculated, that is, a reconstruction of the visible layer is obtained.


Step 5: Propagate the reconstruction obtained in Step 4 forward again and calculate the activation probability of the hidden matrix g′ according to formula (12) and the G(g|q) of formula (11). As in Step 2, the node-by-node product of the reconstruction and the activation probability of g′ gives the backward-propagation (negative) statistics.

Step 6: Subtract the activation probability of g′ obtained in Step 5 from the activation probability of the hidden layer g obtained in Step 2; the result is the increment of the hidden-layer bias β. Subtract the reconstructed visible activation from the visible input q; the result is the increment of the visible-layer bias α. Subtract the backward-propagation statistics of Step 5 from the forward-propagation statistics of Step 2; the result is the weight increment between the input layer and the output layer. In each iteration the weights and biases are updated simultaneously, so they should converge together; combined with the corresponding learning rate, the weights and biases are updated. In addition, to ease the tension between learning-rate speed and algorithm stability, a momentum term l is introduced, and the previous increment is multiplied by l before updating. For example, the momentum l may initially be set to 0.5 and then be increased to 0.9 once the reconstruction error has settled down after its large initial decline.

Step 7: Repeat Steps 2 to 6 until convergence or until the maximum number of iterations is reached. In this way, the training of one RBM is completed.
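The following is a minimal numpy sketch of one CD-1 style update implementing Steps 2–6 above (formulas (12) and (13) with a sigmoid activation). The layer sizes, learning rate, momentum schedule and random training data are illustrative assumptions rather than the authors' settings.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 64, 32                        # visible units (pixels), hidden units (features)
w = 0.01 * rng.standard_normal((n, m))
alpha = np.zeros(n)                  # visible biases
beta = np.zeros(m)                   # hidden biases
dw, dalpha, dbeta = np.zeros_like(w), np.zeros_like(alpha), np.zeros_like(beta)

def f(x):                            # activation function f(.) used in (12) and (13)
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(q, lr=0.1, momentum=0.5):
    global w, alpha, beta, dw, dalpha, dbeta
    # Step 2: forward pass, G(g_j = 1 | q) per formula (12); positive statistics q g^T.
    g_prob = f(beta + q @ w)
    pos = q[:, None] * g_prob[None, :]
    # Step 3: stochastic binarization of the hidden probabilities.
    g_bin = (rng.random(m) < g_prob).astype(float)
    # Step 4: backward pass, reconstruction of the visible layer per formula (13).
    q_rec = f(alpha + g_bin @ w.T)
    # Step 5: second forward pass on the reconstruction; negative statistics.
    g_rec = f(beta + q_rec @ w)
    neg = q_rec[:, None] * g_rec[None, :]
    # Step 6: momentum-smoothed increments, then simultaneous parameter update.
    dw = momentum * dw + lr * (pos - neg)
    dalpha = momentum * dalpha + lr * (q - q_rec)
    dbeta = momentum * dbeta + lr * (g_prob - g_rec)
    w += dw; alpha += dalpha; beta += dbeta
    return np.mean((q - q_rec) ** 2)  # reconstruction error, used to schedule the momentum

# Step 7: repeat over the training data until convergence or a maximum number of iterations.
for epoch in range(5):
    err = np.mean([cd1_update(rng.integers(0, 2, n).astype(float)) for _ in range(100)])
    print(epoch, round(err, 4))
```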

3 Experimental Tests

In this paper, the above structure is evaluated on an underground (coal mine) pedestrian detection dataset to verify the performance of the proposed algorithm, and the RBM-based pedestrian intrusion detection algorithm for coal mines is also evaluated on the public VOC 07 data set.

3.1 Experimental Dataset

The coal mine underground data set in this paper comes from underground coal mine surveillance video. The entire data set contains 23,210 pictures, all 1280 × 720 in size; 11,605 pictures are selected as the training set and 11,605 as the test set. The number of pedestrians per picture ranges from 1 to 20, of which 80% are pedestrians without intrusion behavior and 20% are pedestrians with intrusion behavior.

3.2 Detection Indicators

E is used as the evaluation index in the experiment; it is the average accuracy rate over all categories in the intrusion detection model.


The larger the value of E (the closer it is to 1), the higher the detection accuracy of the method; the smaller the value of E, the lower the detection accuracy. The calculation formula is:

E = \frac{\sum_{k=1}^{u} h_k}{u}    (14)

In the formula, u is the number of target categories in the detection task and h_k is the average accuracy of the k-th category of detection targets. In practical applications, deep learning models also need to meet timeliness requirements, so the detection speed must be evaluated. A common approach is to measure the time required to process a single image; the number of frames that can be detected per second can also be used, and the latter is chosen here to evaluate the detection speed of the model. FPS is the number of frames per second; movies generally run at 24 fps, televisions at 30 fps and LCD monitors at 60 fps, and the FPS generally needs to stay above 30 fps for the picture to look smooth. In deep learning, FPS represents the number of frames the model can detect per second and intuitively expresses the timeliness of the model: the larger the FPS value, the more frames can be detected per second and the better the timeliness.
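As a small illustration of the two indicators (not tied to the reported experiments), the following computes E as the mean of per-category average accuracies per formula (14) and converts a measured per-image processing time into FPS; the numbers are made-up placeholders.

```python
# Hypothetical per-category average accuracies h_k for u = 2 categories
# (intrusion / no intrusion); the values are placeholders, not the paper's results.
h = [0.93, 0.95]
E = sum(h) / len(h)                  # formula (14): mean over the u categories
print(f"E = {E:.4f}")                # 0.9400

seconds_per_image = 0.0236           # hypothetical single-image processing time
fps = 1.0 / seconds_per_image        # frames detectable per second
print(f"FPS = {fps:.2f}")            # ~42.4, above the 30 fps smoothness threshold
```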

3.3 Experimental Results and Analysis

The research method is compared with the improved background subtraction method, the combination of single Gaussian and optical flow method, and the inter-frame difference method, respectively, and the comparison results of the average accuracy of the different methods are shown in Fig. 2.

[Fig. 2: average accuracy rate (%) for intrusion and no-intrusion pedestrians under each of the four methods.]

Fig. 2. Comparison of the average accuracy of different methods

The FPS comparison results of different methods are shown in Table 1.

Table 1. FPS comparison table of different methods

Method | FPS/fps
Research method | 42.36
Improved background subtraction | 30.21
Combining single Gaussian with optical flow method | 31.52
Inter-frame difference method | 24.58

As can be seen from Fig. 2 and Table 1, compared with the improved background subtraction, the combination of single Gaussian and optical flow, and the inter-frame difference method, the research method achieves a higher average accuracy rate and a larger FPS value, which shows that it detects both more accurately and faster.

4 Conclusion

The mining industry is transforming toward smart mines and accelerating automation and intelligence, but safety has not been well guaranteed. The underground environment is dark and complicated, and staff can easily stray into dangerous areas. At present, danger zones are monitored mainly by staff working in shifts who watch each work point on small monitor screens, which places high demands on them; over long periods they are prone to fatigue and inattention, and may fail to detect danger in time or misjudge certain behaviors. To this end, a deep learning-based pedestrian intrusion detection method for coal mines is studied. Tests show that the accuracy and speed of pedestrian intrusion detection in coal mines are improved to a certain extent. However, the algorithm in this paper does not consider the problem of multiple pedestrians occluding each other, so the next step of the research will focus on solving pedestrian occlusion in the coal mine.

Acknowledgement. This work was supported by the Anhui Provincial Education Department Foundation under grant no. KJ2021A1176.

References

1. Astolfi, G., Rezende, F.P.C., Porto, J.V.D.A., et al.: Syntactic pattern recognition in computer vision: A systematic review. ACM Computing Surveys (CSUR) 54(3), 1–35 (2021)
2. Junhui, Z., Zhenhong, J., Jie, Y., et al.: Moving object detection in video image based on improved background subtraction. Comp. Eng. Design 41(05), 1367–1372 (2020)
3. Changjun, F., Lingyan, W., Quanyong, M., et al.: Detection of moving objects in UAV video based on single gaussian model and optical flow analysis. Comp. Sys. Applicat. 28(02), 184–189 (2019)


4. Hui, W., Lijun, Y., Rong, S., et al.: Research on simulation of deep learning target recognition based on inter-frame difference method. Experim. Technol. Manage. 36(12), 178–181 and 190 (2019)
5. Sun, J., Chen, J., Chen, T., et al.: PIDNet: An efficient network for dynamic pedestrian intrusion detection. In: Proceedings of the 28th ACM International Conference on Multimedia, pp. 718–726 (2020)
6. Gong, Z., Wang, Z., Zhou, B., et al.: Pedestrian detection method based on roadside light detection and ranging. SAE International Journal of Connected and Automated Vehicles 4(12-04-04-0031), 413–422 (2021)
7. Yang, Y., Su, W., Qin, Y., et al.: Research on object detection method of high-speed railway catenary image based on semantic label. Comp. Simula. 37(11), 146–149 and 188 (2020)
8. Wang, R., Chen, H., Guan, C., et al.: Research on the fault monitoring method of marine diesel engines based on the manifold learning and isolation forest. Appl. Ocean Res. 112(2), 102681 (2021)
9. Calvo-Bascones, P., Sanz-Bobi, M.A., Welte, T.M.: Anomaly detection method based on the deep knowledge behind behavior patterns in industrial components. Application to a hydropower plant. Computers in Industry 125(5), 103376 (2021)
10. Shatalin, R.A., Fidelman, V.R., Ovchinnikov, P.E.: Incremental learning of an abnormal behavior detection algorithm based on principal components. Comput. Opt. 44(3), 476–481 (2020)

Personalized Recommendation Method of College Art Education Resources Based on Deep Learning

Sen Li1(B) and Xiaoli Duan2,3

1 School of Fine Arts, Fuyang Normal University, Fuyang 236037, China

[email protected]

2 China Academy of Art, Hangzhou 310009, China
3 Haikou University of Economics, Haikou 571132, China

Abstract. In order to solve the problem of uneven distribution of educational resources and enable the campus network host to recommend art education resources according to students' learning interests, a personalized recommendation method for college art education resources based on deep learning is studied. The deep learning architecture of college art education is designed, and the basic architecture, the upgraded architecture and the complete architecture layout of the campus network learning algorithm are determined. According to the principle of education weight distribution, the similarity of educational resource information is calculated; based on the deep learning algorithm and the resource vectors to be allocated, the solution expression of the recommendation table is derived and the personalized recommendation list of college art education resources is formulated, completing the design of the personalized recommendation method for college art education resources based on deep learning. The implementation results show that, under the proposed deep learning recommendation method, the proportion of learning allocation of the selected art education resources exceeds 90%, and the uneven distribution of education resources is well mitigated.

Keywords: Deep Learning · Art Education Resources · Personalized Recommendation · Education Weight · Similarity · Recommendation List

1 Introduction

Deep learning is meaningful learning based on understanding and the pursuit of transfer and application. It promotes the development of higher-order knowledge and its application in new situations, or the generation of new higher-order knowledge, by encouraging students to participate deeply in learning and to adopt advanced learning strategies appropriately. "Learning based on understanding for transfer" focuses on multi-angle deep reflection and on applying knowledge and ability in new situations or generating new high-level knowledge and ability, which is the purpose of deep learning.


Under this purpose, deep learning "focuses on the development of high-level knowledge and ability", which is the result of deep learning: the development of knowledge and abilities such as implementation (i.e., application in new situations), analysis, evaluation and creation. To achieve this result, deep learning focuses on guiding students to "adopt the deep learning strategy appropriately". As a way to achieve deep learning results, it can be judged by three criteria: "whether it is based on understanding", "whether it pursues transfer and application", and "whether it is active". As the premise of this approach, "deep participation in learning" concerns the degree of students' engagement in learning and their flow state after engagement [1].

At present, traditional art education resources are mainly textbooks and teaching materials, and their form is relatively simple, which cannot fully reflect the visual nature of fine arts. Fine arts can also be called plastic arts or visual arts: the art of creating visual images with spatial and aesthetic value from certain materials (such as paper, cloth, wood, clay, marble, plastic, etc.) through plastic means. The rapid development of network information technology has provided a new opportunity for online art education resources: this platform can be used to establish a network art education resource library integrating text, graphics, images, audio, video and so on. Network art education resources rely on the open, equal and decentralized network environment that is unique to modern information network tools to communicate across time and space, interact, and share information so as to develop students' personalities; this not only fully reflects the special attribute of art as a visual art, but also gives full play to the subjective and interactive advantages of teaching and learning. In recent years, the construction of online courses has been carried out continuously, and with the implementation of "distance education" in some schools, research on the use of online art education resources has been greatly promoted. Therefore, the strategies and methods of using network art education resources are a meaningful research topic.

The recommendation technology of educational resources combines research on educational resource platforms with research on recommendation algorithms. User behavior data in the educational resource platform is an important basis for the recommendation algorithm; at the same time, a reasonably designed and applied recommendation algorithm can help users screen resources efficiently and learn independently on the Internet education platform. Literature [2] proposes a personalized teaching resource recommendation method based on learner portraits. First, learner data are obtained by crawler technology and analyzed quantitatively; a two-way long short-term memory network with an attention mechanism is used for sentiment analysis, and a learner portrait feature model covering three dimensions (the learner's basic information, behavior and bullet-screen text) is constructed.
On this basis, the relationship model between teaching resources and learner portraits is established by using deep neural network; The model is used to predict learners’ new learning needs and provide personalized course recommendation services. Literature [3] studies and constructs a learner model suitable for personalized recommendation of online learning resources based on education and teaching theory and relevant data of learners in online education


platform. Taking the collaborative filtering recommendation method as the starting point, the collaborative filtering method is improved by integrating the static and dynamic features of the learner model, and the collaborative filtering recommendation method of online learning resources is established by integrating the learner model. However, there are still some shortcomings in the existing research results: the main purpose of user behavior analysis in education resource related research is to design reasonable courses, and the data analysis at home and abroad is aimed at obtaining the characteristics and laws of universal users’ learning behaviors. There is a lack of analysis of user interaction data, and it is impossible to establish personalized interest models for users. In the context of the Internet, the education resource platform reflects the collective wisdom of users, such as the user-defined tags that users add to education resources. However, the current research lacks semantic analysis of user tags and does not make full use of the collective wisdom. Therefore, this paper puts forward the personalized recommendation method of college art education resources based on deep learning.

2 Definition of Deep Learning Framework of Art Education in Colleges and Universities

Improving the deep learning algorithm for art education in colleges and universities requires constructing a basic learning architecture, an upgraded architecture and a complete learning architecture. This section focuses on these contents.

2.1 Basic Learning Architecture

Architecture refers to the frame structure formed by combining things that play a supporting role. This is consistent with the definition in building: architecture is the process and product of planning, designing and constructing buildings or any other structures. By analogy, learning architecture can be defined as the process and product of learning design. However, this definition is too broad and too vague to present the learning architecture to students and teachers. In computer science, architecture refers specifically to software architecture; as a metaphor it is also similar to architecture in the building sense, but it is more operational than the above definition of learning architecture. Specifically, a software architecture is a top-level design structure that describes the constituent elements of the software system, the interactions between the elements, the way elements are grouped and the constraints on that grouping; as the blueprint of the software system and its development, it plans the tasks that the design team needs to perform. That learning is a complex system has been recognized by the education community, for example in the recently popular view of the learning ecosystem based on ecological theory; on this basis, learning architecture can also be compared to software architecture and positioned as the top-level design structure of the learning system [4]. Like software architecture, learning architecture is only a framework structure that provides a blueprint for teaching design. It does not impose specific restrictions


on the “bricks and tiles” filled in the framework, such as the gender of students, the presentation form of learning materials, the shape and placement of desks, etc. Similar to software architecture, learning architecture also focuses on and is based on learning elements. In the teaching structure, the stability of the teaching process structure is usually characterized by the interaction of the four elements of teachers, students, content (or textbooks) and media. Considering that the learning architecture is a representation of the flexibility of teaching and learning, when recommending college art education resources, the learning architecture should be defined and constructed from the interaction of learning tasks, activities, processes, decisions and other influencing factors. The advantage of this is that the teaching structure and learning structure, as a dual feature of teaching and learning, can work in harmony with each other like the left and right channels in stereo, rather than either one or the other. This is of practical significance in classroom teaching: stability is conducive to teachers organizing teaching and improving teaching efficiency; Flexibility helps students participate in learning and improve learning depth. The recommended basic learning architecture of art education resources in colleges and universities is shown in Fig. 1.

Fig. 1. Basic Learning architecture

The direct reason why the learning architecture highlights the flexibility is to more fully meet the different needs of students, and let learners have greater control, that is, to reflect the personality and initiative of students. In the teaching mode formed based on the learning architecture, students can learn by any path according to their own preferences, and can also decide which learning tasks to choose and how to learn by themselves. This is consistent with the concept of learning structure. Learning task is the main learning content of deep learning, and students’ ability to decide learning content independently according to needs is an important dimension of flexibility. In deep learning, the effectiveness and interest of learning tasks are two attributes that teachers should focus on when providing optional content. The former ensures the efficiency of the classroom, while the latter helps students to participate deeply.


2.2 Learning Architecture Model Upgrade

For college art education, the significance of learning events can be demonstrated through the interaction of participation and materialization; time can be designed, but learning events emerge; the space in which students participate in activities is local, but affected by the overall situation; power (such as discourse power) is embodied through personal identification and the discussion of such identification. From this perspective, these elements define the possible method space for solving design problems: learning design is completed by resolving the issues raised by the four pairs of dual elements. In the cognition of the deep learning algorithm, the two dimensions of each pair of dual elements (such as participation and materialization) are not mutually exclusive; they can be balanced and coordinated to jointly affect the learning quality [5]. As the upgraded deep learning architecture is oriented to constructing a learning space for connectivist learning, the architecture mainly considers the learning environment, learning tools and content, enterprise-level background systems, other traditional related applications and other elements, as shown in Fig. 2.

Fig. 2. Upgrade model of deep learning architecture

It can be seen from Fig. 2 that learning environment mainly refers to online environment enabled by technology, such as learning management system, portal, MOOCs, simulation environment, etc. Learning tools and content are mainly tools and content embedded in the learning environment, in which tools are subdivided into general tools and specific tools. Enterprise level background systems refer to enterprise level systems that conduct data mining and learning analysis in the background. They provide data information about students’ learning conditions. Other traditional related applications mainly refer to the applications that are currently used and can provide specific services. They are usually isolated and cannot exchange data between systems. Since this architecture is based on open standards and services, it is also called “open architecture”. It can not only realize the “plug and play” integration of various software, but also realize the sharing of applications or data between different learning


spaces. The openness and fairness of contemporary art education are mainly reflected in access to educational resources and information; based on this understanding, the establishment of an online art education resource library and the strategies and methods for applying it are particularly important. Let α and δ denote two unequal learning behavior vectors, I_α the student learning characteristics based on coefficient α, I_δ the student learning characteristics based on coefficient δ, β the depth measurement coefficient, O the core marker coefficient, and I the unit of student learning characteristics. With these quantities, the deep learning expression for college art education can be defined as:

P = \frac{\beta \cdot (I_\alpha - I_\delta)^2}{\sqrt{O^2 - 1}\,|I|}    (1)
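Purely as an illustration of evaluating formula (1), the following sketch plugs in made-up values for the symbols defined above; the numbers carry no meaning beyond showing the arithmetic.

```python
import math

def depth_expression(beta, I_alpha, I_delta, O, I_unit):
    """Formula (1): P = beta * (I_alpha - I_delta)^2 / (sqrt(O^2 - 1) * |I|)."""
    return beta * (I_alpha - I_delta) ** 2 / (math.sqrt(O ** 2 - 1) * abs(I_unit))

# Hypothetical values chosen only to exercise the formula (O must exceed 1 for a real result).
print(round(depth_expression(beta=0.8, I_alpha=0.9, I_delta=0.4, O=1.5, I_unit=1.0), 4))
```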

Educational resources are defined by educational elements and refer to the various resources that support teaching activities. Network education resources are the virtualization of physical resources; more interestingly, the connotation of physical resources can be transformed, together with their electronic and virtual forms, into network resources. Of course, online education resources are not a passive transformation of physical resources; they are also innovative. In the description of recommendation algorithms, "items" are often used to describe the things to be recommended, and in different application contexts items can represent different entities.

2.3 Improvement of Learning Architecture Model

The complete deep learning architecture includes six dimensions to promote the structure of the learning experience, namely students' independence level, students' initiative, extended school year/study day, mixed-age/peer learning, community learning, and learning space, as shown in Fig. 3.

Fig. 3. A complete deep learning architecture


It can be seen from Fig. 3 that the level of independence of students is the basis for grouping. Compared with grouping by age or ability, this grouping can provide more autonomy support for students’ growth. Learning initiative focuses on how to make students “own” their own learning, such as setting meaningful goals and taking responsibility for their own learning and personal development. Extended school year/study day extends students’ immersive learning to school holidays, which provides more time and opportunities for students to use school space to achieve their goals. Mixed age/peer learning aims to encourage students to learn from and teach each other (mainly older students teach younger students), and promote the establishment of collaborative relationships and other effective relationships. Community learning provides students with more learning opportunities, such as lectures, field research, community services, and interaction with other members. The learning space provides a supportive environment for student centered learning experiences. In the process of recommending college art education resources, the learning methods under the in-depth learning framework are personalized and reflect students’ initiative: the learning experience is customized according to students’ learning path and pace, and the education of each student is jointly developed and shaped by students and teachers. Students spend half of their time learning through projects. All projects are interdisciplinary and focus on exploring solutions to real problems. Such a learning framework can promote the production of in-depth learning results. The efficiency of classroom teaching is one aspect that teachers pay most attention to. The classroom teaching oriented to deep learning should pay more attention to classroom efficiency, otherwise it is difficult to have time to cultivate high-level abilities and their transfer applications. In deep learning, classroom efficiency can be reflected by the effectiveness of learning tasks, which is generally measured by the competency to promote students’ development towards goals [6]. Goals can guide teachers to determine which learning tasks are (or are not) important. When they are clearly presented to students and the designed tasks focus on these expected goals, learning tasks are highly effective. The open evaluation indicators can let students know clearly what performance they should eventually have. With the instant feedback function of the smart classroom environment, students can know their learning progress and current performance in real time, which helps students to make self adjustment in time. The evidence used to evaluate students’ performance, in addition to traditional test scores and assignments, is more important than performance evaluation, which points to deep learning.

3 Personalized Recommendation of Art Education Resources

With the support of the deep learning algorithm, improving the personalized recommendation method for college art education resources also requires calculating resource similarity based on the distribution of education weight values and developing a recommendation list by combining the relevant education resource vectors.

3.1 Distribution of Education Weight

On a platform for college art education resources, user behaviors mainly include watching course videos, participating in course discussions, browsing course


chapters, participating in course forum interactions, and so on. Each of these user behaviors generates related detailed behaviors; for example, when a user watches a course video, detailed behaviors such as the exit point, the fast-forward and rewind positions, and the number of viewings are produced. The user behavior studied in this paper is considered mainly from the perspective of behavior frequency statistics: the counts of different user behaviors are taken as an important basis for estimating user-item scores, and the detailed behavior data are not discussed. Based on the standardized records of user-item behavior counts, the behavior information entropy can be calculated according to the definition of information entropy. Its meaning is that the more uniform the distribution of behavior counts, the greater the behavior information entropy and the weaker the personalized characteristics of the behavior. In determining user behavior weights, the larger the information entropy value u of a learning behavior, the smaller the corresponding behavior weight Y_u. When the learning behavior vector is a constant R, the solution expression of the weight value Y_u can be defined as:

Y_u = -\frac{\sum_{\chi=1}^{+\infty} \log R \times P \cdot u}{\sum_{\chi=1}^{+\infty} R_1 \cdot R_2 \cdots R_n}    (2)

In the formula, χ represents the information entropy index based on the deep learning algorithm, and R_1, R_2, ..., R_n represent the tag coefficients of n randomly selected art education resources, where the inequality condition R_1 ≠ R_2 ≠ ... ≠ R_n always holds. In art teaching practice, the computer network can convey rich image information to students; these flat and three-dimensional combinations of sound, light and color, with their multiple forms, multiple points of contact and all-round visual language, greatly broaden students' vision, enrich their imagination, stimulate their thinking and improve their creativity. Modern computer multimedia teaching can play a unique role in painting and design, craft production, the appreciation of works and other aspects [7]. Network information resources have greatly broadened the content of art teaching and made the resources more abundant: the knowledge packages of network resources have huge capacity, cover a wide range of fields, and form a treasure house for learners to gain knowledge through inquiry learning; their unlimited openness and sharing of resources make up for the shortcomings of other media. On the basis of formula (2), let w represent the established allocation parameter of art education resources, φ the allocation coefficient based on the deep learning algorithm, and q the core allocation value; with these quantities, the art education weight allocation expression can be derived as:

E = \frac{1 - Y_u}{q - \varphi w}    (3)
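The idea described above — behavior-frequency entropy with an inverse relationship to the behavior weight — can be sketched as follows. This is an illustrative interpretation, not a literal implementation of formulas (2) and (3); the behavior counts and the simple "1 minus normalized entropy" weighting are assumptions.

```python
import numpy as np

def behavior_entropy(counts):
    """Information entropy of a user's behavior-count distribution (uniform counts -> maximal entropy)."""
    p = np.asarray(counts, dtype=float)
    p = p / p.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def behavior_weight(counts):
    """Weight inversely related to entropy: uniform (non-personalized) behavior gets a small weight."""
    h = behavior_entropy(counts)
    h_max = np.log2(len(counts))            # entropy of a perfectly uniform distribution
    return 1.0 - h / h_max if h_max > 0 else 0.0

# Hypothetical counts of (video views, discussions, chapter browses, forum posts) for two users.
focused = [40, 2, 3, 1]                      # strongly skewed behavior -> larger weight
uniform = [10, 10, 10, 10]                   # uniform behavior -> weight close to 0
print(round(behavior_weight(focused), 3), round(behavior_weight(uniform), 3))
```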

Network information resources, especially images, animations and sounds, make these learning resources more vivid and transform the static beauty solidified


in traditional textbooks into dynamic beauty, fully displaying the artistic beauty, such as artistic conception, music and poetry, contained in textbooks. Traditional culture is often static, while modern media art is mainly dynamic, and among modern media art the network is the form that best represents its epochal character. With the development of the Internet there are more and more web pages, and static text, pictures and music are displayed on the network platform as new dynamic images produced with animation software.

3.2 Resource Similarity

Resource similarity refers to the degree of similarity between the college art education resources to be allocated. In the cognition of the deep learning algorithm, the higher the similarity, the greater the resource recommendation pressure the campus network host has to bear; conversely, if the similarity is relatively low, the campus network host bears less recommendation pressure. Today's college art education has never really moved beyond a single realist education model, which leads students to misunderstand related concepts of art and to regard it merely as a professional skill without a deeper and more comprehensive understanding of the essence and connotation of art. This education model even affects training objectives and teaching thinking: students' perception becomes dull, their thinking rigid and inflexible, and their learning motivation gradually decreases, let alone the cultivation of innovation ability, which ultimately leaves students confused and without goals of their own. Teachers' teaching forms and methods are too old and uniform, teaching efficiency is not high, and the teaching approach is monotonous and outdated, which leads to low learning initiative and enthusiasm among students and a lack of independent thinking ability. Most art teachers rely on lecturing when teaching art theory courses; the lack of interaction with students makes it difficult to stimulate students' initiative and makes the classroom atmosphere depressed, boring and dull. The teaching process pays too much attention to mastering basic modeling skills and thus neglects the cultivation of students' creativity. This leads to the accumulation of a large number of art education resources, which not only burdens students' daily learning but also increases the resource recommendation pressure borne by the campus network host. The orientation of college art education is also biased, and it is not given due attention: although art education in colleges and universities is attracting more and more public attention, many institutions still treat it as a "foil" or auxiliary discipline and invest very little in it. This has long distressed some college art education researchers, because if it continues the development of college art education will be very unfavorable and will eventually become an empty shell without any substantive content [8].
Due to the insufficient investment of art education funds in colleges and universities, and then the treatment of art teachers


cannot be improved, which leads to a lack of energy from teachers, no enthusiasm for creation and no enthusiasm for continuing to study and explore college art education, so that their teaching and scientific research abilities stagnate, which greatly affects the development of art education in colleges and universities. Let γ and ϕ represent two independent indicators for the personalized recommendation of college art education resources; the calculation formula of the personalized recommendation index of college art education resources is then:

A = E \times \frac{S_\gamma \cdot S_\varphi}{\|S_\gamma\| \cdot \|S_\varphi\|}    (4)

In the formula, S_γ represents the recommendation coefficient based on coefficient γ, ‖S_γ‖ the step length of the recommendation instruction execution based on S_γ, S_ϕ the recommendation coefficient based on coefficient ϕ, and ‖S_ϕ‖ the step length of the recommendation instruction execution based on S_ϕ.
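Formula (4) has the shape of a cosine similarity scaled by the weight term E. A minimal sketch of that reading follows; treating S_γ and S_ϕ as feature vectors of two resources (or of a resource and a user interest profile) is an assumption made for illustration.

```python
import numpy as np

def recommendation_index(E, s_gamma, s_phi):
    """Formula (4) read as a scaled cosine similarity: A = E * (s_gamma . s_phi) / (||s_gamma|| * ||s_phi||)."""
    s_gamma, s_phi = np.asarray(s_gamma, float), np.asarray(s_phi, float)
    denom = np.linalg.norm(s_gamma) * np.linalg.norm(s_phi)
    return 0.0 if denom == 0 else E * float(s_gamma @ s_phi) / denom

# Hypothetical feature vectors for two art education resources and a weight E from formula (3).
print(round(recommendation_index(0.9, [1.0, 0.2, 0.0, 0.5], [0.8, 0.1, 0.1, 0.6]), 3))
```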

College art education belongs to art, and art is an important part of human culture: it is both the carrier and the product of culture. The government should urge colleges and universities to draw on local materials for art teaching content and to carry out art teaching activities according to local conditions. More importantly, these art forms are closely related to the daily life of local students, which helps to stimulate students' enthusiasm for learning. Students' creative thinking, autonomy and research ability are greatly improved in art teaching practice, and at the same time this plays a significant role in the inheritance and development of traditional culture.

3.3 Recommended List

The recommendation list determines the campus network host's ability to distribute college art education resources. A complete recommendation list includes named nodes, coded nodes and statistical indicators. Under the deep learning algorithm, the following issues should be considered when making the personalized recommendation list of college art education resources:

(1) Provide non-personalized recommendations, such as a popularity ranking, before user behavior data are available, and switch to personalized recommendation once such data have been collected.
(2) Use registration information, such as age and gender, to make coarse-grained recommendations.
(3) Ask users to give feedback on some items when registering, so as to collect user interest information.
(4) Classify users according to their registration information; the classification can be a multi-class one.
(5) Avoid a single art education resource occupying the recommendation path of the campus network host for a long time.


Let G represent the core recommendation index of college art education resources; its solution formula is:

G = \frac{1}{\varphi} A \cdot (f - 1)^2    (5)

In the formula, φ represents the action strength of the deep learning algorithm and f the personalized allocation vector. On the basis of formula (5), let ε represent the transmission coefficient of art education resources in the campus network environment, l_1 the accumulation of named nodes, l_2 the accumulation of coding nodes, L the statistics of college art education resources per unit time, and j the statistical coefficient. Based on these quantities, the function expression of the personalized recommendation list of college art education resources can be defined as:

K = \sum_{\varepsilon=1}^{+\infty} G \cdot \frac{l_1^2 + l_2^2}{j \times |L|}    (6)

Table 1 reflects the necessary constraints for the recommendation list.

Table 1. Recommended list of university art education resources

The name of the node | Content
Named nodes | Decide the naming method of fine arts education resources in colleges and universities
Coding node | Code the resources of college art education
Statistical indicators | Record the recommended contents of fine arts education resources in colleges and universities
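To connect formulas (5)–(6) and Table 1 with something concrete, the following sketch ranks resources by a score and assembles a top-N list whose entries carry the named node, coded node and statistical fields of Table 1. The scoring field, resource names and cold-start fallback are illustrative assumptions, not the exact construction used by the method.

```python
from dataclasses import dataclass

@dataclass
class Resource:
    name: str      # named node: how the resource is named
    code: str      # coding node: the resource's code
    score: float   # e.g. a per-student recommendation index such as A from formula (4)

def recommend(resources, popularity, has_behavior_data, top_n=5):
    """Assemble a top-N list; fall back to a popularity ranking before behavior data exist (cold start)."""
    if has_behavior_data:
        ranked = sorted(resources, key=lambda r: r.score, reverse=True)
    else:
        ranked = sorted(resources, key=lambda r: popularity.get(r.code, 0), reverse=True)
    # Each entry carries the three kinds of nodes listed in Table 1.
    return [{"named node": r.name, "coding node": r.code, "statistical indicator": round(r.score, 3)}
            for r in ranked[:top_n]]

catalog = [Resource("Chinese painting appreciation", "ART-001", 0.91),
           Resource("Oil painting techniques", "ART-002", 0.86),
           Resource("Sculpture basics", "ART-003", 0.47)]
print(recommend(catalog, popularity={}, has_behavior_data=True))
```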

Informatization is a major trend in the world's economic and social development, and information technology, with network and multimedia technology at its core, has become a creative tool for expanding human capability. Applying information technology to art learning is not merely the introduction of technology but an all-round reform: applying network technology to learning has changed the traditional learning mode and method, and the popularity of the network has brought education and teaching to a higher stage of development. Art teaching in the network environment has its own distinctive style, with diversity, flexibility and visibility, and network teaching has obvious advantages in cultivating students' innovation ability.

3.4 Analysis of Personalized Recommendation Characteristics

Personalized education, premised on discovering and respecting students' personality differences, provides learners with diversified educational resources conducive


to their own personality development for their own choice, promotes the improvement of learners’ personality to the maximum extent, and finally enables learners to develop their personality fully and freely. Taking college art education resources as the research objective, we can see that personalized recommendation methods have three main characteristics: democracy, pertinence and diversity: Democracy is one of the characteristics of individualized education. The democracy of individualized education is mainly reflected in giving certain rights, making it maintain its relatively democratic position in the education process, so that students’ personality can be maintained, developed and improved. Personalized people are in the main position in learning. Only after recognizing the subjectivity of learners can personalized education be possible [9]. In personalized education, teachers are not only teachers but also friends of students. Students are not passively accepting knowledge, but actively learning. The democracy of personalized education makes personalized education pay more attention to shaping a harmonious environment, so that learners can fully develop their personality in a democratic learning environment. Targetedness refers to providing democratic education suitable for individual characteristics for learners with different personalities. As mentioned above, everyone is unique, and the differences between people are shown in gender, age, personality and other aspects. Personalized education is based on the uniqueness of people, so it should be different from the unified education. It is a negation of the unified education thought. Personalized education attaches importance to the individual characteristics and unique values of people. When implementing personalized education, it should be targeted to provide different learning environments, learning contents and learning methods for different learners. Diversity means that when implementing individualized education, educational methods, educational systems, educational evaluation and educational content should be diversified. Learners can choose their own educational content and methods from different educational contents and methods according to their different personality characteristics [10]. As far as educational evaluation is concerned, different students should evaluate with different standards; As far as the educational content is concerned, the same knowledge point should be expressed in multiple forms, so as to meet the learners with different personalities. Personalized education emphasizes the pertinence, diversity and presentation of learning content. In this paper, students’ personality characteristics are taken into account, combined with collaborative recommendation, to provide learners with appropriate learning content, and presented in a preferred way.

4 Example Analysis

4.1 Experimental Preparation

540 GB of art education resource data to be tested are collected from the public network platform of art education resources of a university and stored in the art education resources database. The information samples obtained in the experiment are divided into two parts: one part serves as the experimental object of the experimental group, and the other part is the


experimental object of the control group. The campus network platform structure and experimental equipment are shown in Fig. 4 and Table 2 respectively.

Fig. 4. Campus Network platform

Table 2 records the selection of the relevant experimental equipment.

Table 2. Selection of experimental equipment

Campus network host | GTX1650
The operating system | Windows 10
Data transceiver | AMD eight-core A9
Data processing unit | RTX3060
Data analyser | RTX2060

In order to ensure the fairness of the experimental results, apart from the different experimental methods, the selection of all other instruments and equipment in the experimental group and the control group is completely consistent.

4.2 Test Steps

The specific implementation process of this experiment is as follows:

Step 1: Select the personalized recommendation method of college art education resources based on deep learning as the application technology of the experimental group;

Step 2: Select the conventional recommendation strategy as the application technology of the control group;


Step 3: Use the experimental group method to control the campus network platform, and record the proportion of learning allocation of the selected art education resources under this method;

Step 4: Reset the readings of the experimental equipment to zero, use the control group method to control the campus network platform, and record the proportion of learning allocation of the selected art education resources under this method;

Step 5: Compare the proportion of learning allocation of art education resources between the experimental group and the control group, and summarize the rules of this experiment.

4.3 Data Processing and Experimental Results

Figure 5 shows the specific numerical changes in the learning allocation ratio index of art education resources under the recommendation methods of the experimental group and the control group.

Fig. 5. Index of the proportion of learning allocation of art education resources

Experimental group: when the category of art education resources is 4, the proportion of learning allocation in the experimental group reaches the maximum of 96.71%; When the categories of art education resources are 8, 9 and 10, the minimum value of the learning allocation proportion index of the experimental group is 91.04%, and the difference between the two is 5.67%. Throughout the experiment, the average value of the learning allocation proportion index of the art education resources of the experimental group is always greater than 90%. Control group: when the category of art education resources is 10, the proportion of learning allocation in the control group reached the maximum of 55.02%; When the category of art education resources is 1, the minimum value of the learning allocation proportion index of the control group is 35.99%, with a difference of 19.03%, which is greater than the difference level of the experimental group. However, the average value


of the learning allocation proportion index of the control group's art education resources is always lower than that of the experimental group throughout the experiment.

To sum up, the conclusions of this experiment are:

(1) Under the conventional recommendation strategy, the learning allocation proportion index of art education resources is always relatively low, which means that this method has limited ability to solve the problem of uneven distribution of education resources and cannot guarantee the campus network host's personalized recommendation ability for art education resources.

(2) The recommendation method based on deep learning greatly improves the learning allocation proportion index of art education resources, which means that it solves the problem of uneven distribution of education resources better and is stronger at ensuring the campus network host's personalized recommendation ability for art education resources.

The recommendation method designed in this article has these comparative advantages because it first designs the deep learning structure of college art education, which provides a good structural basis for subsequent recommendation; it then calculates the similarity of educational resource information according to the principle of education weight distribution, which helps improve the accuracy of resource recommendation; and, based on the deep learning algorithm and the resource vectors to be allocated, it formulates the personalized recommendation list of college art education resources, finally improving the learning allocation proportion index of art education resources.

5 Conclusion
In this information age of education, overwhelming quantities of art resources fill the network. In order to use these resources more effectively, strategies and methods for using network art education resources are particularly important. To improve the effectiveness of the campus network host's recommendation of art education resources, a certain screening process is needed to improve sensitivity to excellent resources, so as to make better use of network art education resources and improve students' learning efficiency. In general, the personalized recommendation method of college art education resources based on deep learning can establish a model of the user's interest in knowledge points from the analysis of user behavior, and recommend resources suited to the user's current learning state according to this model. However, the proposed algorithm still has shortcomings. Future research work mainly includes the following aspects:
(1) Optimize the interest model of the deep learning algorithm, and try to introduce other parameters to express students' interest characteristics.
(2) Optimize the personalized recommendation method when art education resources are sparse, and modify the deep learning algorithm to improve the quality of the recommendation list under low sparsity and thus improve recommendation accuracy.


(3) Optimize the time interval of students' interest transfer between art knowledge points, and establish a dynamic time-interval adjustment scheme suited to students' personalized learning behavior.
(4) Improve the deep learning algorithm model so that it has parallel processing capability and can maintain a high level of recommendation accuracy when facing massive data and student users.
Acknowledgement. Fundamental Education Research Achievement Cultivation Project of Fuyang Normal University: Research on the Path to Improve the Teaching Ability of Fine Arts Students (2017JCY12).


Global Planning Method of Village Public Space Based on Deep Neural Network
Xiaoli Duan1,2(B) and Sen Li3
1 China Academy of Art, Hangzhou 310009, China
[email protected]
2 Haikou University of Economics, Haikou 571132, China
3 School of Fine Arts, Fuyang Normal University, Fuyang 236037, China

Abstract. The current overall planning method of village public space is inefficient in land integration, resulting in poor planning effect. In order to solve this problem, a global planning method of village public space based on a deep neural network is proposed. Firstly, hidden neurons are determined from the data information obtained by the deep neural network, the green coefficient and spatial comfort are extracted, the change value of the load curve is calculated, and the data are filtered. Then, land planning is realized through data analysis, data preprocessing, a rural land planning and mapping database and its update mechanism, the coefficient of variation is determined, and the standard deviation is calculated. Finally, global planning is realized through the neural network. The experimental results show that the proposed method can effectively improve the efficiency of land integration and produce a planning scheme that satisfies users.
Keywords: Deep Neural Network · Village Public Space · Spatial Global Planning · Global Planning Method

1 Introduction
Social and economic development drives the development of villages. Village space is gradually expanding and its layout is becoming more and more complete. While this brings convenience to people, related problems are also emerging, and the contradiction between resources and development is becoming more and more prominent. How to solve the problems brought about by the development of villages has become an urgent issue. In the process of village construction, we should consider not only the ecological environment but also socio-economic development. The ecological environment layout of villages has become a main trend of world development. China has invested a large amount of human and material resources in establishing ecological villages, and has made certain achievements in economic development, social prosperity and ecological protection. Regarding the spatial layout of the village ecological environment, scholars in relevant fields have conducted in-depth research and proposed the spatial layout of "San Sheng", a mountainous characteristic town, based on the constraints of micro-geomorphology.


Under the strategic requirements of industrial prosperity and ecological livability, various data have been determined, ecological functions have been studied, and mountain micro-geomorphology has been used as a constraint to complete the optimization of the spatial layout. This method can distribute the unique ecological space and agricultural space of villages and towns, better achieve spatial connectivity and aggregation, optimize the layout of the integrated waterfront spatial pattern, and ensure that urban and rural areas can achieve green transformation through endogenous green theory [1]. Traditional spatial layout optimization methods mostly focus on the layout of a specific area and lack global thinking. Based on deep neural network analysis, this paper designs a new village ecological environment spatial layout system.

2 Feature Extraction of Village Public Space Global Planning
Public space architecture integrates the two concepts of ecology and architecture. The principle of public space construction is to use green, environmentally friendly materials to the greatest extent, based on the building demand during design, so that green buildings can live in harmony with nature, which is consistent with the concept of sustainable development advocated by contemporary society. The design process of public space buildings mainly involves the geographical and environmental factors of the building space, and covers space planning, design, construction and operation. In this process, the laws of ecological change and the rules of architecture should be considered to finally form a complete structure of green building space [2]. The characteristics of village public space are new energy, environmental protection and people orientation. The environmental protection feature of village public space refers to the disposal of construction waste in the design of a green building space: according to the space design requirements, the waste is reused to reduce environmental pollution. It also means that the raw materials used in the space assembly process are of the most environmentally friendly kind, and must not be chosen to improve design profit at the cost of environmental pollution. The people-oriented feature of space architecture means that the design should be based not only on the concept of green ecology and low-carbon environmental protection, but also on the design principles of residents; otherwise the cart is put before the horse, which does not conform to the construction principles of the architectural field. This paper studies a new global planning method of village public space based on a deep neural network, extracts village features, and establishes a deep neural network as shown in Fig. 1.
The deep neural network is used to collect relevant characteristics, calculate characteristic parameters, calculate the corresponding measurement function index parameters, and judge whether the relevant planning is suitable, as shown in Formula (1):

A = b + 2 / (r(s − w))    (1)


Fig. 1. Structure of Deep Neural Network (input layer, hidden layer, output layer)

In Formula (1), A represents the value of the function index parameter; b stands for the upper limit of the test; r represents the optimization coefficient of the functional parameters; s represents the scope of testing; w represents the lower test limit. After the above calculation, the corresponding test function index parameters are determined, and the calculated index parameters are used to evaluate the basic environmental conditions of the test. Once the value range is exceeded, the test environment is abnormal and must be re-selected. If the measurement results are within the value range, the test environmental conditions are relatively stable and suitable for spatial planning, and subsequent operations can be carried out. Based on the construction of the neural network detection algorithm, an intelligent feature extraction model based on the deep neural network is established. In the extraction process, an independent intelligent neuron RBM is set; only when the RBM neuron reaches the inactive condition can the true integrity of the measured data be determined. After detection, the hidden neurons can be determined according to the data information obtained. The calculation formula is shown in Formula (2):

M = √((j + 2f) / 2)    (2)

In Formula (2), M represents the defined value of the hidden neurons in the detection model; j stands for the functional parameter index; f represents the function execution coefficient. After the above operations, the corresponding defined value of the hidden units in the detection mode is obtained; its value range is used to build the actual detection mode and remove the influence of the RBM neurons, so that the entire test mode has higher flexibility and applicability. At the same time, the hidden neurons can also use the deep neural network to calculate the hidden function, which further expands the test scope of the method. For the detection of planning parameters, it is necessary to collect the transmission function parameters of the village public space planning parameters and test them with the deep neural network function detection model.


The detection of the transmission function of the village public space planning parameters is divided into two indicators, namely the green coefficient and space comfort [3]. First, detect the green coefficient. Due to external pollution, the green coefficient will be affected to some extent. The mathematical expression of green coefficient detection is shown in Formula (3):

S = √ρ / U = (1/U) · √(E/T)    (3)

In Formula (3), T represents the parameter coefficient of village public space planning. The calculation of the space comfort P is expressed by Formula (4):

P = M² / (Z1 + Z2)    (4)

For the detection of the processing function of the village public space planning parameters, the data to be processed are put into the detection model so that the processing function of the village public space planning parameters runs. After detection, the processing rate of data in the same channel is checked. The specific formula is as follows:

L = (2β + 1/2)^(−O)    (5)

In Formula (5), L represents the actual processing capacity in processing function detection; β represents the measurement range of the processing function data; O represents the operation parameters of the village public space planning parameters. From the above calculation, the processing rate of the processing function in actual operation is finally obtained; the processing rate is analysed and compared with a set threshold. If the processing rate is always higher than the threshold, the processing effect is good; otherwise, the processing capacity is insufficient. The detection of abnormal information in the village public space planning parameters generally requires two techniques, namely longitudinal difference testing and longitudinal difference identification of abnormal data. The whole detection process requires two benchmark indicators: on the curve to be measured and the daily load data curve, the average of the differences between sampling nodes; and on the daily change characteristic curve of the curve to be measured, the change rate at the sampling points [4]. The daily load curve and daily load characteristic curve representing the test shall be established. Take a point on the daily load curve to check point i on the daily load curve to be tested, where i ∈ {1, 2, · · · , N} and N is the number of sampling points. Establish the maximum load value, compare it with the load value corresponding to point i of the daily load characteristic curve, count the load values at point i of the load curve to be tested and of the daily load characteristic curve, and analyse their proportion. The difference d between the two curves is expressed by the average of the differences between the sampling points of the curve to be tested and the daily load characteristic curve.


After calculating the change value of the load curve, measure the change rate at the sampling points, check whether the difference between the change values of the two curves equals the maximum deviation, analyse whether the change value is within the reasonable range, and determine whether the load curve is an abnormal load curve. If any of the above requirements is not met, the curve to be measured is an abnormal curve, and the load value at moment i is an abnormal value. By calculating the proportion of abnormal values in all abnormal data, the screening rate of abnormal data of the village public space planning parameters can be calculated [5].
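As a rough illustration of the screening logic described above, the following sketch flags a daily load curve as abnormal when its mean point-wise difference from the characteristic curve, or the deviation of its change rate, exceeds a threshold; the threshold values and function names are assumptions for illustration only.

```python
import numpy as np

def is_abnormal_curve(curve, reference, diff_threshold, rate_threshold):
    """Flag a daily load curve as abnormal by comparing it with the characteristic curve.

    curve, reference: load values sampled at the same N points.
    diff_threshold: allowed mean absolute difference between the two curves.
    rate_threshold: allowed deviation of the point-to-point change rate.
    """
    curve, reference = np.asarray(curve, float), np.asarray(reference, float)
    d = np.mean(np.abs(curve - reference))                            # mean difference over sampling points
    rate_dev = np.max(np.abs(np.diff(curve) - np.diff(reference)))    # deviation of change rates
    return d > diff_threshold or rate_dev > rate_threshold

def screening_rate(curves, reference, diff_threshold, rate_threshold):
    """Proportion of curves flagged as abnormal, i.e. the abnormal-data screening rate."""
    flags = [is_abnormal_curve(c, reference, diff_threshold, rate_threshold) for c in curves]
    return sum(flags) / len(flags)
```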

3 Overall Planning of Village Public Space Land
Integrate and manage the various types of village land surveying and mapping data, provide users with corresponding data retrieval functions, and ensure that the data retrieved by users are well organized, so as to achieve high-quality planning. In this paper, the data types of village land planning and mapping are divided by the deep neural network into spatial data and non-spatial data. The spatial data are mainly embodied in CAD, while the non-spatial data can be embodied in any plane format. Most surveying and mapping data of village land planning belong to spatial data, so it is necessary to focus on the characteristics of the CAD data format. In the data analysis program, the geometric information and entity data contained in CAD data support the display and expansion functions of the dwg format; and because CAD data have more complex attributes than other plane data, the attribute information of the corresponding data must be extracted in depth, so a data attribute table that can accommodate a large number of CAD data attributes is set up in the data analysis program. The training process based on the deep neural network is shown in Fig. 2.

Fig. 2. Training process based on deep neural network (input layer: data input and pre-training from the database; hidden layer: error transfer and reverse iteration; output layer: output and display)

In this paper, the data preprocessing program of the GIS-based village land planning surveying and mapping data integration software system is designed. In this program, all village land planning surveying and mapping data are divided again into control planning data, overall control planning data, constructive planning data, land planning data and special planning data. These data are spatial data; therefore, only after passing the audit of the data analysis program can they be transferred into the data preprocessing program. The conversion process can be divided into a data extraction step, a format conversion step, an organizational restructuring step, a classification code replacement step and a quality audit step.


The data extraction step locks the plannable data in the village land planning surveying and mapping data, extracts the data hierarchically, and matches the extracted data to geographical elements [6, 7]. The format conversion step repositions the initial data according to the corresponding level after extraction, that is, it adjusts the attribute structure of the data. This step places certain requirements on the integrity and standardization of the data; otherwise the data remain in an error mode, which prevents code conversion. Figure 3 shows the data format conversion steps.

Fig. 3. Data Format Conversion Steps (data extraction → format conversion → organizational restructuring → classification code replacement → quality audit)

The organizational restructuring step sets the data hierarchy and naming according to the village land planning surveying and mapping standards. The classification code replacement step replaces the classification codes of all data elements entering this step with GB/T 13923–2006, which may cause some codes to be inapplicable or unrecognized during replacement. In this case, the codes should be expanded or replaced with reference to other village land planning and mapping data code standards, so as to ensure that no village land planning and mapping data are omitted or lost. The last step is to review the data whose codes have been replaced, mainly in terms of data attributes, content, format type and integrity. In addition, some data also need to be checked to determine whether the location overview, edge clarity and other information contained in the data are clear and complete [8]. The design of the village land planning and mapping database allows data integration to be achieved more quickly and accurately, and also facilitates the user's data retrieval. The database designed in this paper adopts a working mode in which centralized construction and operation management act synchronously, which can handle both the data already integrated in the traditional database and the newly added data to be integrated.


In designing the database, this paper always adheres to the principles of security, standardization, practicality and maintainability, so as to ensure that the geographic spatial data of village land can be extracted and integrated sustainably and without leakage. The management software of the database is Oracle R2: all attributes in the database are kept in Oracle, and all spatial data are stored in the Oracle 11g data repository. This database has higher flexibility and more accurate computing power, and has lower management costs than other types of databases [9, 10]. In the structural design of the database, this paper pays particular attention to the design of the basic village land planning and mapping database. In the basic database, nearly 100 layers are used to reflect the characteristics of the dataset. The main layer names and descriptions are shown in Table 1.
Table 1. Layer Name and Description

Layer Name | Layer representation
MAP500-JTSS-PT | Point traffic facilities
MAP500-JTSS-LN | Linear traffic facilities
MAP500-JTSS-PY | Surface traffic facilities
MAP500-JTYS-LN | Along the road
MAP500-JTYS-LN | Along the railway line
MAP500-GXSB-PT | Pipeline equipment point
MAP500-GX-LN | Pipeline
MAP500-DM-LN | Geomorphic surface
MAP500-ZB-PY | Vegetation surface

Finally, the updating mechanism of the database is redefined. Although the traditional village land planning and mapping database is also updated in real time, it still has some defects in the accuracy and practicality of planning and mapping data. The database update mechanism designed in this paper is based entirely on the GIS data management system, which updates the data in the traditional database rapidly and comprehensively. This paper also studies the matching degree between the basic spatial data and the thematic geographic database and the data update mechanism, and proposes to improve the update efficiency of the database while ensuring the normal use of all village land planning and mapping data. Considering that most data in the database are in dwg format, and that combining and identifying data requires a unique code, the new update mechanism uses the data code as the update node to realize the interconnection between code elements and attributes in the GIS database [11]. In an environment where the codes and GIS database attributes are interconnected, the original format in the database and the data format to be updated can be directly transformed and synchronized with the attributes through the data processing module, which avoids a large amount of format editing. The data update mechanism also contains several function modules, whose main function is to load the data in dwg format into shp format data (a minimal conversion sketch follows).
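The dwg-to-shp loading performed by these function modules can be sketched roughly with geopandas; since GDAL/OGR has no native DWG driver, the sketch assumes the drawing has already been exported to DXF, and the file names are hypothetical.

```python
import geopandas as gpd

def load_drawing_to_shp(dxf_path, shp_path, target_crs="EPSG:4326"):
    """Load a CAD drawing layer and store it as an ESRI Shapefile for the temporary database.

    GDAL/OGR reads DXF natively; a .dwg file is assumed to have been exported to .dxf first.
    """
    gdf = gpd.read_file(dxf_path)  # geometry plus attribute table from the drawing
    gdf = gdf.set_crs(target_crs) if gdf.crs is None else gdf.to_crs(target_crs)
    gdf.to_file(shp_path, driver="ESRI Shapefile")
    return len(gdf)

# Hypothetical usage:
# n = load_drawing_to_shp("village_parcels.dxf", "village_parcels.shp")
```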


The processed shp format data can be directly entered into the temporary database, where spatial data conversion, data content and type editing, attribute collection, coordinate system collection, data update and data storage are then carried out.
3 Global Planning of Village Public Space Based on Deep Neural Network
The deep neural network is used to distribute the space through continuous polygons. The distribution process follows the nearest-neighbour principle, and the deep neural network optimization results are applied to grid optimization, which ensures that the selected seed points are distributed more uniformly while improving the utilization rate and reducing the work cost consumed in the spatial distribution process. The village ecological environment spatial layout system established in this paper represents the village environment as a two-dimensional plane space and expresses the discrete points inside the space as a set, as shown in Formula (6):

Z = {z1, z2, · · · , zn}    (6)

In Formula (6), Z represents a set of discrete points. It should be noted that no two points of the assembly inside the plane are equal. The element zi (1 ≤ i ≤ n) inside the plane is extracted, and after determining the various adjacency relationships in space, the Voronoi polygon tzi (1 ≤ i ≤ n) is formed. The calculation formula is as follows:

tzi = { s : ‖s − zi‖ ≤ ‖s − zj‖, zi, zj ∈ Z, i ≠ j }    (7)

In Formula (7), zi represents the generator element inside the Voronoi polygon tzi. According to Formula (7), the distance from any point s inside the Voronoi polygon tzi to the generator zi is less than or equal to the distance from s to any other generator zj. Once this law is satisfied, a plane can be formed and discretized, and the two-dimensional plane space partition can be obtained through continuous integration and distribution. After discretization, point sets are distributed in different ways in the plane space, namely uniform distribution, random distribution and aggregated distribution. The uniform distribution mode is shown in Fig. 4: after uniform distribution, the plane where the point set is located forms multiple Voronoi polygons; the area differences of these polygons are small, and they are relatively evenly distributed in the same plane. The random distribution is shown in Fig. 5: after random distribution, the point set lacks a fixed distribution mode and there are some differences in area; some points are relatively uniform, while others are relatively clustered. The aggregated distribution mode is shown in Fig. 6: aggregated distribution concentrates point sets in the same location; the aggregation area is not fixed, with some aggregation areas being relatively large and others small. The Voronoi polygons are quantized, the polygon areas are determined, and different distribution patterns are obtained according to the change in area.


Fig. 4. Point Set Uniform Distribution Mode

Fig. 5. Random distribution pattern of point set

In order to judge the area change status of the Voronoi polygons more accurately, the coefficient of variation is introduced in this paper to describe the continuous change of the Voronoi polygon areas during the formation process. The calculation formula of the coefficient of variation is shown in Formula (8):

ς = σ / s    (8)

In Formula (8), σ represents the standard deviation over the n Voronoi polygons, and s represents the mean value obtained by statistical analysis of the Voronoi areas. The standard deviation is calculated as follows:

σ = √( (1/n) Σ_{i=1}^{n} (xi − x̄)² )    (9)


Fig. 6. Aggregation distribution mode

In Formula (9), xi represents the area of the i-th Voronoi polygon in the plane space, and x̄ represents the mean of these areas. The average value is calculated as follows:

s = x̄ = (1/n) Σ_{i=1}^{n} xi    (10)

According to Formulas (8)–(10), the calculation formula of the polygon area variation is:

ς = √( (1/n) Σ_{i=1}^{n} (xi − x̄)² ) / ( (1/n) Σ_{i=1}^{n} xi )    (11)

The unit mean value in the plane can be judged according to the coefficient of variation, and the degree of dispersion and the distribution state can be determined at the same time. The distribution of spatial points can be determined from the value of the coefficient of variation, and whether the distribution pattern is uniform, random or clustered can be judged. Because the division method is artificial, some problems arise: for geological reasons, the edges of the Voronoi diagram may consist of incomplete polygons. The coefficient of variation of the trees is therefore calculated, a buffer zone is established, and edge correction is performed, so that the spatial layout analysis can be carried out better.
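Formulas (8)–(11) amount to computing the coefficient of variation of the Voronoi cell areas, which can be sketched as follows; the pattern cut-off values used for classification are illustrative assumptions, not values taken from this paper.

```python
import numpy as np

def coefficient_of_variation(areas):
    """Coefficient of variation of Voronoi polygon areas (Formulas (8)-(11)):
    population standard deviation of the areas divided by their mean."""
    x = np.asarray(areas, dtype=float)
    mean = x.mean()                              # s in Formula (10)
    sigma = np.sqrt(np.mean((x - mean) ** 2))    # sigma in Formula (9), ddof = 0
    return sigma / mean

def classify_pattern(cv, low=0.33, high=0.64):
    """Map the coefficient of variation to a distribution pattern.
    The cut-off values are illustrative assumptions, not taken from the paper."""
    if cv < low:
        return "uniform"
    if cv > high:
        return "clustered"
    return "random"

areas = [12.4, 11.8, 13.1, 12.0, 30.5, 2.1]   # example Voronoi cell areas
cv = coefficient_of_variation(areas)
print(cv, classify_pattern(cv))
```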

4 Experimental Study
In order to verify the effectiveness of the village public space global planning method based on the deep neural network designed in this paper, the designed system is applied to the spatial layout of the ecological environment of a selected city.


Table 2. Experimental Parameters
Project | Data
Voltage | 200 V
Electric current | 50 A
Power | 80 W
Memory | 1024 MB
CPU main frequency | 1.4 GHz
Hard disk capacity | 1.2 TB
Hard disk speed | 7200 rpm
Operating system | Windows 8
Operating environment | VC++ 6.3
Operating ambient temperature | −20 °C to 70 °C
Average scan rate | 64 kHz
Display resolution | 1600 × 1280
Response time | 3 ms

The experimental parameters are set as shown in Table 2. After the method designed in this paper was successfully constructed, a large number of village land planning and mapping resources were introduced to test its integration capability, focusing on the basic functions of the data integration system and the efficiency of data integration, to verify whether the system has independent operation capability or data integration advantages in certain respects. In the experiment, a big data integration system and an intelligent office system were used for performance comparison with the system designed in this paper. Before the experiment, the following specifications were applied to all documents, software and hardware in the three systems: the performance of all data interfaces and ports in each method was unified to ensure that external data entering different systems present the same data format and content; routine tests were carried out on the characteristic components and programs of each system to ensure that no accidents occur during data integration; and a certain amount of abnormal data was added to each system to test its ability to process faulty data. The software of each method was subjected to static testing according to its unique code, ensuring that the statement coverage and branch coverage of the village land planning surveying and mapping data reach 100%; on this basis, the data formats of all village land planning surveying and mapping data were verified. The deep neural network in this paper is applied to the spatial layout, and the deep neural network of the spatial distribution of the ecological environment in the village public space is shown in Fig. 7.


Fig. 7. Spatial Distribution of Ecological Environment in Village Public Space (labelled regions: forestry area, riverfront)

During the experiment, the topographic and geomorphic characteristics of the village public space ecology are not considered, and the entire village public space is set as a two-dimensional plane. If there are trees in the village public space, they are treated as points. After the deep neural network is constructed, relevant parameters are obtained and the spatial structure is analysed qualitatively. As shown in Fig. 7, the deep neural network can well reflect the state of the spatial trees and determine the ecological mode of the village public space and the location of the trees, from which the ecological environment of the village public space can be judged. The growth state of trees directly affects the distribution pattern of the village public space. After the spatial layout system in this paper is applied to the village public space ecology, the edges need to be corrected to obtain the relevant parameters. The ecological distribution of the public space of the villages studied in this paper is shown in Fig. 8.

Fig. 8. Ecological Distribution of Public Space in the System Villages (coefficient of variation versus sampling scale/m)

It can be seen from Fig. 8 that when the sampling scale is small, the spatial aggregation degree follows an intensive aggregation mode in the ecological space layout of the village public space, and the trees in the village public space are obviously clustered together; such an area can be determined as a forest area. As the sampling scale increases, the coefficient of variation also increases. When the sampling scale reaches 60 m, the state presented is basically stable.


If the scale distance of the village public space is large, the spatial pattern is no longer an aggregated state but a randomly distributed state. It can be seen that the sampling scale directly affects the coefficient of variation, so it is necessary to expand the number of samples included in the scale to confirm the authenticity of the obtained spatial pattern. The experimental results for the detection accuracy of the abnormal data screening function are shown in Table 3.
Table 3. Detection Accuracy Results of Abnormal Data Filtering Function
Amount of detected data/GB | Task scheduling (detection accuracy/%) | OTP repair and adjustment (detection accuracy/%) | Neural network function (detection accuracy/%)
1 | 62.31 | 54.25 | 99.91
2 | 60.44 | 53.32 | 99.87
3 | 58.73 | 50.24 | 99.85
4 | 56.21 | 48.69 | 99.73
5 | 54.33 | 45.73 | 99.64

It can be seen from Table 3 that the ability of the traditional detection methods to filter abnormal data is poor: the limitations of the task scheduling and OTP adjustment methods in abnormal data detection are very obvious, with accuracy below 65%. The detection accuracy of the abnormal data filtering function of the method proposed in this paper is always above 99%, which can well determine whether the planning data can filter abnormal data and whether the data information can be transmitted smoothly. The planning time experiment results are shown in Fig. 9.

Fig. 9. Planning time experiment results (planning time/s versus detection data volume/GB; curves: task scheduling, OTP trim function, neural network)


According to Fig. 9, as the amount of planning data increases, the planning time also increases. When the planning data are less than 300 GB, the planning time of the method based on the OTP debugging function is lower than that of the other methods; when the planning data exceed 300 GB, the planning time of the method proposed in this paper is lower than that of the traditional methods, showing a good planning effect. To sum up, the proposed method can well reflect the status of the spatial trees, determine the ecological mode of the village public space and the location of trees, and thus judge the ecological environment of the village public space; the sampling scale directly affects the coefficient of variation, and the number of samples contained in the scale must be expanded to confirm the authenticity of the obtained spatial pattern; and the detection accuracy of the abnormal data filtering function of the proposed method is always above 99%, with a short planning time, which together demonstrate a good planning effect.

5 Conclusion
This paper designs a village public space layout system based on the deep neural network. Through the deep neural network, the overall spatial state of the village is distributed and the overall spatial layout is obtained. The coefficient of variation is set, the relationship between the coefficient of variation and the sampling scale is sought, and the spatial structure is obtained in combination with the characteristics of the village pattern. The deep neural network can clearly display the spatial ecological layout state. Users can judge the aggregation degree and sparse areas of villages according to the deep neural network, determine the areas that need further improvement according to the density information, and implement effective prevention strategies. The research in this paper has positive significance for the ecological and economic development of villages. In the future, the spatial structure information needs to be considered further, and a more complete village ecological space layout model should be determined on the basis of the obtained spatial structure information, so as to realize the transformation from two-dimensional plane space distribution to three-dimensional space distribution.
Acknowledgement. 2018 Hainan Province Philosophy and Social Science Planning Project: An Analysis of the Characteristics of Public Spaces in Haikou Volcano Villages from the Perspective of Natural Symbiosis NSK(YB)18-57.

References
1. Cristescu, A.: In: Peter, N., Karima, K., Adriana, K.-M. (eds.) Sustainable Villages and Green Landscapes in the "New Urban World", Shaker Publishing, 2019. Romanian Journal of Regional Science 15(1), 126–128 (2021)
2. Xu, Y., Zhao, B., Zhai, Y., et al.: Maize diseases identification method based on multi-scale convolutional global pooling neural network. IEEE Access 1(13), 27959–27970 (2021)
3. Zheng, Q., Fu, D., Wang, Y., et al.: A study on global optimization and deep neural network modeling method in performance-seeking control. Proceedings of the Institution of Mechanical Engineers 234(1), 46–59 (2020)


4. Brown, R.D.: Estimating terrestrial radiation for human thermal comfort in outdoor urban space. Atmosphere 12(12), 1–10 (2021)
5. Widyaningsih, E., Radiaswari, D.A.R.: Lost space utilization for public activities at railway crossing in Mejing and Sedayu Village, Yogyakarta. Journal of Physics: Conference Series 1823(1), 12–18 (2021)
6. Kohiyama, M., Oka, K., Yamashita, T.: Detection method of unlearned pattern using support vector machine in damage classification based on deep neural network. Struct. Control. Health Monit. 27(8), 1–23 (2020)
7. Jin, X., Xiong, J., Gu, D., et al.: Research on ship route planning method based on neural network wave data forecast. IOP Conference Series: Earth and Environmental Science 638(1), 12–33 (2021)
8. Xw, A., Qt, A., Lz, B.: A deep neural network of multi-form alliances for personalized recommendations. Inf. Sci. 531(8), 68–86 (2020)
9. Zhang, Y., Wu, X., Gach, H.M., et al.: GroupRegNet: a groupwise one-shot deep learning-based 4D image registration method. Phys. Med. Biol. 66(4), 45–57 (2021)
10. Xie, R., Meng, Z., Wang, L., et al.: Unmanned aerial vehicle path planning algorithm based on deep reinforcement learning in large-scale and dynamic environments. IEEE Access 2(9), 24884–24900 (2021)
11. Xiaoyuan, L., Liyan, Z., Jun, L., et al.: Mining outliers in multi-scale time series data based on neural network technology. Computer Simulation 38(1), 231–235 (2021)

A Multi Stage Data Attack Traceability Method Based on Convolutional Neural Network for Industrial Internet
Yanfa Xu1(B) and Xinran Liu2
1 Department of Information Engineering, Shandong Vocational College of Science and Technology, Weifang 261053, China
[email protected]
2 State Grid Liaoning Marketing Service Center, Shenyang 110000, China

Abstract. In order to accurately define the network area to which a data attack belongs and avoid multi-stage delay in the industrial Internet, a multi-stage data attack traceability method based on a convolutional neural network is proposed for the industrial Internet. The convolutional neural network is used to solve the training expression of the classifier. Combined with multi-stage attack data and information samples of the industrial Internet, the expression conditions of the encryption algorithm are improved to construct the multi-stage consensus mechanism of the industrial Internet. The value range of the multi-stage data of the industrial Internet is defined through workflow metadata, so as to determine the effect of the automatic traceability capture mechanism on the data samples and complete the traceability of multi-stage data attacks on the industrial Internet. The comparative experimental results show that the proposed method can accurately delimit the sample interval of data attack behavior in the six network regions selected in this experiment, and has strong practical value in solving the multi-stage delay problem of the industrial Internet.
Keywords: Convolutional Neural Network · Industrial Internet · Data Attack · Classified Training · Metadata · Automatic Capture Mechanism · Network Area · Multi Stage Delay

1 Introduction
Convolutional neural networks have been widely used in various machine learning tasks. In the process of updating the neural network parameters, the gradient of each layer of the network is obtained by the back-propagation algorithm, that is, the chain rule of the neural network [1]. Tracing the processing of a big data model can help evaluate data quality and reproduce the data generation process; when a certain result is questioned, its source can be traced through auditing; and if there is an error in the data, the location and cause of the error can be determined, so that ultimately each data item can be relied on. As an important means of data analysis, traceability can help decision-makers remove noisy and useless data and maximize the value of information in decision-making.


Traceability technology has been widely studied in the fields of databases, workflows and distributed system interaction, but traceability based on big data model analysis is still a field well worth exploring. Traditional traceability methods are mainly applicable to databases or data warehouses, but not to the big data field, owing to its diversity, heterogeneity, large scale and other characteristics. Data traceability is information about source data and the data creation process, which can be used to evaluate data quality, audit and track data sources, and quickly locate errors. At present, there is much research on data traceability in the database field, but little in the big data field [2]. Multi-stage data attacks on the industrial Internet affect the stability of network operation: they not only cause the network system to exhibit multi-stage delay, but also prevent data samples from being transmitted to the correct node units. In order to avoid such situations, a multi-stage data attack traceability method based on a convolutional neural network is proposed for the industrial Internet. On the basis of establishing the multi-stage consensus mechanism of the industrial Internet using the convolutional neural network, a data traceability framework is constructed. By defining workflow metadata, the automatic traceability capture mechanism is improved, so as to realize the smooth application of the multi-stage data attack traceability method for the industrial Internet.

2 Multi Stage Consensus Mechanism of Industrial Internet
As the basis for tracing data attacks, the multi-stage consensus mechanism of the industrial Internet includes three execution processes: convolutional neural network construction, classification-focused training, and encryption algorithm improvement. This chapter focuses on these contents.
2.1 Convolutional Neural Network
A convolutional neural network (CNN) is a multilayer feedforward neural network. Each layer uses a set of convolution kernels for multiple transformations. The convolution operation helps extract useful features from locally correlated data and distributes the output of the convolution kernels to nonlinear processing units. This nonlinearity generates different activation patterns for different responses, which helps learn the semantic differences in images. The CNN is specially designed for processing data samples; therefore, the neurons in each layer are organized in the three dimensions of height, width and depth, just as the information parameters in data samples distinguish different definition values. The important attributes of the CNN are hierarchical learning, automatic feature extraction, multi-task processing and weight sharing, and it is mainly composed of convolution layers, activation layers, pooling layers and fully connected layers. When facing low-dimensional data, each layer of the neural network can be designed as a fully connected layer. However, it is impractical to connect neurons to all neurons of the previous layer when dealing with high-dimensional inputs such as data samples. To this end, the data sample space can be divided into multiple regions, and each neuron can be connected to a local region of the input. The range of this connectivity is called the receptive field of the neuron, which is equivalent to the size of the filter [3].


When dealing with the spatial and depth dimensions, it is important to emphasize this asymmetry: the connection is local in space, but always global across the whole depth of the input. Let α, δ and ε represent three unequal neuron parameters whose minimum values equal the natural number 1. Let lα represent the data sample connection feature based on the parameter α, lδ the data sample connection feature based on the parameter δ, lε the data sample connection feature based on the parameter ε, χ the data sample space planning coefficient, and β the data sample input coefficient. Combining the above physical quantities, the convolutional neural network can be defined as in Formula (1):

j = Σ_{α=1}^{n} Σ_{δ=1}^{n} Σ_{ε=1}^{n} χ(lα · lδ · lε)² / β    (1)

The increase in the number of neural network layers produces a large number of parameters, and the CNN uses the weight sharing mechanism to control this number. Assuming that the weights of the data window connected to each neuron are fixed, the number of parameters can be greatly reduced through parameter sharing. Intuitively, scanning the image with the same filter is equivalent to one round of feature extraction, which yields one feature map. The parameters of the filter are fixed, so every region of the data sample is scanned by the same filter with the same weight values; this is called weight sharing. Figure 1 shows the classic CNN structure applied in the traceability method for industrial Internet multi-stage data attacks.

Fig. 1. Classical CNN structure of convolutional neural network (convolution nodes and neural nodes acting on the sample intervals of the industrial Internet multi-stage data samples)
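The layer organization described above — shared-weight convolution, non-parametric pooling, and a fully connected classifier — can be sketched as follows; the layer sizes and input shape are illustrative assumptions rather than the authors' configuration.

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """Minimal CNN with shared-weight convolution kernels, pooling, and a fully connected classifier."""
    def __init__(self, in_channels=1, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1),  # local receptive field, weights shared
            nn.ReLU(),
            nn.MaxPool2d(2),                                       # non-parametric pooling
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)       # assumes 32x32 inputs

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

# Example: a batch of 4 single-channel 32x32 "data sample" maps.
logits = SmallCNN()(torch.randn(4, 1, 32, 32))
print(logits.shape)  # torch.Size([4, 2])
```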


After the two steps of local perception and parameter sharing, the number of weights generated in the original training process is reduced to a certain extent, but the feature dimension increases, which can lead to over-fitting. Therefore, the high-dimensional features need to be reduced before training the classifier, and the pooling operation is designed to reduce the complexity of the convolutional neural network. Like the convolution layer, the pooling layer connects neurons to a square region across the width and height of the previous layer. The main difference between convolution and pooling is that neurons in the convolution layer learn weights and biases during training, while neurons in the pooling layer learn no weights or biases but apply a fixed function to their inputs; pooling is therefore a non-parametric operation.
2.2 Classified Training
The transition from the source model to the intermediate model is a typical transfer learning process based on the convolutional neural network. The combination of a pre-trained model and the layer-freezing method is used for the construction and training of the CNN model, including the following key processes (a minimal code sketch of this freeze-and-retrain procedure is given below):
(1) Initialization stage: retain the weight parameters of the pre-trained CNN model as the feature extraction source in the first stage of model training;
(2) Load the parameters and weights of the pre-trained model other than the last fully connected classification layer, freeze the convolution and pooling layers of the pre-trained model, and train only the new classifier layer;
(3) Second stage: transfer the retained parameters of the convolution and pooling layers of the pre-trained model to the CNN model, and connect them with the new fully connected classification layer trained in the first stage to obtain the intermediate model.
Convolutional neural networks usually freeze the weights of the first a layers of the pre-trained model and then use data from the target domain to retrain the remaining s − a layers. This process is called fine-tuning. The purpose of fine-tuning is to extract high-level features of the target domain, reduce the content difference between the source domain and the target domain, and improve the model recognition rate [4]. In general, increasing the number of network layers helps extract more high-level features. Therefore, it is worth appropriately increasing the number of network layers while conducting transfer learning, that is, adding an adjustment module, so that the model can obtain the high-order statistical features unique to the target domain.
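As a rough sketch of the freeze-and-retrain procedure listed above, the following code loads a pretrained backbone, freezes its convolution and pooling layers, and trains only a new fully connected classification layer; the choice of backbone (ResNet-18) and the layer names are assumptions for illustration, not the authors' model.

```python
import torch.nn as nn
from torchvision import models

def build_intermediate_model(num_classes: int) -> nn.Module:
    """Stages 1-2: keep pretrained convolution/pooling weights, train only a new classifier head."""
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)  # pretrained feature extractor
    for param in model.parameters():
        param.requires_grad = False                                   # freeze convolution and pooling layers
    model.fc = nn.Linear(model.fc.in_features, num_classes)           # new fully connected classification layer
    return model                                                      # only model.fc receives gradients

model = build_intermediate_model(num_classes=5)
trainable = [n for n, p in model.named_parameters() if p.requires_grad]
print(trainable)  # ['fc.weight', 'fc.bias']
```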

The weight value calculations based on the data samples of the front a layers and the rear s − a layers are shown in Formulas (2) and (3):

da = j² − φa · Ṡa² / (Smax − Smin)    (2)

ds−a = j² − φs−a · Ṡs−a² / (Smax − Smin)    (3)


In the formulas, φa represents the content difference coefficient of the data samples in the front a layers, φs−a represents the content difference coefficient of the data samples in the rear s − a layers, Ṡa represents the value characteristics of the data samples in the front a layers, Ṡs−a represents the value characteristics of the data samples in the rear s − a layers, Smin represents the minimum residual error of the data samples, and Smax represents the maximum residual error of the data samples. The consensus method based on the convolutional neural network minimizes the distribution difference between the two domains by projecting the data sample domain and the target domain into the same feature space; under the new feature space, the distribution difference between the data sample domain and the target domain decreases. The weighting methods based on the convolutional neural network compare the distributions of the source data and the target data more directly; these methods weight or filter the information parameters in the data sample domain to minimize the distribution difference between the sample domain and the target domain. On the basis of Formulas (2) and (3), let γ represent the coding coefficient of the data sample parameters in the convolutional neural network, ϕa the convolution vector based on the data samples of the first a layers, and ϕs−a the convolution vector based on the data samples of the rear s − a layers. Combining the above physical quantities, the retraining of the convolutional neural network classifier is derived as Formula (4):

D̄ = Σ_{γ=1}^{n} √( γ · da · ds−a / (ϕa² + ϕs−a²) )    (4)

The relative weight of the source domain samples is calculated, the source domain samples whose distribution differs greatly from the target domain data are deleted, and the training data with higher weights are selected as the new source domain samples. Therefore, although the data distributions of the source domain and the target domain differ greatly, after the relative weights are calculated, the distribution difference between the remaining source domain samples and the target domain samples is relatively small.
2.3 Encryption Algorithm
The multi-stage data encryption algorithm for the industrial Internet based on the convolutional neural network involves a public key and a private key, which are selected and kept by the users themselves; the public key can be given to anyone. The public key can be calculated from the private key, but the private key cannot be calculated from the public key in polynomial time. The public key cryptosystem can be used to encrypt messages: the sender uses the public key to encrypt the message to obtain the ciphertext, and after receiving the ciphertext, the receiver uses the private key to decrypt it and obtain the plaintext. The encryption algorithm is built on a secret sharing scheme, whose main idea is to disperse the secret and let it be kept by multiple participants [5]. When the shared secret needs to be reconstructed, if the number of participants involved in recovering the secret exceeds a given limit, the whole original shared secret can be recovered through their joint efforts.


If the number of participants involved in recovering the secret is less than the specified limit, then even working together they cannot obtain any information about the original shared secret; the specified limit is called the threshold. The threshold encryption algorithm should meet the following requirements: if the number of sub-keys exceeds the threshold, the whole plaintext can be recovered; if the number of sub-keys is less than the threshold, no information about the plaintext can be obtained; and no master key information can be obtained from the sub-keys. Let L1 represent the key threshold value in the first encoding process and L2 the key threshold value in the second encoding process, where L1 ≥ L2. The specific definition of the threshold values L1 and L2 is Formula (5):

L1 = Σ_{ι=1}^{m} D̄ · ι · |l1|²
L2 = Σ_{ι=1}^{m} D̄ · ι · |l2|²    (5)

In Formula (5), ι represents the ciphertext transcoding coefficient, l1 represents the ciphertext sample value in the first encoding process, and l2 represents the ciphertext sample value in the second encoding process. For L1 ≥ L2 to hold, the indicator values l1 and l2 must satisfy l1 ≥ l2. The consensus mechanism is an agreement that ensures the state and elasticity of the convolutional neural network connection reach consensus, and it is also a key component of the network system. To ensure the continuous and normal operation of the network templates, appropriate consensus mechanisms must be designed according to the different application scenarios [6]. Although the consensus mechanism of the industrial Internet based on the improved CNN model can be applied to cloud storage platforms, problems such as the centralization of equity resources, reuse and information disclosure remain.
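The (t, n)-threshold behaviour described above can be illustrated with a standard Shamir secret-sharing sketch over a prime field; this is a generic textbook construction used for illustration and is not claimed to be the specific encryption algorithm of this paper.

```python
import random

PRIME = 2**61 - 1  # a Mersenne prime large enough for small demo secrets

def make_shares(secret: int, threshold: int, n_shares: int):
    """Split `secret` into n_shares points of a random degree-(threshold-1) polynomial."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n_shares + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0; needs at least `threshold` distinct shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

shares = make_shares(secret=123456789, threshold=3, n_shares=5)
print(recover(shares[:3]))  # any 3 of the 5 shares recover 123456789; fewer reveal nothing
```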

3 Data Attack Traceability Scheme
On the basis of the convolutional neural network, a data traceability framework is constructed, and the automatic traceability capture mechanism is then improved by defining workflow metadata, so as to achieve the smooth application of the traceability method for industrial Internet multi-stage data attacks.
3.1 Data Traceability and Tracking Framework
The core of data traceability is traceability metadata. Any traceability system needs to manage its traceability information well, which inevitably requires an overall traceability framework. The traceability of industrial Internet multi-stage data attacks was first developed in the database field, specifically for scientific or regulatory databases, that is, databases that need to be annotated under special monitoring and management.


These annotations are authoritative descriptions by professionals and have a wide range of applications [7]. For example, an encyclopaedic database such as Wikipedia needs a large number of professionals to edit behind the scenes, and the content finally presented on the network contains a large amount of information from other databases, so such a regulatory database raises the problem of data ownership. The traceability methods in databases are mainly the annotation method and the inversion method: the annotation method records and stores data items and transfers them together with the data, while the inversion method needs no additional storage and directly constructs an inverse expression for reverse tracking. The concept of a workflow is to transfer information or tasks among participants according to certain rules to achieve certain effects; it can be divided into scientific workflow and business workflow. A scientific workflow executes the traditional scientific research process in the form of a workflow: rather than describing the whole process as a series of operating steps, it uses a data-driven approach that takes the output of the previous stage as the input of the next stage. Scientific workflow pays more attention to the reliability of data and puts forward higher requirements for the accuracy of each step of data traceability. Let Ω represent the data traceability set, defined as Formula (6):

Ω = { k | k = √( ((f1 g1)² + (f2 g2)) / (L1 × L2) ) }    (6)

In Formula (6), k represents a random data sample in the traceability set, g1 and g2 represent two unequal data stream parameters, f1 represents the tracking coefficient based on the data stream parameter g1, and f2 represents the tracking coefficient based on the data stream parameter g2. Let k1 and k2 be two non-coincident sample parameters in the data traceability set Ω, with k1, k2 ∈ Ω. On the basis of this value range, let j̇ represent the attack behavior vector of the industrial Internet multi-stage data; the definition condition of the data traceability framework is then derived as Formula (7):

J = 1 − j̇² · ‖(k1 + k2) / (k1 · k2)‖²    (7)

In the industrial Internet environment, data quality evaluation often needs to determine its source or creation process through the context of data. In today’s big data environment, the resulting data often comes from various sources or is obtained through continuous and complex transformation and long-term accumulation. In the scientific experiment environment, it is more and more difficult to track the data source and complex data conversion relationship for a specific scientific research result [8]. However, data traceability can capture various traceability metadata in the data generation process, through which the dependency between data or transformations can be analyzed, and then data quality assessment and reliability verification can be carried out.


3.2 Workflow Metadata

The previous section introduced the data traceability framework, which is designed to ensure the execution of tasks. However, actual traceability does not require all of its data; the key is to extract the relationships between models [9]. The workflow metadata is designed for traceability. It mainly includes the global basic description of the workflow and the dependencies between models, as detailed below. The global description of the model workflow for industrial Internet multi-phase data attack behavior is shown in Table 1.

Table 1. Workflow Model of Data Attack Behavior

Name        | Category | Explanation
Paths       | String   | Set of IDs of relationship edges between models
ModelflowID | String   | Unique identification of the workflow
ModelList   | String   | Model ID collection contained in the workflow
Description | Text     | Workflow description information
ExecuteType | String   | Execution type of the model
Version     | String   | Model version number
Parameter   | String   | Constrains the transmission time of data samples from one model area to another
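To make the metadata model in Table 1 concrete, the sketch below represents one workflow record as a plain Python data structure. The field names follow Table 1; the class name and the example values are illustrative assumptions rather than part of the original design.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class WorkflowMetadata:
    """Global description of a model workflow (fields follow Table 1)."""
    modelflow_id: str                                     # ModelflowID: unique identification of the workflow
    model_list: List[str] = field(default_factory=list)   # ModelList: model IDs contained in the workflow
    paths: List[str] = field(default_factory=list)        # Paths: IDs of relationship edges between models
    description: str = ""                                 # Description: workflow description information
    execute_type: str = ""                                # ExecuteType: execution type of the model
    version: str = ""                                     # Version: model version number
    parameter: float = 0.0                                # Parameter: bound on sample transmission time between model areas

# Hypothetical example record
wf = WorkflowMetadata(
    modelflow_id="WF-001",
    model_list=["M1", "M2", "M3"],
    paths=["E1", "E2"],
    description="Multi-phase attack traceability workflow",
    execute_type="batch",
    version="1.0",
    parameter=0.5,
)
```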

Defining workflow metadata also determines the capabilities of the other traceability objects:
(1) Cloud storage users: own their cloud data and have data sharing relationships with other cloud storage users.
(2) Cloud service provider (CSP): mainly responsible for the registration of industrial Internet users and for the storage and monitoring of cloud storage data. When a cloud storage user attacks multi-phase data, the CSP monitors the operation to generate cloud storage data source information and manages the generated information. The CSP strictly performs its own functions but remains curious about the cloud data of cloud storage users.
(3) Data source information database: mainly responsible for storing all cloud data source information, which is used to track user violations.
(4) Blockchain: mainly responsible for publishing cloud storage data source information and generating block vouchers for it. The blockchain stores the hash value of the cloud storage data source information in a block; when the source information needs to be verified, the blockchain generates a block voucher related to it.
(5) Data source information auditor (PA): mainly responsible for updating and maintaining the data source information database. The PA uses block vouchers to verify the cloud storage data source information and adds the verified results to the corresponding entries in the data source information database. The PA strictly performs its own functions but will also try to obtain information of interest.

3.3 Traceability Automatic Capture Mechanism

Coarse-grained traceability of multi-stage data attacks on the industrial Internet only yields process-level traceability, which is caused by the black-box effect in workflows. Because the processing algorithm in any workflow system is transparent to users, this reduces the burden on users as far as workflow process management is concerned; for traceability management, however, encapsulation means that details are shielded. A general workflow traceability model can therefore only represent process-level, coarse-grained traceability and has no knowledge of the processing inside the nodes, which inevitably leads to the dependency differentiation problem of data items: it is impossible to determine the exact source items of a given item in the result data. The key problems to be solved in realizing the automatic traceability capture mechanism are therefore: (1) how to perform traceability capture — this requires analyzing the internal execution process of each model, and since each model handles a different business and specific business analysis cannot be done for all models, the problem has to be solved at a more abstract level; (2) how to realize tag storage of the traceability information — the tags mainly record the dependencies between the input and output data items during model execution, so as to enable tracing; (3) how to trace the source — tracing is a query algorithm based on the traceability information stored in (2).

Let μ represent the performance strength of industrial Internet multi-phase data attacks, b̂ represent the performance characteristics of the attacks, and v1, v2, ..., vn represent the non-zero coding coefficients of the n traceable nodes. With the support of the above physical quantities, combined with Formula (7), the traceability automatic capture mechanism can be defined as Formula (8):

$$M = \min n \times \frac{\hat{b} \times J}{v_1 \times v_2 \times \cdots \times v_n} \tag{8}$$

The searchable encryption algorithm is used to process the industrial Internet multi-phase data attacks, and the ciphertext is stored in the CSP cloud storage database. This data storage method not only ensures the confidentiality of the data samples but also enables flexible multi-user data sharing [10]. By monitoring users' operations on cloud data, cloud data source information can be collected, so that the source of attacks can be traced accurately. Traceability metadata management refers to the operations of adding, deleting, querying and modifying the traceability metadata of the model workflow. The most critical stage is describing the input, output and intermediate data of a model before it is implemented; this metadata is added by the model developer before the model is implemented and provides the basis for subsequent traceability.
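As a rough illustration of the add/delete/query/modify operations described above, the following sketch keeps traceability metadata in an in-memory dictionary. A real system would persist it in the data source information database; all names and fields here are assumptions made for illustration.

```python
class TraceabilityMetadataStore:
    """Minimal in-memory store for workflow traceability metadata (illustrative only)."""

    def __init__(self):
        self._records = {}          # modelflow_id -> metadata dict

    def add(self, modelflow_id, metadata):
        # Added by the model developer before the model is implemented
        self._records[modelflow_id] = dict(metadata)

    def query(self, modelflow_id):
        return self._records.get(modelflow_id)

    def modify(self, modelflow_id, **updates):
        self._records.setdefault(modelflow_id, {}).update(updates)

    def delete(self, modelflow_id):
        self._records.pop(modelflow_id, None)

store = TraceabilityMetadataStore()
store.add("WF-001", {"inputs": ["raw.csv"], "outputs": ["result.csv"], "version": "1.0"})
store.modify("WF-001", version="1.1")
print(store.query("WF-001"))
```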


Traceability is the core function of the traceability system, and includes coarse-grained traceability, fine-grained traceability and traceability graph display. The coarse-grained traceability function is based on the traceability metadata and tracks users' needs at different levels, including tracking the data source from the result data, tracking its generation process from the result data, tracking the source of intermediate data, and storing the traceability map in sequence; the data handled by this process are all at file level. Fine-grained traceability uses a recursive method to track the data items in the result file; its implementation depends on the underlying Hadoop extension and mainly includes the capture and storage of traceability information in the map and reduce processes. The traceability map display function mainly displays the traceability information in a visual form.
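The recursive, item-level tracing described above can be sketched as a lookup over stored dependency tags. In the sketch below, the tag table maps each output data item to the input items it was derived from; it is a simplified stand-in for the traceability information captured during the map and reduce stages, and all item names are hypothetical.

```python
def trace_source_items(item, lineage, _seen=None):
    """Recursively collect the original source items that a result item depends on.

    lineage: dict mapping an output item to the list of items it was derived from.
    """
    if _seen is None:
        _seen = set()
    if item in _seen:               # guard against cyclic or repeated dependency records
        return set()
    _seen.add(item)
    parents = lineage.get(item, [])
    if not parents:                 # no recorded parents: the item is an original source
        return {item}
    sources = set()
    for parent in parents:
        sources |= trace_source_items(parent, lineage, _seen)
    return sources

# Hypothetical dependency tags captured during map/reduce execution
lineage = {
    "result.csv:row7": ["reduce_out:key3"],
    "reduce_out:key3": ["map_out:rec12", "map_out:rec40"],
    "map_out:rec12": ["input_a.csv:rec12"],
    "map_out:rec40": ["input_b.csv:rec40"],
}
print(trace_source_items("result.csv:row7", lineage))
# {'input_a.csv:rec12', 'input_b.csv:rec40'}
```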

4 Example Analysis

4.1 Variable Description and Experimental Process

The accuracy with which the selected traceability method defines the sample interval of industrial Internet data attacks affects the probability of multi-stage delay in the network system. In general, the higher the definition accuracy, the lower the probability of multi-stage delay; conversely, the lower the accuracy, the higher the probability. The accuracy with which the selected traceability method defines the sample interval of industrial Internet data attacks is defined by Formula (9):

$$\omega = \xi \cdot \psi \tag{9}$$

where ξ represents the network complexity coefficient and ψ represents the aggressiveness intensity index. The specific implementation process of this experiment is:
Step 1: Build the industrial Internet system shown in Fig. 2;
Step 2: Use the tracing method based on convolutional neural network to control the Internet host, and record the numerical changes of the network complexity coefficient and the aggressiveness intensity index;
Step 3: Use the big data traceability method to control the Internet host, and repeat Step 2;
Step 4: Release the temporary data information samples in the Internet host;
Step 5: Use the multi-source traceability method to control the Internet host, and repeat Step 2;
Step 6: Compare the three groups of data variables and summarize the experimental rules.

4.2 Experimental Results

Figures 3 and 4 show the numerical changes of the network complexity coefficient (ξ) and the aggressiveness intensity index (ψ) under the influence of the convolutional neural network based traceability method, the big data traceability method and the multi-source traceability method.


Fig. 2. Industrial Internet Experimental Structure

Fig. 3. Network complexity coefficient (ξ)

It can be seen from the analysis of Fig. 3 that the mean value of the ξ index is relatively high under the traceability method based on convolutional neural network, with a maximum of 1.95 over the whole experiment; under the big data traceability method, the average level of the ξ index is low, with a maximum of only 1.50; under the multi-source traceability method, the average level of the ξ index lies between the other two methods, with a maximum of 1.65.

Fig. 4. Aggressiveness intensity index (ψ)

It can be seen from the analysis of Fig. 4 that the mean value of the ψ index is the highest under the traceability method based on convolutional neural network, with a maximum of 0.65 over the whole experiment; under the multi-source traceability method, the mean value of the ψ index is the lowest, with a maximum of only 0.35; under the big data traceability method, the average level of the ψ index lies between the other two methods, with a maximum of 0.50.

Combining the experimental results for the ξ and ψ indicators in Fig. 3 and Fig. 4, the accuracy with which each traceability method defines the sample interval of industrial Internet data attacks can be calculated. To reduce the amount of numerical computation, the maximum values of the ξ and ψ indicators are substituted into Formula (9), i.e. ω1 = 1.95 × 0.65, ω2 = 1.50 × 0.50 and ω3 = 1.65 × 0.35, which gives Formula (10):

$$\begin{cases} \omega_1 = 1.27 \\ \omega_2 = 0.75 \\ \omega_3 = 0.58 \end{cases} \tag{10}$$

where ω1 represents the definition accuracy of the convolutional neural network based traceability method, ω2 that of the big data traceability method, and ω3 that of the multi-source traceability method. According to the numerical results of Formula (10), the convolutional neural network based traceability method has the strongest ability to accurately define the sample interval of industrial Internet data attacks, the multi-source traceability method has the weakest, and the big data traceability method lies between the two. That is, the convolutional neural network based traceability method is most suitable for accurately defining the network area of data attacks, which is consistent with the design goal of avoiding multi-stage delay in the industrial Internet.

5 Conclusion

In order to ensure the data quality after big data platform processing and the process of debugging result data generation, the introduction of data traceability is necessary. Therefore, the traceability of industrial Internet multi-phase data attacks based on convolutional neural network is a topic worthy of research. Firstly, the traceability information metadata model of the model workflow is designed, a data sample traceability method based on convolutional neural network is proposed on this metadata model, and the advantages and disadvantages of this method are analyzed; secondly, specific traceability objects are discussed according to the situation; finally, the whole industrial Internet environment is tested, the feasibility of the system and the effectiveness of the traceability method are demonstrated through experimental analysis, and the expected goal is finally achieved.

References

1. Liu, S., Li, Y., Fu, W.: Human-centered attention-aware networks for action recognition. Int. J. Intell. Syst. 37(12), 10968–10987 (2022)
2. Zhu, Y., Liu, X.: Big data visualization of the quantification of influencing factors and key monitoring indicators in the refined oil products market based on fuzzy mathematics. J. Intell. Fuzzy Syst. 40(4), 6219–6229 (2021)
3. Le, D.-N., Parvathy, V.S., Gupta, D., et al.: IoT enabled depthwise separable convolution neural network with deep support vector machine for COVID-19 diagnosis and classification. Int. J. Mach. Learn. Cybern. 12(11), 3235–3248 (2021)
4. Chattopadhyay, A., Mitra, U.: Security against false data-injection attack in cyber-physical systems. IEEE Trans. Control Netw. Syst. 7(2), 1015–1027 (2020)
5. Youzhi, F., Yunfeng, Z.: Simulation of automatic encryption algorithm for high-speed network multi-segment support data. Computer Simulation 38(12), 237–240 (2021)
6. Li, H., Spencer, B.F., Champneys, M.D., et al.: On the vulnerability of data-driven structural health monitoring models to adversarial attack. Structural Health Monitoring 20(4), 1476–1493 (2021)
7. Mahapatra, K., Ashour, M., Chaudhuri, N.R., et al.: Malicious corruption resilience in PMU data and wide-area damping control. IEEE Trans. Smart Grid 11(2), 958–967 (2020)
8. Battur, R., Amboji, J., Deshpande, A., et al.: A distributed deep learning system for web attack detection on edge devices. Control Eng. Pract. 4(11), 25–36 (2020)
9. Moussa, B., Al-Barakati, A., Kassouf, M., et al.: Exploiting the vulnerability of relative data alignment in phasor data concentrators to time synchronization attacks. IEEE Trans. Smart Grid 11(3), 2541–2551 (2020)
10. Bordel, B., Alcarria, R., Robles, T.: Denial of chain: evaluation and prediction of a novel cyberattack in blockchain-supported systems. Futur. Gener. Comput. Syst. 116(11), 426–439 (2021)

Machine Learning Based Method for Mining Anomaly Features of Network Multi Source Data

Lei Ma(B), Jianxing Yang, and Jingyu Li

Beijing Polytechnic, Beijing 100016, China
[email protected]

Abstract. Aiming at the problem that the current network multi-source data anomaly diagnosis is not effective, this paper proposes a method of network multi-source data anomaly feature mining based on machine learning. First of all, a multi-source data feature recognition model is built based on the multi-level structure of machine learning. Then, the network multi-source data feature classification algorithm is designed and optimized to identify and locate the abnormal data features based on the classification results. Finally, the network multi-source data abnormal data screening model is constructed to mine the abnormal characteristics. The experimental results show that this method has high practicability and accuracy, and fully meets the research requirements. Keywords: Machine Learning · Network Multi-source Data · Abnormal Data · Data Mining

1 Introduction

With the rapid development of big data technology, the amount of data generated by people has increased exponentially, enabling the sharing and multi-purpose use of large-scale data. However, in the process of data transmission, storage and interaction, information may be connected into a complex data network [1]. With the continuous expansion of the data network scale, the number of multi-source target data is also increasing. The stability of the network can be guaranteed by analyzing the characteristics of multi-source data and mining the abnormal data [2]. Generally, in the process of analyzing and mining network multi-source data, it is necessary to establish a multi-source data fusion tracking model and combine big data mining and information reconstruction methods to conduct data fusion detection and feature analysis, so as to improve the mining ability of abnormal data [3]. On the basis of existing research, this paper designs a new method for mining abnormal features of network multi-source data based on machine learning. The design idea is as follows: based on the principle of machine learning, a multi-source data feature recognition model is constructed.


Implement feature classification for network multi-source data, and identify and locate abnormal data based on classification results. Build a network multi-source data abnormal data screening model and mine the abnormal features.

2 Network Multi-source Data Learning Abnormal Feature Mining

2.1 Construction of Multi-source Data Feature Recognition Model

Machine learning data mining is a key link in the process of network data anomaly feature mining. This study uses the multi-level structure of machine learning to achieve data collection. The model structure is shown in Fig. 1.

Fig. 1. Machine learning data collection model

Based on the model in Fig. 1, the element information is modeled according to the prior template structure in the knowledge base. Match the data observation behavior with the assumed template. If the matching degree is less than the set threshold, the data will be discarded, otherwise, the data behavior sequence will be interpreted with the assumption template. Figure 2 shows the principle of data feature mining. Data mining technology is a process of discovering valuable and meaningful rules behind these data through some technical means from the massive information in the database. The function of the data mining model is to mine and analyze the feature information useful for command and decision-making from the massive and complex situation machine learning, and finally present the analysis results to the decision-maker as a situation visualization multi view. Therefore, this paper presents the corresponding data mining model functional framework, as shown in Fig. 3.


Fig. 2. The principle of multi-source data feature mining
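The template-matching rule sketched in Fig. 2 — discard data whose matching degree falls below a set threshold, otherwise interpret the behavior sequence with the matched template — might look roughly as follows. The overlap-ratio similarity measure, the 0.6 threshold and the sample templates are illustrative assumptions, not the paper's exact definitions.

```python
def match_template(observed, templates, threshold=0.6):
    """Match an observed action sequence against situation templates.

    observed:  list of action labels extracted from the data.
    templates: dict mapping template name -> list of expected action labels.
    Returns the best-matching template name, or None if the data is discarded.
    """
    best_name, best_score = None, 0.0
    for name, expected in templates.items():
        overlap = len(set(observed) & set(expected))
        score = overlap / max(len(expected), 1)   # simple overlap ratio as matching degree
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None

templates = {"scan": ["probe", "probe", "connect"],
             "exfiltration": ["connect", "read", "upload"]}
print(match_template(["connect", "read", "upload", "idle"], templates))   # 'exfiltration'
```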


Fig. 3. Functional structure of the multi-source data feature evaluation model

The functional requirements of the model include:
– human-computer interaction interface design, mainly including user login and logout, selection of the core functions within the model, real-time display of situation results, and query and display of feature results;
– core function design of the machine learning data mining model, including data preprocessing operations such as cleaning, integration, specification and transformation for missing, abnormal, ill-formatted and redundant situational data;
– mining and analysis of target groups: the behavioral state of the target group is used to identify the target's combat intention and to establish a functional relationship view between target objects, and the former two are combined to establish a target group interaction relationship view;
– feature mining data management function design, including real-time storage of situational data and situation results, together with query, modify and delete operations.

2.2 Multi Source Data Feature Anomaly Recognition Algorithm

The data features mined above are unprocessed. At this point the data may have different feature scales, and features with different scales cannot be compared with one another directly. Machine learning can solve this feature-scale problem, and the method can be divided into region zooming a and standardization A.


Before standardization, the characteristic value a is required to approximately follow a normal distribution, or after normalization it should satisfy the standard normal distribution r. The standardization formula is as follows:

$$x' = \frac{a - A}{S} - r \tag{1}$$
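A minimal sketch of the standardization in formula (1), assuming A and S are the sample mean and standard deviation of the feature and r is a fixed offset; these interpretations, and the sample values, are assumptions made for illustration.

```python
import statistics

def standardize(values, r=0.0):
    """Apply x' = (a - A) / S - r to every feature value a (formula (1))."""
    A = statistics.mean(values)      # assumed: A is the sample mean
    S = statistics.stdev(values)     # assumed: S is the sample standard deviation
    return [(a - A) / S - r for a in values]

print(standardize([2.0, 4.0, 6.0, 8.0]))
```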

Then the boundary information is used to shorten the range of features to a feature point. For some features that need to be quantified, the information they contain is not numerical but a literal interval; in this case it needs to be transformed into a quantitative characteristic. In addition, some quantitative characteristics have a large numerical range. A threshold can then be set: if the value is greater than the threshold it is set to 1, and if it is less than or equal to the threshold it is set to 0:

$$x' = \begin{cases} 1, & a > \text{threshold} \\ 0, & a \le \text{threshold} \end{cases} \tag{2}$$

The principle of collective outlier detection based on data distribution fitting is mainly divided into three steps. First, the sample data is fitted, and the resulting distribution function is regarded as the distribution structure of normal sequence data. Second, the actual data distribution is fitted, and the resulting distribution function is the mixed distribution structure of normal data and collective outliers. Finally, the parameter characteristics of the two distribution functions are compared to identify the collective outliers. The details are as follows.

For the sample data set f with normal labels, its distribution structure θS is simulated according to the following formula, where p represents the data distribution function, λ represents the parameters in the distribution function, and t represents a data instance:

$$D(n) = \lambda t_x + f_S(p + \theta_S) \tag{3}$$

For the actual sequence data, according to formula (3), the data distribution D(n) is fitted by the mixed distribution function model. The parameter ϕ represents the mixture distribution function, θs represents the distribution function of the collective outliers, θa represents the proportion of the collective outliers in the sequence data, B is the parameter in the mixture distribution function, and a is the parameter in the abnormal data distribution function. The distribution structure of the sample dataset is as follows:

$$D(n) = t_x\left[(1 - \varphi)f_s(p + \theta_s) + \lambda f_s(p + \theta_a)\right] \tag{4}$$

Further, the data series in the actual sequence data that differ significantly from the distribution structure D(n) of the sample dataset are identified as collective outliers. Assume a sample set x = {x1, x2, ..., xn} of m data objects with k clusters; let c = {c1, c2, ..., cn} be the known initial cluster centers of the sample set, with m < k, and let ds(xj, c) denote the shortest distance from a sample point in the sample set to the known initial cluster-center set. The shortest distances obtained from the sample set are arranged in ascending order, and the probability associated with the shortest distance from a point to the set is:

$$p(x_i) = \frac{\left|D(n) - \overline{D}(n) + m\right|^2}{k\sum_{x_j \in x} d_s^{2}\left(x_j, c\right)} \tag{5}$$
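The distance-based probability in formula (5) is used to pick initial cluster centers: points far from the existing centers are more likely to be chosen, in the spirit of k-means++-style seeding. The sketch below follows that idea with a plain squared-distance weighting; it is an interpretation of the text, not the authors' exact procedure.

```python
import random

def choose_next_center(points, centers):
    """Pick the next initial cluster center with probability proportional to the
    squared distance from each point to its nearest existing center."""
    def sq_dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))

    weights = [min(sq_dist(p, c) for c in centers) for p in points]
    total = sum(weights)
    if total == 0:                   # all points coincide with existing centers
        return random.choice(points)
    return random.choices(points, weights=weights, k=1)[0]

points = [(0.0, 0.0), (0.1, 0.2), (5.0, 5.1), (5.2, 4.9)]
centers = [(0.0, 0.0)]
print(choose_next_center(points, centers))   # very likely one of the far points
```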

In the process of data mining, the sample points are not rigidly assigned to categories by 0 or 1; instead, the probability that a data point belongs to different categories is introduced. Given a sample set of n data objects with c cluster centers vi and membership degrees uij, the cluster centers and the membership degree between each sample point and each cluster center are updated iteratively by minimizing the objective function, so that similar sample points are finally divided into the same cluster as accurately as possible and clusters composed of dissimilar sample points are separated as far as possible. The data feature mining objective function is defined as follows:

$$J = \sum_{i=1}^{n}\sum_{j=1}^{c} u_{ij}^{m} d_{ij}^{2} + \left\|p(x_i) - v_i\right\|^{2} \tag{6}$$

where m represents the fuzzy weighting coefficient, m ∈ [1, +∞), and in general m = 2. The cluster centers η and the membership matrix between sample points and cluster centers are updated iteratively with the Lagrange multiplier method, so that the objective function value gradually decreases until it converges to a minimum. The update formula of the membership matrix obtained with the Lagrange multiplier method is:

$$u_{ij}^{(l)} = \left(\frac{\left\|J - v_i^{(r-1)}\right\|}{\left\|p(x_i) - v_i\right\|^{2}}\right)^{-2/(m-1)} - \eta \tag{7}$$

It can be seen that this machine learning method, after repeatedly updating U and V in turn, finally finds the most suitable result that meets the target. The calculation process is simple, but there are still some shortcomings: the algorithm is affected by the choice of the initial cluster centers, and several methods of initializing the cluster centers can be adopted.

2.3 Implementation of Network Multi-source Data Mining

Self-organizing networks (SON) will become an important technology of 5G. 5G will be a heterogeneous network with integrated and cooperative coexistence of multiple systems. Technically, the coexistence of multi-layer and multiple wireless access technologies will lead to a very complex network structure; the relationship between the various wireless access technologies and the network nodes with different coverage capabilities is complex, and the deployment, operation and maintenance of the network will become very challenging work.


In order to reduce the complexity and cost of network deployment, operation and maintenance, and to improve the quality of network operation and maintenance, the future 5G network should support a more intelligent and unified SON function that can uniformly realize the joint self-configuration, self-optimization and self-healing of multiple wireless access technologies and coverage levels. Recently, SON technology for LTE, LTE-A, UMTS and WiFi has developed relatively well and has gradually begun to be applied in newly deployed networks. However, the existing SON technologies are all oriented to their respective networks and perform independent self-deployment, self-configuration, self-optimization and self-healing from the perspective of their own network; they cannot support collaboration between multiple networks. SON technology for heterogeneous networks therefore includes node self-configuration based on wireless backhaul, self-optimization in a heterogeneous system environment — such as coordinated optimization of wireless transmission parameters, coordinated mobility optimization, coordinated energy efficiency optimization and cooperative admission control optimization — as well as cooperative network fault detection and location across different systems, so as to realize the self-healing function.

The multi-source data mining of the machine learning acquisition and analysis network is designed and implemented according to the combined prediction model based on machine learning and data features. Figure 4 shows the functional architecture of the model.

Fig. 4. Functional structure of machine learning network data feature model

In general, the model has three main functional modules: the data acquisition module, the data analysis module and the user interaction module. The user interaction module is mainly divided into a data visualization sub-module and an interface presentation sub-module. The data collection module is divided into a search engine data collection module, a social network data collection module, a content filtering module and a data preprocessing module.


The data analysis module is divided into a single data source model training sub-module, a combined model training sub-module and a data preprocessing sub-module. The model also includes a log management sub-module and a user management sub-module. The comprehensive index finally obtained from the feature analysis of the search engine data is used to construct the corresponding prediction model. The basic flow of the prediction model is shown in Fig. 5.


Fig. 5. Basic process of data feature recognition and classification

The data feature recognition and classification module is responsible for model training and data prediction. This module includes three sub modules: single data source model training module, combined model training module and data preprocessing module. Multi source cloud data architecture usually adopts C/S structure, B/S structure, and combination of C/S and B/S. However, these three framework connection methods all need to use cloud computing database as the main bearing container of multi-source isomers, and require that the database must use the core server as the basic installation background, and all big data to be connected must maintain a tight connection with the front processor. For the multi-source cloud data interface, every program file output from the cloud computing database must enter the central processing layer structure through this structure. In order to avoid the accumulation of cloud data, each multisource cloud data interface is directly connected to the client processor, and under the promotion of an external power supply device, it provides continuous data support for cross-source scheduling of big data. When the basic connection form is not enough to meet the transmission requirements of multi-source heterogeneous big data bodies, some data will release the node information carried by itself before entering the cloud computing database, and transmit the information to SQL temporary storage through the multi-source cloud data interface structure to avoid the occurrence of scheduling efficiency reduction events due to data loss [4]. Traditional big data scheduling processes mostly use API interfaces, which can maintain the structural stability of big data multisource isomers to the greatest extent. However, with the continuous increase of the total amount of data to be scheduled, such interfaces cannot maintain a long-term continuous working state, which is easy to cause data accumulation, and is not conducive to in-depth integration of underlying network traffic. Because the training data required by


the combined model training module needs to wait for the prediction data given by the single data source model training module, the multi-source heterogeneous data formats are divided into four categories for interpretation by using the data conversion system during data processing, namely: text files, mainly including text format files; Binary file class, mainly including binary format files; GDAL file class, mainly including NetCDF format file, GrADS format file, HDF format file, remote sensing image file; MATLAB file class, mainly including Mat format files. Therefore, during the processing of the data analysis module, the data shall be preprocessed first, then the single data source model shall be trained, and finally the combined model shall be trained. The overall processing flow of the module is shown in Fig. 6.

Fig. 6. Optimization of abnormal data screening and positioning process
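The four format categories listed above (text, binary, GDAL-readable and MATLAB files) can be handled by a simple dispatch step before model training. The extension-to-category mapping below is a rough sketch based on the file types named in the text; the exact extensions and the handler names are assumptions.

```python
FORMAT_CATEGORIES = {
    # text files
    ".txt": "text", ".csv": "text",
    # binary files
    ".bin": "binary",
    # GDAL-readable files named in the text (NetCDF, GrADS, HDF, remote sensing images)
    ".nc": "gdal", ".grd": "gdal", ".hdf": "gdal", ".tif": "gdal",
    # MATLAB files
    ".mat": "matlab",
}

def categorize(filename):
    """Return the conversion category for a data file, or None if unsupported."""
    for ext, category in FORMAT_CATEGORIES.items():
        if filename.lower().endswith(ext):
            return category
    return None

for f in ["flow_2021.csv", "field.nc", "samples.mat", "dump.bin"]:
    print(f, "->", categorize(f))
```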

According to the data selection criterion, the ant colony algorithm is used to fuse the active-active data to obtain the fusion solution. The specific steps are as follows:
(1) Convert the data values collected by the two data centers into fuzzy values to obtain the local decision of the active-active data center;
(2) Calculate the distance between active-active data centers;
(3) Establish a distance matrix, thereby determining the matrix relationship;
(4) According to the determined result, obtain a directed graph;
(5) According to the directed graph, merge two active data centers into a new group and calculate the decision value. Use this decision value to replace the fused active data, and return to step (2) until the distance between active data centers is 0 and the maximum active data connection group is obtained.


So far, the integrated access to multi-source big data of the distribution communication network has been completed. In the training module for single data source models, the prediction models used are the prediction model based on search engine data, the prediction model based on social network data, and the prediction model based on official data; a multi-source regression algorithm is used in the first two, and an autoregressive moving average model is used in the prediction model for official data. The training results and prediction results of each model are stored in the database, and the data results are passed to the preprocessing module. The combined model training module uses the prediction results of the single data source models as its training set, which is used to train a gradient boosting decision tree algorithm; the final training and prediction results are stored in the database and passed to the preprocessing module. The data preprocessing module is mainly used for data conversion between models. In addition, the evaluation criteria need to be calculated according to the training and prediction results of each model, and the evaluation results are also stored in the database to facilitate the display in the subsequent interaction modules.

In order to ensure the effectiveness of abnormal data mining for data nodes in the security resource pool, the abnormal data feature collection algorithm is optimized [3]. During the collection of abnormal features of data nodes in the security resource pool, the minimum support sequence vector set is A = {A1, A2, ..., An}, where n represents the number of items in the minimum support sequence vector set, ε is a special sequence parameter in the set [4], δ is the data support degree in the set, κ is the special sequence item value of the abnormal features, and Ra is the support degree of the abnormal data feature sequence. Based on this, the least-support frequent itemset L is calculated as follows:

$$L = \lim_{\delta \to \infty} u_{ij}^{(l)} \cdot \frac{\varepsilon + (\delta - \kappa)^{n-1}}{A + R_a} \tag{8}$$

In the process of in-depth collection and query of data features, in order to ensure the effectiveness of data mining, it is necessary to further improve the automatic abnormal mining data center, so as to optimize the abnormal data mining process. Based on the above steps, the abnormal feature data of the data nodes is further mined. According to the initial clustering population value M collected by the data model, the individual characteristics of the optimal population are judged. If the number of characteristic clusters collected is j, the dynamic adjustment range of the abnormal data is:

$$\zeta = \lambda\,\frac{j - 1}{L - M} \tag{9}$$

Before a node j fails, its operation data is dynamically transferred to other nodes Ck(t) to ensure that the task is completed before that time, which requires:

$$C_j(t) = (L + \zeta) + C_k(t)\left[d_i - \xi_j(t) - t_0\right] \ge \omega_i(t) \tag{10}$$

It is known from the failure prediction model that at time ξj(t), node di will experience a resource failure, and the end time of the operation data of t0 is greater than ωi(t).


Based on the above algorithm, global optimization and mining are carried out for the differential node numerical operators of the abnormal data, and real-time perception of data resources based on a multi-objective genetic algorithm is realized over the distributed resources of multiple targets. As in a DAG parallel graph, there are data dependencies between running data items: for example, running data t1 may be the successor of running data t2 and t3. Running data t1 can be arranged on one virtual machine resource, and running data t2 and t3 assigned to other virtual machine resources, with predicted execution time Tpredexec(tj) and predicted communication time UTpred_eamm(tj); the data transmission between running data items is then completed over the communication link between the virtual machine resources, with communication time Tcomm(ti) and execution time Texec(ti). The parallel running data planning problem can then be described as follows: on virtual machine resource U, increase the reliability of the whole task and reduce the task schedule length:

$$\begin{cases} \max \sum_{i=1}^{m} \text{Priority}\; C_j(t) \\ T_{starl}\left(t_j, v_t\right) \ge T_{predexec}\left(t_j\right) + UT_{pred\_eamm}\left(t_j\right) \\ T_{starl}\left(t_j, v_t\right) = \max\left\{UT_{comm}(t_i) \mid t_i \in \text{Pred}\left(t_j\right)\right\} \\ T_{prederec}\left(t_j\right) = \max\left\{UT_{exec}(t_i) \mid t_i \in \text{Pred}\left(t_j\right)\right\} \end{cases} \tag{11}$$

The mean absolute error of the filled distribution network data reflects the real filling deviation: the smaller the mean absolute error, the better the filling result. Here yi represents the filled data and xi the corresponding actual data, so the mean absolute error over n samples is defined as:

$$\text{MAE}(y_i, x_i) = \frac{1}{n}\sum_{i=0}^{n-1}\left|y_i - x_i\right| \tag{12}$$

Between the filled data and the measured data, the sample benchmark deviation is selected as the prediction index; the smaller its mean value, the more accurate the result. Again, yi represents the filled data and xi the corresponding real data, so the root mean square error over n samples is defined as:

$$\text{RMSE} = \sqrt{\frac{1}{n}\sum_{i=0}^{n-1}\left(y_i - x_i\right)^{2}} \tag{13}$$
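Formulas (12) and (13) are the usual mean absolute error and root mean square error; a minimal sketch, with illustrative sample values:

```python
import math

def mae(filled, actual):
    """Mean absolute error between filled data y_i and actual data x_i (formula (12))."""
    n = len(filled)
    return sum(abs(y - x) for y, x in zip(filled, actual)) / n

def rmse(filled, actual):
    """Root mean square error between filled data y_i and actual data x_i (formula (13))."""
    n = len(filled)
    return math.sqrt(sum((y - x) ** 2 for y, x in zip(filled, actual)) / n)

y = [1.0, 2.1, 2.9, 4.2]
x = [1.0, 2.0, 3.0, 4.0]
print(mae(y, x), rmse(y, x))
```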

The normalization of the current multi-index can be divided into two kinds, linear and nonlinear, according to the characteristics of the indicators. This method can be used for both forward and reverse indices, but the converted index cannot be used to represent the correlation between the initial indices. Ratio conversion turns the forward and reverse indices into a positive index; although the resulting indices differ, it cannot reflect the interaction between the reverse indices well. Local abnormal feature numerical search based on the machine learning principle is carried out for the individual and group features of the collected abnormal data. In order to further accelerate mining convergence, the above algorithms need to be processed iteratively. Because the automatic mining of data node exceptions is a relatively complex and complete process, the mining steps are usually tedious, time-consuming and prone to deviation. Therefore, the process of abnormal feature data mining is optimized; the specific steps are shown in Fig. 7.

Fig. 7. Optimization of abnormal data feature mining steps

In the process of mining abnormal features, in order to minimize the complexity of data mining, the mining steps are further refined as follows:
1. Data preparation. Randomly collect characteristic values of data nodes in the security resource pool.
2. Select feature data. Mine the internal and external characteristic data information of the collected data.
3. Data preprocessing. The characteristic data is further analyzed and transformed, a corresponding analysis model is established, and a corresponding data center is established.
4. Data mining. Combine the feature data and the data center detection value to realize automatic mining, then obtain and output the data feature values, and analyze and define the dimension boundary of the abnormal-value characteristic boundary.

Based on constraint conditions and an iterative processing method, the numerical mining of abnormal data intrusion features in flow data is improved as follows:
1. Feature mining algorithm optimization. By optimizing the data feature mining algorithm, the speed and accuracy of data mining are improved.
2. SDN data center structure optimization. In order to better expand the depth of data mining, the data features are clustered in combination with the CAF principle and data feature collection standards.


3. Data cleaning. In order to ensure the rationality of data mining, it is necessary to perform routine denoising on the abnormal feature data to clean the data and ensure the effectiveness of data mining.
4. Data output. Output and store the collected and cleaned data, and check and repair abnormal data by comparing them with the characteristic values of standard data.

Based on the above methods, the automatic mining method for security resource pool data node exceptions can be effectively optimized, so as to achieve the research goal of improving the accuracy and effectiveness of data mining.
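The mining flow described in this subsection (data preparation, feature data selection, preprocessing, mining, followed by cleaning and output) can be strung together as a small pipeline. Every function below is a placeholder standing in for the corresponding step under assumed record fields and an assumed anomaly threshold; it is not an implementation of the authors' algorithms.

```python
def prepare_data(pool):
    """Step 1: randomly collect characteristic values of data nodes in the resource pool."""
    return list(pool)

def select_features(records):
    """Step 2: keep the characteristic fields of interest (assumed 'node' and 'value')."""
    return [{"node": r["node"], "value": r["value"]} for r in records]

def preprocess(features):
    """Step 3: clean and normalize the characteristic data to the [0, 1] range."""
    values = [f["value"] for f in features]
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0
    return [{**f, "value": (f["value"] - lo) / span} for f in features]

def mine_anomalies(features, threshold=0.9):
    """Step 4: flag feature values beyond a (hypothetical) normalized threshold."""
    return [f for f in features if f["value"] >= threshold]

pool = [{"node": "n1", "value": 3.0}, {"node": "n2", "value": 55.0}, {"node": "n3", "value": 4.0}]
print(mine_anomalies(preprocess(select_features(prepare_data(pool)))))
```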

3 Analysis of Experimental Results

In order to verify the practical value of the network multi-source data anomaly feature mining method based on machine learning designed in this paper, the following comparative experiments are designed. Two computers with the same configuration are used: one carries the traditional Bayesian multi-source data anomaly feature mining method as the control group, and the other carries the method of this paper as the experimental group. Before starting the experiment, the experimental parameters are set according to Table 1.

Table 1. Experimental parameter setting table

Experimental parameters | Experimental group | Control group
TAD/(T)      | 6.75 × 10^9 | 6.75 × 10^9
DSI          | 0.79        | 0.79
DSR/(T/s)    | 1.45 × 10^4 | 1.45 × 10^4
Dmt/(class)  | IV          | IV
KVS/(%)      | 97.57       | 97.57
AOT/(s)      | 47.96       | 47.96

First of all, the performance indicators of the two methods are tested, using the Pearson correlation coefficient r, the root mean square error RMSE, the root mean square percentage error, and the maximum absolute percentage error MAPE as indicators. The results are shown in Table 2.

Table 2. Evaluation results of application effects of different methods

Evaluating indicator | Traditional method | The method of this paper
Pearson correlation coefficient       | 0.4723 | 0.9845
Root mean square error (%)            | 1.1178 | 0.1088
Root mean square percentage error (%) | 6.0133 | 1.4259
Maximum absolute percentage error (%) | 5.0622 | 2.6311

It can be seen from Table 2 that, compared with the traditional method, the method in this paper performs better on the Pearson correlation coefficient, root mean square error, root mean square percentage error and maximum absolute percentage error tests. On this basis, the time needed to mine abnormal data features with the different methods is analyzed, and the results are shown in Fig. 8.

Fig. 8. Comparison of abnormal data mining time under different methods

Based on the above comparative detection results, it is not difficult to find that, compared with the traditional method, the method in this paper takes significantly less time to identify abnormal data in actual application. The abnormal data mining errors of the two methods in the same environment are further compared, and the specific detection results are shown in Fig. 9. Based on the detection results in Fig. 9, it is not difficult to find that, compared with the traditional method, the data anomaly feature mining method proposed in this paper has a significantly lower error rate and significantly better overall mining accuracy, fully meeting the research requirements. The mining effects of the different methods are further compared with the contour coefficient and standard mutual information as indicators, and the results are shown in Table 3.


Fig. 9. Error detection results of abnormal data mining by different methods

Table 3. Comparison of the mining results of the two methods

Dataset    | Traditional methods: contour coefficient | Traditional methods: standard mutual information | Method of this paper: contour coefficient | Method of this paper: standard mutual information
Seeds      | 0.688 | 0.598 | 0.833 | 0.768
Survival   | 0.699 | 0.606 | 0.846 | 0.816
Knowledge  | 0.679 | 0.589 | 0.928 | 0.748
Perfume    | 0.618 | 0.598 | 0.902 | 0.719

As shown in Table 3, compared with the traditional mining methods, the advantages of the method in this paper are that it first eliminates the randomness of the initial parameter selection, so that the clustering results are always unique, the clustering process is more stable, and the mining effect is better.

4 Conclusion

With the development of network technology, data sources are more complex and diverse, and there are fuzziness and uncertainty in them. Mining abnormal features from network multi-source data is very important for maintaining network security. For this reason, this research designs a method for mining abnormal features of network multi-source data based on machine learning. In this paper, a multi-source data feature recognition model is built on the machine learning principle, a network multi-source data anomaly feature screening model is then built on the basis of multi-source data feature classification, and good application results are achieved. In future research, the research scope will be further narrowed and accurate mining will be carried out for data anomaly features in feature sources.


References

1. Ren, F., Gao, C., Tang, H.: Machine learning for flow control: applications and development trends. Acta Aeronautica et Astronautica Sinica 42(04), 152–166 (2021)
2. Cheng, Y.: Anomaly data mining method for network platform based on improved clustering algorithm. Changjiang Information & Communications 35(04), 38–40 (2022)
3. Liu, A., Li, Y., Xie, W., et al.: Estimation method of line parameters in distribution network based on multi-source data and multi-time sections. Automation of Electric Power Systems 45(02), 46–54 (2021)
4. Lian, J., Fang, S., Zhou, Y.: Model predictive control of the fuel cell cathode system based on state quantity estimation. Computer Simulation 37(07), 119–122 (2020)
5. Zhang, Y., Lin, K., Feng, S.: Research on automatic data node anomaly mining method for security resource pool. Automation & Instrumentation (07), 73–76 (2020)
6. Li, K., Li, J., Shao, J., et al.: High-dimensional numerical anomaly data detection based on multi-level sequence integration. Computer and Modernization (06), 73–82 (2020)
7. Kang, Y., Feng, L., Zhang, J.: Research on subregional anomaly data mining based on naive Bayes. Computer Simulation 37(10), 303–306, 316 (2020)
8. Lei, J., Yu, J., Xiang, M., et al.: Improvement strategy for abnormal error of data-driven power flow calculation based on deep neural network. Automation of Electric Power Systems 46(01), 76–84 (2022)
9. Yang, Y., Bi, Z.: Network anomaly detection based on deep learning. Computer Science 48(S2), 540–546 (2021)
10. Li, L., Chen, K., Gao, J., et al.: Quality anomaly recognition method based on optimized probabilistic neural network 27(10), 2813–2821 (2021)

Data Anti-jamming Method for Ad Hoc Networks Based on Machine Learning Algorithm

Yanning Zhang(B) and Lei Ma

Beijing Polytechnic, Beijing 100016, China
[email protected]

Abstract. The current data anti-jamming methods lack the feature classification process, which leads to poor anti-jamming effect. In order to solve this problem, this paper proposes a data anti-jamming method based on machine learning algorithm for ad hoc networks. First of all, based on machine learning algorithm, the data transmitted in the ad hoc network is processed by feature mining and classification, and the ad hoc network information transmission management platform is constructed. Then optimize the steps of extracting the anti-jamming information features of the ad hoc network data, and combine machine learning algorithm to optimize the anti-jamming evaluation algorithm of the communication data of the internet of things, so as to achieve the identification and protection of the ad hoc network interference data. The experimental results show that this method has high practicability and can meet the research requirements. Keywords: Machine Learning Algorithm · Ad Hoc Network · Data Anti-Interference · Multichannel Transmission

1 Introduction

The use environment of the ad hoc network is relatively complex. In order to meet usage needs, multi-channel routing has become a popular form of ad hoc network, in which multi-channel transmission technology transmits the cells belonging to the same virtual connection to the opposite node. Most of today's ad hoc network switching devices treat the different channels of a channel group connected to the same node as separate entities, ignoring the interference between the channel group and the same pair of nodes. Even if the cell-level traffic changes, the problem that some channel cells are overloaded while the remaining channels are basically idle can only be avoided to a certain extent. In addition, data transmission in the ad hoc network suffers a certain degree of interference, which limits its application in high-frequency signal scenarios; in particular, in the absence of anti-interference measures, the false alarm rate increases greatly.


In order to improve the anti-jamming capability of ad hoc network data, relevant experts proposed an anti-jamming avoidance method for ad hoc network sensor data based on adaptive link and inter-symbol interference suppression. This method optimizes the deployment of sensor nodes on the basis of constructing a transmission channel model. According to the baud interval, the sensor sensing channel of the Internet of Things is equalized, and the filtering process is completed through inter-symbol interference suppression to complete data anti-interference. Some scholars have also established a transmission link model to complete interference filtering according to spread spectrum channel modulation, use fractional interval equalization to suppress multipath links in ad hoc networks, achieve load balancing control of ad hoc links through bit error rate feedback modulation, and complete data anti-interference control. Based on the above research, this paper proposes a new data anti-interference method for ad hoc networks based on machine learning algorithm. The idea is as follows: ➀ Based on machine learning algorithm, the characteristics of the data transmitted in the ad hoc network are mined, and the mining results are classified; ➁ Build an ad hoc network information transmission management platform, optimize the data anti-interference evaluation algorithm with machine learning algorithm, and realize the identification and defense of malicious interference data.

2 Anti-interference Method of Ad Hoc Network Data

2.1 Classification of Data Interference Types

Due to the wireless communication environment and its own characteristics, ad hoc networks have weak resistance to interference. The multi-channel transmission platform of the ad hoc network mainly consists of three modules: a micro control chip, an RF chip and sensors. The architecture of the multi-channel transmission platform of the ad hoc network is shown in Fig. 1.

Fig. 1. Schematic diagram of the architecture of the ad hoc network information transmission management platform

It combines the versatility and expandability of the ad hoc network data processing environment and can normally control the collection of highly confidential data,
so that the technical control effect is better. Therefore, in order to realize the versatility and scalability of machine learning technology, it is necessary to use machine learning as the basic framework to realize the control of massive high-confidential data clients in the ad hoc network, and to complete the control between the client and the server by managing different distributed servers. Therefore, the intelligent control of the collection of high-confidential data in the distributed network in the two-layer mode is realized, and the expandable space of the device application is provided. Therefore, the networked control architecture is designed as shown in Fig. 2.


Fig. 2. Data feature classification control architecture

It can be seen from Fig. 2 that the structure mainly includes four layers, namely, data acquisition layer, control layer, communication layer and application layer. Including: Data acquisition layer realizes data acquisition through different measurement and control equipment. The control layer accesses the real-time data in the networked database to realize the comprehensive control of the data. The communication layer is mainly composed of middleware. Good compatibility between different instruments and equipment is achieved through information transmission and interface processing. The returned information status can be obtained by processing the external equipment, and the good status information can be transferred to the database for storage. The application layer is the outermost basic application in the distributed networked control structure. It realizes the intelligent control of data collection by processing the data information collected by the middle layer.


For the management of different control equipment, each device must be analyzed according to its distribution in the entire network, and the information to be controlled is transmitted to the data processing center, which analyzes and processes it; two-way interactive communication ensures cooperation among multiple control devices. Due to the mobility of the ad hoc network, interfered nodes can frequently change their attack targets and perform malicious acts on other nodes, so it is difficult to track the malicious behavior of interfered nodes, especially in large ad hoc networks. Different types of nodes follow different rules when sending, so all neighbor types must be considered when selecting time slots and channels. The rules for selecting channels and time slots for the four types of nodes are shown in Table 1.

Table 1. Transmission channel and time slot selection rules

Node type | One-hop neighbor node type | Channel and slot selection
Normal node | Normal node | The first time slot is sent on the channel
Internal node | External node | The entire time slot is transmitted on the channel
Edge node | Internal node, edge node, connection node | The second time slot is transmitted on the channel
Connection node | Connecting node, normal node, edge node | The first time slot is sent on the channel; the second time slot is sent on the replacement channel
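For readability, the selection rules in Table 1 can also be written as a simple lookup. The sketch below is ours, uses only the labels from the table, and the function name and error handling are assumptions:

```python
# Illustrative sketch (not the paper's code): Table 1 as a lookup.
SLOT_RULES = {
    "normal":     "first time slot on the working channel",
    "internal":   "entire time slot on the working channel",
    "edge":       "second time slot on the working channel",
    "connection": "first time slot on the working channel, "
                  "second time slot on the replacement channel",
}

def select_slot(node_type: str) -> str:
    """Return the sending rule for a node type (see Table 1)."""
    try:
        return SLOT_RULES[node_type]
    except KeyError:
        raise ValueError(f"unknown node type: {node_type}")

print(select_slot("edge"))
```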

Threats from interfered nodes inside the network are much more dangerous than attacks from outside, and such attacks are difficult to detect because they come from nodes that behaved well before being compromised. Special attention should therefore be paid to threats from interfered nodes in the ad hoc network, and mobile nodes and infrastructure should not easily trust any node in the network, even one that appears to behave well, because it may already be infected. Since a mobile ad hoc network has no centralized management mechanism such as a server, several vulnerabilities arise. The lack of centralized management makes attack detection very difficult, because it is not easy to monitor traffic in a highly dynamic and large-scale ad hoc network, and it also hinders trust management among nodes. Some algorithms in mobile ad hoc networks depend on the cooperative participation of all nodes and the infrastructure; because there is no centralized authority and decisions are sometimes decentralized, attackers can exploit this vulnerability and execute attacks that undermine the collaboration algorithms. Network data protocols include TCP/IP, MICROSOFT and NOVELL. The specific basis for selecting a network protocol is shown in Table 2.


Table 2. Network data protocol selection basis

Protocol type | Selection basis
TCP/IP | Directly integrates with the data script to avoid untimely data adjustment
MICROSOFT | Promotes rapid identification operations and improves the quality of data transmission
NOVELL | Exists in parallel with the Microsoft protocol

In the process of data anti-interference, the lack of a centralized management mechanism leads to loopholes that may affect many aspects of mobile ad hoc network operation. Mobile ad hoc networks are built on open shared media, so interference sources can easily launch jamming attacks that reduce network performance, for example by directly paralyzing some nodes, increasing packet transmission time, and reducing the packet delivery ratio (PDR). The interference source usually attacks the network by sending invalid wireless signals or data packets, so that the communication channel is occupied or the received data packets are damaged. Interference sources can be divided into basic interference and advanced interference; basic interference can be further divided into active interference and passive interference, and advanced interference can be further divided into function-specific interference and intelligent hybrid interference, as shown in Fig. 3.

Fig. 3. Schematic diagram of the division of data interference types (basic interference: active jamming, e.g. continuous and random interference, and reactive jamming, e.g. RTS/CTS and data/ACK interference; advanced jamming: function-specific jamming, e.g. follow and impulse-noise interference, and intelligent hybrid jamming, e.g. control-channel and flow interference)

Because of its wireless environment and its own characteristics, the mobile ad hoc network has weak resistance to interference, so many anti-interference technologies and schemes have been studied to improve its communication reliability. For the multi-channel transmission platform of the Internet of Things, the integration of the sensor, the micro control chip and the self-organized network structure allows data to flow in and out of the platform. The micro control chip uses various algorithms to compare the collected data with the received data and then realizes data processing and fusion.


For the anti-interference problem of Internet of Things communication data, a point-to-point model should be used for the transmission platform.

2.2 Design of Anti-jamming Algorithm for Ad Hoc Network Data

Machine learning algorithms establish mathematical models based on sample data or training data, or interact with the environment, so that they can predict or make decisions without explicit programming. Existing machine learning algorithms can be classified by the expected structure of the model, mainly into supervised learning, unsupervised learning and reinforcement learning. In supervised learning, the algorithm has a labeled training data set, which is used to build a model representing the learned relationship between input, output and parameters. Unlike supervised learning, unsupervised learning does not need labeled data sets; its goal is to divide the sample set into different groups or clusters according to the similarity between input samples. Reinforcement learning algorithms learn through the interaction between agents and the environment, that is, online learning. Finally, some algorithms share characteristics of supervised and unsupervised learning and do not fit into these three categories; such hybrid algorithms are often referred to as semi-supervised learning and aim to inherit the advantages of the main categories while minimizing their weaknesses. Machine learning is a research area of artificial intelligence that aims to make computers learn autonomously like humans, thereby speeding up the processing of data; its ultimate goal is to gain knowledge from data. The machine learning model design generally consists of four parts: environment, learning element, knowledge base and execution element, as shown in Fig. 4.


Fig. 4. Machine learning model design diagram

The machine learning model is used to process the big data of the ad hoc network. First, machine-learning-based clustering of the ad hoc network link nodes uses iterative calculation theory as its core. l center points are determined from all the operating state data of the ad hoc network equipment so as to minimize the sum of the distances from the other data points to the cluster center points. The initial selection of the l cluster center points is highly random; as the initialization effect of the machine learning theory increases, these center points gradually move in a unified direction.


In order to reduce the number of clustering iterations of the link nodes in the ad hoc network, all operation data are allocated one by one to the class of the nearest center point, and one round of clustering is completed by repeatedly accumulating the distances between the data points and the center points. In each round, the position of the next center point l is determined by averaging the ad hoc network link nodes. When the coordinates of all center points remain unchanged, or the class range of each center point l no longer contains blank positions, the iteration stops and the machine-learning-based clustering of the ad hoc network link nodes is complete. The processing principle is expressed by the following formula:

F = \frac{\sqrt{l \cdot ht \cdot (s + p)^{d}}}{3k + g}   (1)

where F is the clustering result of the link nodes in the ad hoc network based on machine learning, h is the number of times the distances between data points and center points are accumulated, t is the change interval of the class range, k is the average value of the link nodes in the ad hoc network, g is the average value of the clustering weight, s is the iteration constant of the link data in the ad hoc network, p is the coefficient of the node processing constant, and d is the node parameter of the clustering processing. The principle of link node clustering in an ad hoc network based on machine learning is shown in Fig. 5.

Build the random matrix structure of the communication data of the ad hoc network, analyze the reception of the communication data, take the acquisition matrix of the received communication data as the autocorrelation matrix of the noise, analyze the characteristics of the matrix, solve the optimal anti-interference judgment threshold expression, and complete the design of the anti-jamming algorithm for ad hoc network communication data under multi-channel transmission. The machine learning channel-hopping strategy takes into account the channel selection and time-slot division rules for each type of node in broadcast mode; the time-slot and channel selection rules of nodes when receiving are the same as in unicast mode. Let the independent and identically distributed random variable of the ad hoc network communication data be m and the random variable matrix be n. The definition center and calibration constant are then described by:

\mu = \left(\sqrt{n - 1} + \sqrt{m}\right)^{2}   (2)

In order to improve the success rate of data transmission, the basic idea of this strategy is to replace the disturbed channel with the corresponding replacement channel. Before the strategy is enabled, the node switches to the channel indicated by the frequency-hopping pattern in every time slot to send and receive data. After the strategy is enabled, when the frequency-hopping pattern indicates a switch to the disturbed channel v, the internal nodes and edge nodes switch to the corresponding alternative channel, while the connecting nodes and normal nodes still switch to channel v. However, channel replacement alone will prevent the adjoining node from receiving packets from certain types of neighbors.


Fig. 5. Principle of clustering of ad hoc network link nodes based on machine learning
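A minimal sketch of the clustering step described above (ours, not the authors' implementation): l centers are drawn from the operating-state data, points are assigned to the nearest center, and the centers are re-estimated as class means until their coordinates stop changing. One-dimensional samples are used purely for illustration:

```python
# Illustrative sketch of the link-node clustering step (not the paper's code).
import random

def cluster_link_nodes(points, l, max_iter=100):
    centres = random.sample(points, l)              # random initial centres
    for _ in range(max_iter):
        groups = [[] for _ in range(l)]
        for p in points:                            # assign to nearest centre
            idx = min(range(l), key=lambda i: abs(p - centres[i]))
            groups[idx].append(p)
        new_centres = [sum(g) / len(g) if g else centres[i]
                       for i, g in enumerate(groups)]
        if new_centres == centres:                  # coordinates unchanged
            break
        centres = new_centres
    return centres, groups

# e.g. 1-D operating-state samples of ad hoc network link nodes
centres, groups = cluster_link_nodes([0.1, 0.2, 0.15, 0.9, 1.0, 0.95], l=2)
print(centres)
```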

The cumulative distribution function of the first-order distribution is expressed as F_1 and defined as follows:

F_1(t) = \exp\left(-\frac{1}{2}\int_{1}^{\infty}(\mu + v)\,du\right)   (3)

According to the random matrix characteristics of machine learning, the channel characteristics of the current communication can be fully understood, and the covariance matrix can verify the parameter attributes of the data received on the channel. A node can select channels by referring to its historical channel selections and its observations of the real channel state; it has been shown that the optimal channel decision under multiple channels is based on the statistics of all historical decisions and observations. The reasonable application of computer and network management can increase the transmission speed of information, improve the work efficiency of management personnel and of the various departments involved, and enhance coordination and communication among them. The long-distance wireless ad hoc network preferentially selects more distant nodes as forwarding nodes, which greatly reduces the number of hops in the transmission process, thereby reducing the packet distribution delay and the redundant information in the network and effectively improving the efficiency of data distribution control. By calculating the transmission distance of the data packet, the signal strength at the receiving node and the location information transmitted by the node can be obtained. The forwarding delay protocol is therefore set using the distance, so that the distribution delay of the receiving node is inversely proportional to the distance from the sending node.


The calculation formula of the node distribution delay is:

d = MDT \times \frac{S^{e} - X^{e}}{S^{e}}   (4)

where MDT is the maximum waiting time of the node, S is the signal transmission range of the node, X is the distance between the sending node and the receiving node, and e is a constant. Under this protocol, as the node density increases, the number of nodes actually taking part in distribution also increases. To solve the problem of the continuously increasing number of nodes, a forwarding probability that is inversely proportional to the node density is added to keep the number of distribution nodes constant, thus realizing low-delay, long-distance data distribution control for the wireless ad hoc network. When the model is initialized and no historical data are available for reference, the confidence vector is initialized; each element of the vector can be obtained from the Markov chain as follows:

\omega_{0}^{i} = \frac{p_{01}^{i}}{p_{01}^{i} + p_{10}^{i}}   (5)
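As a worked illustration of formulas (4) and (5), the short sketch below (ours; the numeric values are invented) computes the distance-based forwarding delay and the Markov-chain initialization of one confidence-vector element:

```python
# Sketch of formulas (4) and (5); parameter values are made up for illustration.

def forwarding_delay(mdt: float, s: float, x: float, e: float) -> float:
    """d = MDT * (S**e - X**e) / S**e  -- larger distance, smaller delay."""
    return mdt * (s ** e - x ** e) / s ** e

def initial_belief(p01: float, p10: float) -> float:
    """omega_0 = p01 / (p01 + p10): stationary probability of a two-state Markov chain."""
    return p01 / (p01 + p10)

print(forwarding_delay(mdt=0.5, s=100.0, x=80.0, e=2.0))   # example values
print(initial_belief(p01=0.3, p10=0.1))
```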

The anti-interference function of multiple channels can be realized through the Myopic channel selection scheme. However, this method has a disadvantage: only when the channel state transitions are positively correlated can it achieve performance close to the optimal strategy for any number of channels; when the channel state transitions are negatively correlated, that is, p1 < p0, it can match the performance of the optimal strategy only when the number of channels is 2 or 3.

2.3 Realization of Anti-interference of Ad Hoc Network Data

During data transmission in the ad hoc network, two situations occur: primary user data are present, or only noise data are present. The former indicates that the channel is occupied, and the latter that it is idle. If s(k) is the directionally received data energy in the ad hoc network, l is the energy information of the data, n(k) is the energy information of white Gaussian noise, the occupied channel state is H_1 and the idle state is H_0, the received data are expressed as:

x(k) = \begin{cases} l_{\max} \cdot (s(k) + n(k)), & H_1 \\ l_{\min} \cdot n(k), & H_0 \end{cases}   (6)

Assuming that there are M directional channels at the receiving end of the ad hoc network, the data x(k) collected at the receiving end at time k are described by:

x(k) = A(\theta) s(k) + n(k)   (7)

where, for k = 1, 2, \ldots, M, A(\theta) denotes the direction matrix of the M directional channels used to collect user data, s(k) is the energy of the primary user data at a given moment in the collected data, and n(k) is the energy of the noise data.


Because the communication data are always disturbed by noise to some degree, and on the premise that γ holds, the false alarm probability is used to distinguish the communication data types, described by:

\frac{\rho_{\max} - \rho_{\min}}{E^{*}(N)} > \gamma   (8)

Based on the characteristic analysis of the mean value M, the false alarm probability P_f is expressed as:

P_f = \phi\left(\frac{4M}{\sqrt{2\gamma}} - \frac{NM}{2}\right)   (9)

where \phi(\cdot) is the probability integral function. The delay mainly occurs in the information collection process between the master and the slave clock. The slave clock sends a delay information request packet to the master clock; unlike the synchronization data packet, which is sent with a period of 2 m, these information packets are sent only when a delay request occurs. The specific process is shown in Fig. 6.


Fig. 6. Data packet sending situation

If a network delay occurs when the slave clock sends data packets, the current delay is recorded promptly, the transmission time of the packet from the slave clock to the master clock is calculated, and a delay request packet is sent; the current time T_1 is recorded at the moment of transmission. When the master clock receives the delay request packet, the time T_2 is recorded immediately. Since T_2 is needed for the transmission delay calculation at the slave clock, the master clock sends a delay request response packet back to the slave clock. The overall delay is:

T = T_1 - T_2   (10)

Since packet transmission is symmetrical, the transmission time of data packets from the master clock to the slave clock is the same as that from the slave clock to the master clock, so the network transmission delay can be calculated.


When the slave clock receives the time stamp T_1, the offset error between the master and slave clocks can be estimated as:

offset = T_1 - T_0 - delay   (11)

where T_0 is the time at which the slave clock receives the synchronization packet, and delay is the delay error between the master and slave clocks.
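The delay and offset estimation of formulas (10)–(11) amounts to simple timestamp arithmetic, as in the following sketch (ours; the timestamps are invented and the sign convention follows the formulas as given):

```python
# Sketch of the delay/offset calculation in formulas (10)-(11).

def clock_offset(t0: float, t1: float, t2: float) -> tuple:
    """t0: slave receives sync packet, t1: slave sends delay request,
    t2: master receives delay request (all in seconds)."""
    delay = t1 - t2            # overall delay as defined in formula (10)
    offset = t1 - t0 - delay   # master-slave offset, formula (11)
    return delay, offset

print(clock_offset(t0=10.000, t1=10.004, t2=10.006))
```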

The adaptive connection of the ad hoc network equipment integrates and processes the relevant operating-state data according to the development requirements of machine learning theory. On the premise of meeting the automatic maintenance requirements, these data are arranged according to the current network status of the ad hoc network equipment, so as to improve the detection efficiency of the operating data. Among the adaptive connection methods of ad hoc network equipment, direct access, network access and private-line access are three common communication modes. Direct access can fully connect each communication node and strictly limits the maintenance access mode of the running-state data; private-line access needs to occupy one or more data transmission channels and changes the original state of these data. Machine learning theory is applied to break up the state data of the equipment in the ad hoc network, and the scattered data are combined, head to tail in order, into a new state maintenance ring of the ad hoc network equipment. The detailed adaptive connection principle of the ad hoc network data is shown in Fig. 7.


Fig. 7. Detailed diagram of the principle of self-organizing network data adaptive connection

On the basis of the adaptive connection, the data anti-interference implementation process is designed as shown in Fig. 8.


Fig. 8. Data anti-jamming implementation process

The specific implementation of the algorithm is as follows. Assuming that there are K convolution kernels and N output categories, the weight parameter θ of the output layer is an A × B matrix, θ ∈ C^{A×B}, and the feature of the pooled sample X is a K-dimensional vector, f ∈ C^{K}. The probability that sample X is assigned to category Y is:

P(Y \mid X, C) = \frac{e^{(c_y \cdot f + q_y)}}{\sum_{h=1}^{N} e^{(c_h \cdot f + q_h)}}   (12)

where q_h is the h-th bias term of the fully connected layer. The loss function is obtained by maximizing the likelihood:

W = -\sum_{y \in R} \log p\left(g_y \mid x_y, \theta\right)   (13)

where R is the training data set and g_y is the true type of the y-th sample. To prevent over-fitting, the ad hoc network structure is simplified according to a certain probability so that some weights are dropped. After feature compression, the internal state and behavior control of the data can operate freely on the basis of a stable database storage space; through the above process, parallel recommendation of diverse key data is realized. According to the networked control architecture, the data acquisition layer, control layer, communication layer and application layer are analyzed, and, according to the distribution of the different test equipment in the whole network, data acquisition and control are studied with the goal of comprehensive control of data acquisition.
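For clarity, the output-layer classifier of formulas (12)–(13) can be written out directly. The fragment below is our own illustrative sketch, not the authors' implementation; the toy weights and samples are invented:

```python
# Sketch of formulas (12)-(13): softmax probability and negative log-likelihood.
import math

def class_probability(f, C, q, y):
    """P(Y=y | X) = exp(c_y.f + q_y) / sum_h exp(c_h.f + q_h)."""
    scores = [sum(ci * fi for ci, fi in zip(row, f)) + qh
              for row, qh in zip(C, q)]
    z = sum(math.exp(s) for s in scores)
    return math.exp(scores[y]) / z

def nll_loss(samples, C, q):
    """W = -sum_y log p(g_y | x_y, theta) over the training set R."""
    return -sum(math.log(class_probability(f, C, q, g)) for f, g in samples)

C = [[0.2, -0.1], [0.4, 0.3]]          # weight matrix theta (2 classes, K = 2)
q = [0.0, 0.1]                         # bias terms q_h
print(nll_loss([([1.0, 0.5], 1), ([0.2, -0.3], 0)], C, q))
```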


The pin function of the LPC2292 controller is configured as a GPIO bus expander to design the data acquisition process, the collected data are stored in the database for comprehensive control, and timed reads and writes are completed according to the ARM static storage control mechanism. The specific implementation process is as follows:

Step 1: divide the control program into three parts: the data acquisition module, the control read-write module and the bus communication module. The bus communication module is located at the top layer, the data acquisition module and the control read-write module are located at the bottom layer, and the data transmitted through the bottom-layer modules are used for integrated digital signal processing.

Step 2: for the data acquisition module, the parallel clock drive ADC_CLK must be controlled; the data collected on the read edge are required to be stable and reliable. An FPGA (field programmable gate array) serves as the driver for the two channels. The collected data are preprocessed to filter out the 50 Hz interference before they can be written into memory.

Step 3: the chip enable signal CE selected for the control read-write module is active low. When reading the CY7C1021DV33 ARM static memory, attention must be paid to the OE signal: it is first pulled low and then raised, and data writing is completed along the signal edge.

Following the above process, low-delay intelligent control of the data in the ad hoc network is realized.

3 Analysis of Experimental Results

In order to verify the practical performance of the data anti-interference method based on the machine learning algorithm, the following experiments are designed. The simulation environment is an Intel Core i5-2520M processor with 6 GB of memory running Windows XP, and MATLAB 2013 is used as the experimental platform. The multi-channel transmission parameters of the ad hoc network adopted in the experiment are shown in Table 3. The data size used in the experiment is 1000 Mb.

Table 3. Multi-channel transmission parameters

Parameter name | Value
Primary carrier frequency | 20000 Hz
Load wave frequency | 1600 Hz
Data signal symbol rate | 1300 Baud
FSK modulation index | 0.6
Channel environment | White Gaussian noise


To make the comparison more convincing, the method in this paper is compared with the traditional anti-interference avoidance method for ad hoc network sensor data based on adaptive links and inter-symbol interference suppression. First, the distribution of data by the different methods is verified; the results are shown in Fig. 9.


Fig. 9. Comparison results of two distribution data quantities

According to the results in Fig. 9, the method in this paper distributes data in the ad hoc network more effectively and the distribution process is more stable, whereas the distribution process of the traditional method is less stable and the distributed data volume fluctuates greatly. On this basis, the application performance of the method is verified: an LRT channel is used for the test, and the anti-interference process simulation of the method is obtained; the results are shown in Fig. 10. In Fig. 10, the upper graph is the successful packet delivery rate per cycle of the method in this paper, the lower graph is the cumulative reward curve, and the abscissa is the training time step. Figure 10 illustrates the effectiveness and practicability of the method in the multi-channel state. Next, the interference frequency of the receiver and transmitter is kept unchanged, and the network data of the transmitter change with the interference frequency; in this process the maximum value of the transmitter network data is recorded. Under three different coupling coefficients (0.004, 0.005, 0.018), the relationship between the ad hoc network data and the interference frequency after applying the method in this paper is shown in Fig. 11. It can be seen from Fig. 11 that the interference frequency is uniformly distributed around 202 kHz, and the change caused by the interference data is relatively small. According to these changes in the transmitter network data, the operating interference frequency of the transmitter is adjusted according to the minimum value to ensure that the transmitter and the receiver are matched, so that the experimental results are more accurate. Based on the comprehensive analysis of the above experiments, the data anti-interference method proposed in this paper based on the machine learning algorithm has high practicability and effectiveness in practical application and fully meets the research requirements.


Fig. 10. Simulation diagram of interference perception and anti-interference of ad hoc network under multi-channel

Fig. 11. The relationship between data and interference frequency under different coupling states

4 Conclusion

With increasing policy support, ad hoc networks are favored in more and more multi-channel network environments, and the interference generated during communication includes both man-made and natural sources.


To improve the communication performance of ad hoc networks, this paper proposes a machine-learning-based anti-jamming method for communication data. The method can effectively suppress data interference and optimize data transmission performance.

References

1. Jia, F.: Anti-jamming algorithm for IoT communication data under multi-channel transmission. Comput. Simul. 37(12), 122–126 (2020)
2. Zhan, L., Liu, Y., Zeng, J., et al.: Multi-node ad hoc network wireless data transmission system in highway environment. China Meas. Testing Technol. 48(S1), 185–188 (2022)
3. Zhao, B., Ji, W., Weng, J., et al.: Trusted routing protocol for flying ad hoc networks. J. Front. Comput. Sci. Technol. 15(12), 2304–2314 (2021)
4. Liu, Z., Xue, M., Yang, L., et al.: Research and application of wireless differential protection for a distribution network based on a regional ad-hoc network. Power Syst. Prot. Control 49(21), 167–174 (2021)
5. Cai, J.: Simulation of anti-jamming recommendation algorithms for massive transaction data. Comput. Simul. 38(6), 311–314+438 (2021)
6. Li, B., Liu, X., Feng, J.-C., et al.: V2V data transmission mechanism and routing algorithm in 5G cellular network-assisted vehicular ad-hoc networks. J. Univ. Electron. Sci. Technol. China 50(3), 321–331 (2021)
7. Xu, Y., Guo, H.: Research on text data privacy protection method based on random interference. J. Beijing Inf. Sci. Technol. Univ. 36(1), 51–56 (2021)
8. Lin, T., Wu, Y., Zhu, R., et al.: Study of radio frequency interference mitigation method based on wavelet transform. Acta Astronomica Sinica 62(3), 95–104 (2021)
9. Su, G., Li, G., Fan, C., et al.: Research on remote monitoring method of massive fault panoramic data in power grids. Adv. Power Syst. Hydroelectric Eng. 38(3), 42–46+54 (2022)
10. Xian, J., Zhang, Z., Zhan, L., et al.: Anti-jamming prediction method of dual frequency abrupt signal in electronic communication network. Comput. Simul. 39(8), 403–406+518 (2022)

A Data Fusion Method of Information Teaching Feedback Based on Heuristic Firefly Algorithm

Yuliang Zhang(B) and Ye Wang

Wuxi Vocational Institute of Commerce, Wuxi 214153, China
[email protected]

Abstract. In order to obtain more accurate teaching feedback information, an information-based teaching feedback data fusion method is designed based on the heuristic firefly algorithm. The entity network of the data to be fused is established, the characteristic matrix of the feedback data is built, and the correlation degrees of parameters such as grade score, individual score, class score, and score rate of each question are obtained. The cosine similarity and basic similarity of different feedback data are calculated, and the data fusion entity framework is constructed on this basis. The data fusion method is designed based on the heuristic firefly algorithm, with dynamic control of the algorithm's parameters, to realize information-based teaching feedback data fusion. The experimental results show that the number of data categories to be fused is proportional to the fusion time: the more data categories to be fused, the longer the fusion takes.

Keywords: Heuristic Firefly Algorithm · Teaching Feedback · Information-Based Teaching · Data Fusion

1 Introduction

Teaching evaluation measures the teaching process and its results against teaching objectives, according to scientific standards and using all effective technical means, and gives a value judgment. It includes the evaluation of students' academic achievements, the evaluation of teachers' teaching quality, and curriculum evaluation. The information obtained from the evaluation enables teachers and students to understand their own teaching and learning situation: teachers can revise teaching plans and adjust teaching methods according to the feedback, and students can improve their learning methods and strengthen weak links so as to achieve their learning goals. Teaching feedback, that is, the feedback of teaching information, refers to the process in which both sides of education evaluate each other and transmit the evaluation information to each other during teaching; it is an important means of promoting students' growth and teachers' professional development and of improving teaching quality [1]. In teaching activities, students' homework, test papers, behavior, expressions, language and even the classroom atmosphere are all forms of teaching feedback and serve as the basis for monitoring and regulating the teaching process.


Similarly, teachers' requirements for and evaluation of learning activities are fed back to students as the basis for checking and regulating their own learning behavior. Carrying out teaching feedback scientifically and effectively is a basic component of modern teaching, and its realization depends on the accuracy and timeliness of the information obtained by both sides of education. Information-based teaching data mainly include overall objective data and individual objective data. The overall objective data include grade scores, average scores, class scores, individual scores, grade excellence rate, pass rate, low score rate, score rate of each question, discrimination, and so on; the individual objective data include the individual score of each question, individual score changes, individual knowledge-point mastery, error analysis, and so on. In order to judge whether the teaching process suits the actual situation, an objective, scientific and quantifiable data fusion system must be designed. In this paper, the heuristic firefly algorithm is used to fuse the feedback data of information-based teaching: based on the calculation of the correlation degree of the fusion data, a feedback data fusion method for information-based teaching is designed. Through the application of the heuristic firefly algorithm, we hope to obtain more accurate teaching feedback results.

2 Establish Data Entity Network to Be Integrated

2.1 Calculation of Fusion Data Correlation

In recent years, the amount of information-based teaching feedback data has been rising rapidly, and most of the data are saved digitally. Faced with massive digital resources, quickly and accurately mining the content of these data so as to make the most of them has become a major challenge in information-based teaching research. When studying a piece of information-based teaching feedback data, we make inferences about the data: to understand the world, people need to distinguish different things and understand the similarities between things, while computers need unsupervised learning methods to make such distinctions [2, 3]. The disorder of information-based teaching feedback data poses a great challenge to data analysis. Here, we describe the feedback data by their four most representative characteristics: grade score, individual score, class score, and the score rate of each question, and obtain the corresponding feedback characteristic information. In order to express the classified feedback data in a unified form, we establish the characteristic matrix of the feedback data as formula (1):

K_m = \begin{bmatrix} a_1 & a_2 & a_3 & \cdots & a_m \\ b_1 & b_2 & b_3 & \cdots & b_m \\ c_1 & c_2 & c_3 & \cdots & c_m \\ d_1 & d_2 & d_3 & \cdots & d_m \end{bmatrix}   (1)

where K_m is the characteristic matrix of the feedback data, and a_m, b_m, c_m and d_m are the grade score, individual score, class score and score rate of each question in the m-th data item [4].
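As a small illustration (ours; the sample values are invented), the 4 × m characteristic matrix K_m of formula (1) can be assembled directly from the four feature types:

```python
# Illustrative sketch: building the 4 x m feature matrix K_m of formula (1).
grade_scores      = [82.0, 75.5, 90.0]     # a_1 ... a_m
individual_scores = [78.0, 80.5, 88.0]     # b_1 ... b_m
class_scores      = [79.5, 77.0, 86.5]     # c_1 ... c_m
question_rates    = [0.71, 0.64, 0.83]     # d_1 ... d_m (score rate per question)

K = [grade_scores, individual_scores, class_scores, question_rates]

for row in K:                              # one row per feature type
    print(row)
```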


For the feedback characteristic information matrix, which includes the grade scores, individual scores, class scores and per-question score rates of the data, the correlation degree a_p of the grade scores, b_p of the individual scores, c_p of the class scores and d_p of the per-question score rates are obtained from the four types of characteristics, and the correlation degree P_k between the matrices is obtained by weighting the four correlation degrees, as in formula (2):

P_k = \frac{\left(f_s + \frac{1}{t_1 + t_2}\right)^{2}}{f_s + \frac{1}{t_1 + t_2}}   (2)

where P_k is the fusion correlation degree of different information-based teaching feedback data, f_s is the correlation coefficient of the feature information, and t_1 and t_2 are the fusion weights of the correlation degree. When calculating the correlation degree, traditional methods rely heavily on expert scoring; the fusion method designed in this paper uses an objective weighting method, which is only applicable to fixed data samples, so for extensible data the model must be updated constantly and the time complexity is large.

2.2 Building an Entity Framework for Data Fusion

This paper proposes an entity matching framework that integrates structural information for multi-source heterogeneous data. The framework combines structural similarity with text similarity to measure the similarity of multi-source heterogeneous entities more comprehensively and can quickly perform entity matching. The method is divided into three modules, whose functions are as follows. The first is the data set preprocessing module, which cleans, sorts and format-converts the original data and generates the standard-format data required by the subsequent steps. In a multi-source heterogeneous data environment, a good data processing mode is the best start: in knowledge fusion, entity matching and data quality assurance are crucial, and the quality of the original data affects the final entity matching results. The purpose of preprocessing is to ensure the data quality of the input to the subsequent steps [5, 6]. Data quality can be measured along several dimensions, including whether the data format is standardized, whether the data are complete, and whether the data are accurate and timely; preprocessing guarantees quality mainly by ensuring the uniformity and regularity of the data format and by minimizing missing values, which is the basic guarantee for the successful completion of subsequent tasks. The second module builds the data set into a label graph, which models the structural relationships between different entities and different attributes in the data set. In this study, the label graph is built on the database; combining the structural information in the graph, entity matching can be performed better. The entity matching method is introduced further in Chapter 4. Building the label graph is a core step of the method in this chapter.


How to match entities on the database is a challenging task. The third module is the evaluation of entity matching, which ultimately determines which entity records are matched based on the text similarity and the attribute similarity reflected by the label graph. The data are taken from the constructed label graph; when entity matching is required, it is evaluated by combining the graph structure information and the semantic attributes, and if the evaluation meets the standard, the entities are matched [7]. In this step, the similarity evaluation for entity matching is set, and knowledge fusion is implemented after the evaluation. Finally, the fused knowledge is presented on the Neo4j database and stored in the form of a graph structure. The MATCH syntax is used to read data from the database and store all data in a temporary list; this step turns the problem into a Cypher query statement, which must return different results according to the intention. The method is highly extensible at this step, because the extracted nodes have attributes and relationships, which can be used for the integration of relationship information and the integration and extraction of node attributes. At this point, the cosine similarity and basic similarity of the structural information can be obtained, and the comprehensive similarity is given by formula (3):

b_{xyz}(p_i, t_i) = \begin{cases} \frac{1}{x_i}, & \text{if } d_s = 1 \\ \frac{1}{y_i}, & \text{if } d_s = 2 \\ \frac{1}{z_i}, & \text{if } d_s = 3 \end{cases}   (3)

where b_{xyz}(p_i, t_i) is the comprehensive similarity of the x, y, z data under the two basic parameters p_i and t_i; x_i, y_i and z_i are the encoding vectors of the sampled data; and d_s is the embedded entity record result. With the above formula, the ACM and DBLP entity nodes in the DataFrame can first be matched, the entity nodes merged, and their relationships retained [8, 9]. Although relationships could also be fused, this paper focuses only on entity matching, so the relationships are kept in the label graph output after fusion. It can be clearly seen that entity matching ultimately assists the completion of knowledge fusion.
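The matching decision described above can be sketched as a weighted combination of text similarity (cosine) and label-graph structural similarity. The fragment below is our own illustration; the weight, threshold and vectors are assumptions, not values from the paper:

```python
# Illustrative sketch of combining text and structural similarity for matching.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def match(text_vec_a, text_vec_b, struct_sim, w_text=0.6, threshold=0.8):
    """Weighted combination of text and label-graph (structural) similarity."""
    score = w_text * cosine(text_vec_a, text_vec_b) + (1 - w_text) * struct_sim
    return score >= threshold, score

ok, score = match([0.9, 0.1, 0.3], [0.85, 0.12, 0.28], struct_sim=0.75)
print(ok, round(score, 3))
```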

3 Data Fusion Method Design

Although the existing controller placement algorithm can obtain an approximately optimal controller placement scheme in a limited time, its poor search breadth in the early stage and low convergence accuracy in the late stage mean that the dynamic scheme it produces is not ideal in terms of the integrated transmission delay and controller load indexes. In this chapter, the firefly-based algorithm is improved in two respects: static parameters and initial parameters.


3.1 Parameter Dynamic Control of Firefly Algorithm

The initial settings of the three key parameters α, β and γ in the firefly algorithm are static, but these parameters largely determine the performance of the algorithm. In the early stage, the randomness should be increased to improve the search breadth; in the later stage, the randomness should be reduced to improve the convergence accuracy. This section therefore introduces a dynamic parameter strategy that improves the algorithm in three respects: dynamic light attraction, dynamic attraction factor and dynamic random parameters. The dynamic light attraction determines how attractive different deployment schemes are to other deployment schemes during the execution of the algorithm. To improve the global search ability and avoid falling into a local optimum, the attractiveness of deployment schemes with small differences should be reduced in the early stage of the algorithm and that of schemes with large differences increased. Appropriate constants g and p are selected, where g divides the algorithm into an early and a late stage and p is the static parameter of the late stage. The dynamic light attraction parameter is expressed by the piecewise function in formula (4):

R(\alpha) = \begin{cases} g \times T_i - \alpha, & \alpha \le g \times T_i \\ p, & \alpha > g \times T_i \end{cases}   (4)

where R(α) is the piecewise function of the dynamic light attraction parameter and T_i is the number of iterations of the algorithm. After the optimization goal is determined, each deployment scheme converges towards it. To improve the early random search ability of the algorithm, avoid local optima, and improve the convergence speed in the later stage, a deployment-scheme attraction factor based on the number of iterations is proposed, as in formula (5):

R(\beta) = \begin{cases} \dfrac{\beta}{T_i}, & \beta < g \times T_i \\ p, & \beta \ge g \times T_i \end{cases}   (5)

where R(β) is the piecewise function of the dynamic attraction factor, and g and p are as defined above. The essence of the random parameter is to provide a perturbation for each deployment scheme while it converges towards the optimization goal, so as to enhance the global search capability. A dynamic random parameter based on the number of iterations is proposed, which achieves better convergence accuracy in the later stage of the algorithm, as in formula (6):

R(\gamma) = \frac{h_u \times (\gamma - g \times T_i)}{T_i}   (6)

where R(γ) is the value of the dynamic random parameter and h_u is the perturbation parameter of the algorithm.
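A compact sketch (ours) of the dynamic parameter schedules in formulas (4)–(6); the constants g, p and h_u are chosen here only for illustration:

```python
# Sketch of the dynamic parameter schedules of formulas (4)-(6).

def dynamic_alpha(alpha, t_i, g=0.5, p=0.05):
    """Dynamic light attraction R(alpha), piecewise in the iteration count."""
    return g * t_i - alpha if alpha <= g * t_i else p

def dynamic_beta(beta, t_i, g=0.5, p=0.05):
    """Dynamic attraction factor R(beta)."""
    return beta / t_i if beta < g * t_i else p

def dynamic_gamma(gamma, t_i, g=0.5, h_u=0.1):
    """Dynamic random (perturbation) parameter R(gamma)."""
    return h_u * (gamma - g * t_i) / t_i

for t in (10, 100, 1000):                 # early vs. late iterations
    print(dynamic_alpha(2.0, t), dynamic_beta(2.0, t), dynamic_gamma(2.0, t))
```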


Through the above three formulas, an approximately optimal controller placement scheme is found iteratively. First, the placement scheme of each parameter and the moving target are confirmed, and the control coordinate vector of each scheme is updated with the improved coordinate vector. Then, the coordinate vector of the updated placement scheme is calculated, and the comprehensive transmission delay and the mapping relationship are obtained. Next, whether the load of each parameter-optimized scheme meets the preset conditions is judged, and schemes that do not satisfy the load constraints are regenerated randomly. Finally, the control coordinates with the lowest delay value and the corresponding mapping relationship are output.

3.2 Design Data Fusion Algorithm

In order to verify the effectiveness of the controller placement algorithm based on the improved firefly algorithm, random control coordinates can be generated according to the specific topology information after initializing the control scheme, and the mapping relationship and comprehensive transmission delay of the initial control scheme can be calculated at the same time. The control algorithm is determined according to the input, and an approximately optimal control placement scheme in the entire solution space can be obtained by iteration. The data fusion algorithm is designed as shown in Fig. 1.


Fig. 1. Algorithm flow


First, the optimal and worst controller placement schemes under the current iteration are calculated, and the leadership strategy is used to update the coordinate vectors of the controller placement schemes in the leading group. Secondly, for a controller placement scheme that is not in the leading group, another placement scheme is selected at random as the reference, and the update strategy of the individual position is determined according to the current scheme [10]. In this process, the loss function is shown in formula (7):

f(x) = \sum_{i=1}^{n}\sum_{j=1}^{m} p_{\min} \times f_{ij}^{2}   (7)

where f(x) is the loss function of the algorithm after fusion, p_min is the minimum weighting index between fuzzy classes, and f_ij is the correlation degree of the fusion center matrix. The correlation between the set fuzzy threshold and the initial fusion centers is constrained: the initial fusion centers are selected so that the distances between them are greater than the set threshold, so that the centers can be analyzed in multiple feasible regions in the subsequent fusion algorithm. This avoids the situation in which the algorithm easily converges to a local minimum when the initial fusion centers are chosen at random.
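The iterative search just described can be summarized in a few lines. The following sketch is ours and deliberately generic: the loss stands in for the delay objective of formula (7), and the load constraint, step size and dimensions are placeholders rather than values from the paper:

```python
# Generic sketch of the iterative placement search: schemes move towards the
# best one, infeasible schemes are regenerated, the lowest-loss scheme wins.
import random

def loss(scheme):                      # placeholder for formula (7) / the delay
    return sum(x * x for x in scheme)

def feasible(scheme, load_limit=4.0):  # placeholder load constraint
    return sum(abs(x) for x in scheme) <= load_limit

def random_scheme(dim=3):
    return [random.uniform(-1, 1) for _ in range(dim)]

def search(n_schemes=10, iters=50, step=0.3):
    schemes = [random_scheme() for _ in range(n_schemes)]
    for _ in range(iters):
        best = min(schemes, key=loss)
        for s in schemes:              # move every scheme towards the best one
            for k in range(len(s)):
                s[k] += step * (best[k] - s[k]) + 0.01 * random.uniform(-1, 1)
        schemes = [s if feasible(s) else random_scheme() for s in schemes]
    return min(schemes, key=loss)

print(search())
```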

4 Experimental Research

4.1 Experimental Environment and Experimental Data Set

The purpose of this experiment is to test the effectiveness of the information-based teaching feedback data fusion method designed above based on the heuristic firefly algorithm. In terms of hardware, the experimental environment is a single PC with an Intel Core i7-8700 3.20 GHz CPU and 32 GB of memory, running Windows 10 Professional. In terms of software, the experimental program is coded in the MATLAB language on the MATLAB R2017a platform, and the experimental results are analyzed numerically on the same platform. Before the experiment, in order to analyze the performance of the algorithm quantitatively, real information-based teaching feedback data sets are needed, including the training data set required for training and the test data set required for verification. In this paper, data are collected from the main Internet information-based teaching websites to build the experimental data sets. A large number of representative teaching feedback data were collected, and 1000 groups of representative data and their corresponding teaching feedback feature information were manually selected as the data source for the data fusion experiment of the heuristic firefly algorithm. Each group of data includes two different types of teaching feedback data and their corresponding characteristic information matrices. The 1000 groups of data are divided into two parts: 500 groups are used as the training data set of the heuristic firefly algorithm and 500 groups as the test data set.


The collected characteristic information of the various teaching feedback data is marked manually. During the marking process, four markers carried out a Kappa test on the consistency of the marking results; the test results are shown in Table 1. The label results follow the principle of complementarity, and label conflicts are resolved by majority voting.

Table 1. Consistency test of teaching feedback data

Serial No | Fusion type | Training set | Test set
1 | A+B | 0.865 | 0.896
2 | A+C | 0.835 | 0.885
3 | A+D | 0.874 | 0.815
4 | B+C | 0.896 | 0.834
5 | B+D | 0.907 | 0.875
6 | C+D | 0.815 | 0.865
7 | A+B+C | 0.867 | 0.836
8 | A+B+D | 0.905 | 0.845
9 | A+C+D | 0.789 | 0.901
10 | B+C+D | 0.869 | 0.875
11 | A+B+C+D | 0.875 | 0.869

As shown in Table 1, combinations of two, three and four data types are randomly selected for fusion. On both the training set and the test set, the Kappa consistency test results of the teaching feedback data are greater than 0.7, which shows that the marking results are valid.

4.2 Data Processing to Be Merged

First, the data are preprocessed and the information-based teaching feedback data are divided into two types: overall objective data and individual objective data. The overall objective data include ten types of data, such as grade scores, average scores, class scores, individual scores, grade excellence rate, pass rate, low score rate, score rate of each question, and discrimination; the individual objective data include four types: individual score of each question, individual score change, individual knowledge-point mastery, and error analysis. In the second step, data sets to be fused are established for the different samples: of the 1000 groups of data above, 500 are randomly selected as the training set, and the remaining 500 test-set groups serve as the verification values for the trained model.


The third step is to conduct classified training on the overall objective data and the individual objective data. Based on the training sample set, the nonlinear relationships between the objective information-based teaching feedback data and related data types such as achievements, knowledge-point mastery and error analysis are established, so as to build the heuristic firefly algorithm model. According to the different characteristics of the feedback data, training and prediction are carried out separately on the overall objective data and the individual objective data, and the correlation coefficient, root mean square error and deviation are calculated as in formulas (8)–(10):

\lambda(x, y) = \frac{\sum_{i=1}^{n}\sum_{j=1}^{m}\left(X_{ij} - X_n\right)\left(Y_{ij} - Y_m\right)}{\sqrt{\sum_{i=1}^{n}\sum_{j=1}^{m}\left(X_{ij} - X_n\right)^{2}\sum_{i=1}^{n}\sum_{j=1}^{m}\left(Y_{ij} - Y_m\right)^{2}}}   (8)

RASE = \sqrt{\frac{\sum_{i=1}^{n}\sum_{j=1}^{m}\left(f_i - f_j\right)^{2}}{m \times n}}   (9)

P_m = \frac{d_h + d_k}{N}   (10)

where λ(x, y) is the correlation coefficient of x and y; X_ij and Y_ij are the fusion frequencies of data x and data y in the two different data types; X_n and Y_m are the relative action frequencies of data x and data y; RASE is the root mean square error between the fused information and the original data; f_i and f_j are the two data items to be fused; G is the data size; P_m is the deviation value; d_h is the word measurement value; d_k is the absolute value of the average value; and N is the number of measurements. The training results for the overall objective data and the individual objective data obtained through formulas (8)–(10) are shown in Table 2.

Table 2. Training results

Comparison content | Overall objective data | Individual objective data
Correlation coefficient | 0.92 | 0.91
Root mean square error | 0.003 | 0.004
Deviation | −0.0006 | −0.0005

In Table 2, models of both overall objective data and individual objective data can meet the fusion requirements, with correlation coefficients greater than 0.9, root mean square error less than 0.01, and absolute value of deviation less than 0.001. Therefore, in the model prediction of the heuristic firefly algorithm, after building the model, the prediction sample set can be input into the model for regression prediction.
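As a reading aid, the evaluation metrics of formulas (8)–(10) can be computed as below. This sketch is ours; the arrays are invented, and the deviation function is a simple mean-bias stand-in for formula (10), whose symbols the paper defines only loosely:

```python
# Sketch of the evaluation metrics: correlation coefficient, RMSE, deviation.
import math

def correlation(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

def rmse(pred, true):
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, true)) / len(pred))

def deviation(pred, true):
    return (sum(pred) - sum(true)) / len(pred)

pred = [0.91, 0.88, 0.95, 0.90]
true = [0.92, 0.87, 0.94, 0.91]
print(correlation(pred, true), rmse(pred, true), deviation(pred, true))
```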


The fourth step is the final processing of the data: the output predictive values are converted from the sample dimension to the space-time dimension corresponding to the information-based teaching feedback data, and the objective data types to be verified are extracted from the remaining data to verify the spatial expansion of the prediction results.

4.3 Selection of Experimental Parameters

When training the network structure of the heuristic firefly algorithm, many parameters need to be adjusted; in this experiment, the learning rate, the number of training iterations and the data volume are taken as the parameters to adjust. During the adjustment of the network parameters, the fusion accuracy is taken as the primary reference index. For the training network, the initial learning rate strongly affects the fusion accuracy, so it is adjusted first. During the adjustment, all 500 groups of training data established above are used as the training data set, the other parameters are left at their default values, and all 500 groups of the test data set are used to verify the accuracy.


Fig. 2. Effect of learning rate on fusion accuracy

As shown in Fig. 2, the difference in fusion precision between the test set and the verification set is small, so the test set is used as the experimental indicator in the following experiments. As the learning rate increases from 0.00001 to 0.1, the test accuracy also increases roughly linearly, reaching a maximum of 0.9325 near 0.1; as the learning rate increases further, the accuracy decreases slightly. Therefore, 0.1 is selected as the learning rate of the information-based teaching feedback data fusion method based on the heuristic firefly algorithm. After the learning rate is determined, it is kept at 0.1 and the number of iterations of the algorithm is adjusted.



Fig. 3. Influence of Iteration Times on Fusion Accuracy

As can be seen from Fig. 3, the accuracy improves significantly as the number of iterations increases. When the number of iterations reaches 300, the fusion accuracy of the algorithm reaches 0.9374; as the number of iterations grows further, the fusion accuracy no longer changes, so 300 iterations corresponds to the peak fusion accuracy in this experiment. Therefore, 300 is selected as the number of iterations. Next, with the learning rate fixed at 0.1 and the number of iterations fixed at 300, the influence of the data volume on the fusion accuracy is examined.
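Because the accuracy plateaus, the iteration count can be chosen as the smallest value whose accuracy is within a small tolerance of the best observed accuracy. The accuracy values below are invented for illustration, apart from the 0.9374 plateau mentioned above.

```python
def smallest_sufficient_iterations(accuracy_by_iterations, tol=1e-3):
    """Return the smallest iteration count within tol of the best observed accuracy."""
    best = max(accuracy_by_iterations.values())
    return min(it for it, acc in accuracy_by_iterations.items() if best - acc <= tol)

curve = {100: 0.905, 200: 0.928, 300: 0.9374, 400: 0.9374, 450: 0.9373}
print(smallest_sufficient_iterations(curve))  # 300
```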


Fig. 4. Effect of data volume on fusion accuracy


As can be seen from Fig. 4, when the data volume is 10, the fusion precision of the information-based teaching feedback data on the test set is 0.753. As the data volume gradually increases, the fusion precision also grows. When the data volume reaches 50, the fusion precision exceeds 0.9, and for larger data volumes the fusion precision fluctuates but does not fall below 0.9. It follows that the data volume selected for this experiment should be at least 50. In conclusion, when the learning rate is 0.1, the number of iterations is 300, and the data volume is 50, the maximum fusion precision on both the test set and the validation set can be obtained.

4.4 Time Efficiency of Data Fusion

To evaluate the time efficiency with which the heuristic firefly algorithm designed in this paper fuses the overall objective data and the individual objective data in the information teaching feedback data fusion model, it is tested and analyzed, and the experimental results are shown in Fig. 5. As shown in Fig. 5, as the data volume increases, the time required for data fusion also increases significantly, and the fusion time required for the overall objective data is clearly less than that for the individual objective data. Comparing fusion across different numbers of data types, the time required to fuse two types of data is clearly less than that for three types, and the time for three types is less than that for four types. It can be seen that the larger the data volume, the longer the data fusion takes, and the fewer the data types, the faster the fusion.
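The timing comparison in Fig. 5 can be reproduced in outline with a small benchmarking harness. In the sketch below, fuse is a placeholder for the actual fusion routine, and the workload is an arbitrary stand-in whose cost grows with the data volume.

```python
import time

def benchmark_fusion(fuse, volumes_mb=(50, 70, 90, 110, 130)):
    """Measure fusion wall-clock time for each data volume (in MB)."""
    timings = {}
    for mb in volumes_mb:
        start = time.perf_counter()
        fuse(mb)
        timings[mb] = time.perf_counter() - start
    return timings

# Placeholder fusion whose cost scales with the data volume (illustrative only).
print(benchmark_fusion(lambda mb: sum(range(mb * 100000))))
```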

Fig. 5. Algorithm running time: (a) two types of data fusion; (b) three types of data fusion; (c) four types of data fusion


5 Conclusion

Starting from the classification of information teaching feedback information and according to its data characteristics in different fields, this paper carries out classification processing from multiple dimensions and establishes the characteristic data matrix of information teaching feedback data. From the experimental data, the optimal learning rate, number of iterations, data volume and other parameters are obtained, and the algorithm running time is measured for the fusion of two, three and four types of data. The experimental results show that the data fusion time grows with both the amount of data and the number of data types.

Acknowledgement. 1. 2021 Jiangsu Province Higher Education Reform Research Project (General Project): "Double-High Plan" leads the logical path, implementation difficulties and path breakthroughs in the in-depth integration of production and education and collaborative education (Project No.: 2021JSJG440). 2. 2022 General Project of Philosophy and Social Sciences Research in Colleges and Universities: Research on the Realistic Dilemma and Technical Appeals of Precision Funding in Colleges and Universities from the Perspective of Big Data (Project No.: 2022SJYB1050).

References

1. Wang, H., Wang, Z.: Design of simulation teaching system based on modular production and processing. Comput. Simul. 39(4), 205–209 (2022)
2. Ni, T., Sang, Q.: Class head up rate detection algorithm based on attention mechanism and feature fusion. Comput. Eng. 48(4), 252–268 (2022)
3. Wu, S., Yan, J., Zhang, J.: BPNN data fusion algorithm based on heuristic firefly. Transducer Microsyst. Technol. 40(4), 146–149, 156 (2021)
4. Alawbathani, S., Batool, M., Fleckhaus, J., et al.: A teaching tool about the fickle p value and other statistical principles based on real-life data. Naunyn-Schmiedeberg's Arch. Pharmacol. 394(6), 1315–1319 (2021)
5. Nowakowska, M., Beben, K., Pajecki, M.: Use of data mining in a two-step process of profiling student preferences in relation to the enhancement of English as a foreign language teaching. Stat. Anal. Data Min. 13(5), 482–498 (2020)
6. Lu, S.: Research on public opinion data fusion of violent terrorist incidents. J. Intell. 41(3), 121–127, 101 (2022)
7. Zhang, D., Li, J., Zhang, J., et al.: Co-saliency detection algorithm with efficient channel attention and feature fusion. J. Harbin Inst. Tech. 54(11), 103–111 (2022)
8. Kim, J., Suzuka, K., Yakel, E.: Reusing qualitative video data: matching reuse goals and criteria for selection. Aslib J. Inf. Manage.: New Inf. Perspect. 72(3), 395–419 (2020)
9. Zheng, W.-Y., Yao, J.-J.: Integrating big data into college foreign language teaching: a prediction study of CET6 scores. Comput.-Assist. Foreign Lang. Educ. China 2019(5), 90–95 (2019)
10. Yuan, T.-Q., Xi, P.: Quality evaluation algorithm based on multi-modal audio and video fusion. J. Shenyang Univ. Technol. 44(3), 331–335 (2022)

Research on Database Language Query Method Based on Cloud Computing Platform

Shao Gong1(B), Caofang Long2, and Weijia Jin3

1 School of Humanities and Communication, University of Sanya, Sanya 572022, China
[email protected]
2 School of Information and Intelligence Engineering, University of Sanya, Sanya 572022, China
3 Zhejiang Tongji Vocational College of Science and Technology, Hangzhou 311231, China

Abstract. Some database language query methods suffer from high time cost and resource occupation. Against this background, a database language query method based on a cloud computing platform is designed. The formally described information is converted into storable data, the semantic Web hierarchy is extracted, and relationships with real objects are established; the potential association features of database words and sentences are identified and the original orthogonal semantic structure space is restored; an ontology annotation model is built based on the cloud computing platform, HanLP's neural network dependency syntax analysis tool is transplanted, and the language query method is designed. Test results: the average time cost and resource utilization of the database language query method designed here are 1720 ms and 42.36% respectively, which shows that, with the technical support of the cloud computing platform, the designed database language query method is more effective.

Keywords: Cloud Computing Platform · Database Language Query · Semantic Level · Potential Association · Dimension Model

1 Introduction

Most of the existing language query methods are non-temporal and are limited to accessing a single database state [1–3]. When measuring the similarity of two short texts or two words, the edit distance cannot be applied directly. Therefore, the database language query method needs to be optimized [4].

Literature [5] proposed a database language query process analysis method based on data mining. It identifies the query target and query conditions through database semantic analysis based on the segmented query statement array, builds a semantic dependency tree, divides it into collection blocks, and converts the natural language query into SQL statements through a comprehensive conversion method. On this basis, data mining technology is used to locate the data required by the user in the database; the located data are extracted and fed back to the user to realize the analysis of the database language query process.


The experimental results show that the method can provide accurate database services for users. Literature [6] proposes a database keyword query method based on pattern graphs. This method designs a combined network structure, which effectively avoids the redundant operations caused by redundant structures between candidate networks in traditional methods. At the same time, an improved candidate network generation strategy is proposed, which avoids generating redundant candidate networks and reduces the traversal range, thus improving efficiency. Finally, based on the merged network, a merged-network execution algorithm is designed to further improve query efficiency. The experimental results show that this method can ensure that no query results are missing. Although the above methods improve the database language query effect to a certain extent, they suffer from long query times and high resource utilization. Therefore, this paper proposes a database language query method based on a cloud computing platform.

2 Extracting the Semantic Web Hierarchy

The data in the semantic web are clearly defined and associated with one another, so the semantic web has great advantages in retrieval, automatic processing, reuse and integration. The semantic network connects multiple entities through semantic relationships to form a directed knowledge network from a starting point to an end point. The relational model uses the relation table as its data structure, with rows representing tuples and columns representing field attributes, forming a two-dimensional structure. The semantic web can describe not only videos, images, digital resources and other content in a page, but also abstract resources such as time, behavior, space and events, and it declares the relationships between resources in the description. The semantic web has the following characteristic: access to information resources is no longer limited to local sources, and a wide range of information resources can be obtained conveniently through the network. Although the semantic network knowledge representation is intuitive and forms a strict knowledge network when there are many nodes, as the number of nodes increases the knowledge network expands horizontally, which makes searching and querying cumbersome; the number of tuples grows vertically and the cardinality becomes huge, so that even an excellent query algorithm inevitably incurs high time complexity. The semantic web adopts a formal description method based on document structure, which facilitates the integrated processing of data. Information is described in a form that the computer can understand accurately, so as to realize intelligent processing of the information. The formally described information is converted into storable data and related to real-world objects. The basic architecture of the semantic web hierarchy is shown in Fig. 1.

The semantic network knowledge representation itself suffers from combinatorial explosion, while the chain structure among common data structures can effectively avoid the exponential growth caused by queries and searches. Therefore, the semantic relationships of the semantic network can be added to the field attributes of the sub relational model by combining the horizontal and vertical dimensions of the semantic network, and the pointer field can then be added to the parent relational model to point to

Fig. 1. Basic architecture of the semantic web hierarchy (trust layer, logic layer, proof layer, semantic layer, grammar layer, data layer, and coding layer)

the sub relational model, Finally, a semantic network knowledge representation method based on relational model is formed. As the bottom layer of the semantic web, it has two functions. Unicode provides a unified double byte character encoding standard for the semantic web. Each character has a unique identifier corresponding to the Unicode rm code. At present, Unicode can support languages of all known countries. XML Schema mainly describes the structure of the document to limit the semantic representation of the document. Because words are the smallest semantic unit that can move independently. In today’s natural language query methods, such as search engines and question answering systems, two kinds of texts are mainly queried, including English text and Chinese character text. As a uniform resource identifier, URI can semantically identify network resources in the Web, thereby providing identifiers for distinguishing these network resources. Syntax layer, namely XML + NS + XMLs layer. XML does not provide semantic interpretation of documents, but only provides syntax for the Semantic Web to achieve document structuring. The data layer is the RDF/RDFs layer. As a key technology in the Semantic Web, the data layer provides a basic description of resources and their relationships. RDF represents resources at the semantic level, which can represent the attributes of resources and the relationship between resources. Therefore, in the database language query method, the difficulty of Chinese word segmentation is higher than that of English word segmentation. Even in some special query methods, it is necessary to combine Chinese text with English text for hybrid query. And the word segmentation algorithm based on human understanding knowledge is usually based on computer pattern recognition, so as to imitate human understanding of special Chinese characters, and finally can recognize the effect of keywords. RDF can define the concepts, attributes and relationships of objects, and use XML as a syntax to provide simple semantic representation functions. It uses URIs to describe the concepts, attributes, relationships, views, etc. of objects, so as to realize the interoperability of applications. However, it cannot deeply describe the meaning of concepts or attributes and the relationships between terms. The value of point type represents a point or undefined value


in the Euclidean space, and the value of the points type represents a finite set of points, so the definition of the abstract type point and the carrier set of points is as follows:

δ1 = W² ∪ {α⊥β}    (1)

δ2 = {L ⊂ (α, β)}    (2)

In formulas (1) and (2), W represents real bearing coefficient, L represents finite point set, and α, β represents two adjacent mapping values respectively. In addition, RDFs can define resource attributes, types, elements and concepts, and constrain the combination of resources and concepts according to constraints, so as to ensure that conflicts between constraints are detected. The basic idea of this segmentation algorithm is usually to perform word segmentation, semantic analysis and syntactic analysis at the same time to facilitate the processing of ambiguous words. Due to the particularity and complexity of Chinese text, the theoretical research and technical development of word segmentation algorithm based on natural language is still at the basic stage, but the vast number of scientific researchers have already found its huge commercial value and potential scientific research value. In the hierarchical structure of the Semantic Web, the syntax layer, the data layer and the semantic layer are the core parts of the Semantic Web, which realize the semantic representation of Web resources. Among them, XML realizes the structuring of documents. RDF uses triples (objects, attributes, values) to describe resources. Using this structure is conducive to automatic processing by machines. The basic idea of the word segmentation matching algorithm based on probability statistics is that the probability or frequency of adjacent words can better reflect the credibility of the composed words. RDFs describe the concepts and attributes of resources, and provide semantic relationships between concepts and attributes according to the conceptual hierarchy of ontology. The logic layer, the proof layer and the trust layer realize the logic reasoning function, and provide authentication and trust mechanisms. The digital signature technology is mainly used to trust and authorize the description, reasoning and certification of resources. As the upper structure of the semantic Web, it ensures the security of semantic Web operations and content to the maximum extent.
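RDF's triple-based description of resources, mentioned above, can be pictured with a tiny, library-free sketch. The resource names and relations below are invented placeholders, and the pattern-matching query is only a simplified stand-in for real RDF tooling.

```python
# A minimal triple store in the spirit of RDF (subject, predicate, object) triples.
triples = [
    ("ex:Course101", "ex:hasTitle", "Database Systems"),
    ("ex:Course101", "ex:taughtBy", "ex:TeacherA"),
    ("ex:TeacherA", "ex:worksAt", "ex:UniversityX"),
]

def match(s=None, p=None, o=None):
    """Return all triples matching the given pattern; None acts as a wildcard."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Who teaches Course101?
print(match(s="ex:Course101", p="ex:taughtBy"))
```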

3 Identify Potential Associated Features of Database Words and Sentences

The database implementation structure adds a spatiotemporal processing layer on top of the traditional relational database management system, and data operations are completed through the spatiotemporal processing layer without any modification to the underlying database management system kernel. At the same time, this keyword-based text processing method relies mainly on word frequency information, and the similarity of two texts depends on the number of common words they share, so it cannot resolve the semantic ambiguity of natural language. The spatiotemporal processing layer is responsible for the translation between the database language and SQL, and for the optimization of spatiotemporal queries. All data requests must be processed through the spatiotemporal layer. The SQL converted from spatiotemporal queries is


very complex, which is not conducive to query optimization of the underlying relational database management system. Due to the existence of a large number of synonyms and polysemy in language, the accurate expression of semantics depends not only on the proper use of the vocabulary itself, but also on the definition of the word meaning by the context. If the limitation of context is ignored and only isolated keywords are used to represent the content of the text, the accuracy and integrity of the query results will be affected. In addition, the space-time layer will become a bottleneck for application development, because all requests must first be converted to standard SQL through the space-time processing layer. The space-time operation efficiency of the prototype system of this architecture will mainly depend on the space-time processing layer. Use statistical computing methods to analyze large text sets to extract and represent the semantics of words. Because the context provides a set of interrelationships and constraints on the things in it, it largely determines the semantic relevance between words. A value of type line represents a set of continuous curve curves on the plane, defined as follows: l : [0, 1] → W 2

(3)

In formula (3), l represents curves mapping space. Because the object relational database management system provides the extension function of user-defined data types, new data types and spatio-temporal operations can be extended on the basis of the object relational database management system, and spatio-temporal indexes can also be extended to the DBMS kernel through UDR and other technologies. Therefore, this potential meaning is the sum of all the contextual information of a word. The starting point of latent semantic analysis is that there is a certain relationship between words in the text, that is, there is a certain latent semantic structure. But it also has some problems. Although the underlying database management can use the extended spatiotemporal index to speed up the query, its support for spatiotemporal query optimization is limited to this. The underlying database management system still uses the query optimization rules of relational databases to process spatiotemporal queries, which is obviously not suitable for spatiotemporal queries. This latent semantic structure is implicit in the contextual usage patterns of words in the text. Therefore, the method of statistical calculation is used to analyze a large number of texts to find this potential semantic structure. It does not need a certain semantic code, but only depends on the relationship between things in the context. It uses semantic structures to express words and texts, so as to eliminate the correlation between words and simplify the text vector. The constant types point, points, lines, and regions in the category SPATIAL are spatial data types provided by the spatial database. The constant type instant in the category TIME is a basic time type, which can be regarded as a real number or a time class with several operations attached. Real numbers are briefly used in the study.If the closure of a set E coincides with itself, then E is a regular closed set. The purpose of this regularization process is then to take into account that the regions should be regular. On the basis of formula (3), the boundary constraints of E are obtained: D= 

{ L ⊆ W² | ∃ φ2 , |φ − 1| }    (4)

In formula (4), φ represents interval set. Assume that the implied meaning hidden in words (that is, these semantic dimensions of the potential semantic space) can better


depict the true meaning of the text. Different word groups are used in different text sets to express the meaning of some semantic dimensions. The range type constructor in the database can act on all types in the categories BASE and TIME, generating range(int), range(real), range(string), range(bool) and range(instant). The moving type constructor can act on all types in the categories BASE and SPATIAL, resulting in new types; for example, moving(points) represents the moving point type, and moving(real) represents a real number that changes over time, which can be used as the return type of some space-time operations. The recognition of this semantic space representation is hindered [5, 8]. Through proper data processing, the original orthogonal semantic structure space and the original semantic dimensions are restored. In the language space structure, texts and words are organized and stored according to their semantic relevance, and synonyms scattered in different texts are adjacent in space. Among them, intime represents the binary ordered pair of time and value, which is defined as:

Gη−1 = (L ∪ W²) / Dη    (5)

In formula (5), η represents a closed sign quad. The intime type constructor can act on all types in the categories BASE and SPATIAL, generating new types; for example, the intime(points) type is a binary group representing the location of a space-time point object at a certain time. The dimension of the semantic space is reduced and transformed to eliminate the "noise" in semantic expression (rare or unimportant usage meanings of words), and the meaning of a word is the weighted average of its multiple meanings. The potential semantic structure is used to express terms and texts, which are mapped into the same K-dimensional semantic space and expressed in the form of K factors. Therefore, the meaning of a vector reflects not the frequency and distribution of simple terms but the strengthened semantic relationships. The hf type constructor acts on TEMPORAL types, resulting in composite spatiotemporal types that can support past and future motion; for example, hf(mpoints, mlines) represents a space-time object whose motion before the current moment is a moving point and which is expected to move as a line afterward. While maintaining most of the original information, this representation overcomes the polysemy, synonymy and word dependence produced by traditional vector space representation methods. At the same time, similarity analysis in the new semantic space works better than using the original feature vectors, because it is based on the semantic layer and not only the lexical layer.
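The abstract spatiotemporal types discussed above (intime pairs and moving objects) can be pictured with a small sketch. The classes and the linear-interpolation behaviour below are illustrative assumptions, not the formal type system of the paper.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Intime:
    """intime(point): a binary (instant, value) pair for a 2-D point."""
    instant: float
    value: Tuple[float, float]

@dataclass
class MovingPoint:
    """moving(point): a position that changes over time, approximated here by
    linear interpolation between sampled (instant, position) pairs."""
    samples: List[Intime]

    def at(self, t: float) -> Optional[Tuple[float, float]]:
        pts = sorted(self.samples, key=lambda s: s.instant)
        if not pts or t < pts[0].instant or t > pts[-1].instant:
            return None  # undefined outside the observed time range
        for a, b in zip(pts, pts[1:]):
            if a.instant <= t <= b.instant:
                span = b.instant - a.instant
                w = 0.0 if span == 0 else (t - a.instant) / span
                return (a.value[0] + w * (b.value[0] - a.value[0]),
                        a.value[1] + w * (b.value[1] - a.value[1]))
        return pts[-1].value

trajectory = MovingPoint([Intime(0.0, (0.0, 0.0)), Intime(10.0, (5.0, 2.0))])
print(trajectory.at(5.0))  # (2.5, 1.0): the moving point's position at t = 5
```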

4 Construction of Ontology Annotation Model Based on Cloud Computing Platform

In the cloud computing platform, one of the most important problems of cloud service ontology annotation is to maintain the consistency between the known cloud service documents and the ontology library that stores the annotations. Consistency refers to maintaining the correctness between annotations and language expressions in cloud service


documents. Database is the basis for computer to understand natural language, which makes it possible to realize the universality of interfaces. According to the applicable scope of knowledge, this paper divides the database into two parts: general database and special database. The general database is composed of a word segmentation dictionary, a general database dictionary and a synonym word forest, and is not affected by the application field. In a value oriented database, the primary code, that is, the value, is used to represent real world entities.For the representation of entities, a fundamental problem is the update of the master key. Updating the primary key results in a mutation of entity continuity, and care should be taken to update all tuples and sets associated with that entity. Therefore, all changes to the annotated cloud service document (e.g., addition, deletion of cloud service information) must be reflected. Otherwise, if the change information of the source file is not recorded in the ontology library, the cloud service ontology annotation will be inconsistent. The synonym Cilin is a semantic database, which is mainly used for synonym identification, but in order to ensure the accuracy of conversion, this paper retains the thesaurus as a supplement. Object-oriented data models allow the real world to be clearly identified through the use of object identities, which are distinct from values in a database [9]. In a database, each object is assigned a unique object ID by the system, and the relationship between object IDs and objects is fixed. Unlike values, object identifiers can be generated, deleted, but not changed. The special database focuses on the database objects and consists of a special thesaurus, a synonym thesaurus, an entity database, a domain name database, a compound concept database, and an enumeration value database. If cloud service ontology annotation supports complex ontology, it is necessary to maintain the consistency and consistency of ontology annotation between the constantly changing cloud service documents and all ontologies in the process of cloud service ontology annotation. If there are two objects whose property values are the same but whose object IDs are different, the database system also considers them different. The user can send a retrieval request to the cloud service semantic search engine, and the cloud service semantic search engine retrieves the content annotated by the cloud service ontology according to the ontology, and finally responds to the cloud computing-based database language query result in the annotation library through the semantic search engine user. Under the support of the cloud computing platform, the main steps of ontology annotation are optimized and described, as shown in Fig. 2:

Fig. 2. The main steps of ontology labeling: pre-processing; identifying and extracting terminology keywords; ontology library search; cloud service marking maintenance; output of ontology labelling results; analyzing the underlying semantics


Since words are the smallest units of natural language processing, query statements need to be preprocessed. The main steps of lexical analysis are word segmentation, part-of-speech tagging and relational data semantic tagging. Because existing word segmenters lack support for natural language interfaces, secondary development is carried out on the basis of existing research results. In the process of word segmentation, the semantics of relational data are taken into account, so that words and database objects can be associated. The key point is the processing between the first part and the second part, of which the evolution part in the second part is the core. Cloud services, ontologies and annotations are combined with the evolution log to evolve the cloud service ontology annotation, and the final result of the evolution is returned to the cloud service ontology annotation library in the first part, forming an iterative process between the cloud service ontology annotation library and the ontology annotation evolution. HanLP's neural network dependency parser, which uses decision-based analysis to obtain word dependencies, is transplanted. The dependency parser assumes that whether a sentence is reasonable is closely related to its probability of occurrence, and that the probability of a sentence is the probability of its words occurring in order. A sentence is recorded as T and consists of a sequence of words h1, h2, ..., hn; the probability of the sentence is then calculated as:

P(T) = P(h1, h2, ..., hn) = P(h1)P(h2 | h1) ... P(hn | h1, ..., hn−1)    (6)

In formula (6), hi refers to the i-th word. The parser eventually outputs a dependency tree that annotates the semantics of the related data; it is parsed by combining rules, and the ambiguity generated in the process of data semantic annotation is eliminated according to semantically independent collection blocks, so as to ensure as far as possible that each word has unique semantic information. Finally, a consistent cloud service ontology annotation is generated to ensure the correctness of the user's cloud-computing-based database language query in the first part. Ontology is the fourth layer of the seven-tier semantic Web architecture and is a modeling method for the semantic description of cloud services. Ontology uses a formal way to define the terms that have gained consensus in the field, clarifies the relationships between terms, and uses the hierarchy and relationships between concepts and attributes to express their semantic relationships. The cloud service ontology annotation process applies the ontology to label cloud service resources on the basis of concepts and attributes, so as to realize the semantic reasoning function. The main task of the structured statement generation module is to map the intermediate structure to SQL, complete the identification and conversion of query targets and query conditions, and generate complete SQL query commands. In order to describe the inconsistency of semantic annotation of cloud services, consistency constraints and inconsistency constraints are defined respectively, and an annotation model is described. Among them, the consistency constraint ensures the consistency of the underlying ontology between semantic annotation entities, and the semantic annotation constraint ensures that the ontology model conforms to the definition of the annotation model.
The specific expression formula of the ontology annotation model is:

A = (E, S, D, W, L)μ , (D ⊆ T)    (7)


In formula (7), S represents the resource set. According to the syntax rules of SQL, this paper classifies and parses the query targets and query conditions, and proposes the corresponding rationality verification scheme, which is more likely to ensure the correctness of the conversion. This module will output a complete SQL query command. The cloud service ontology annotation implementation adds semantic description information to the resources to be tagged based on ontology, making the resources to be tagged from machine readable to machine understandable, thus realizing the semantic Web function. Cloud service ontology annotation can establish a connection between fuzzy natural language and ontology based formal language. This process mainly involves inserting labels into documents. These labels represent the connection between text fragments and ontology elements (attributes, concepts, relationships and instances).If the cloud service document to be annotated is a structured document, the structure of the cloud service document to be annotated needs to be considered, and the annotation is performed according to the structural characteristics of the cloud service document to be annotated. Generally, the structured decomposition method is used for ontology annotation. For example, the structural information in a word document includes: title, hierarchy, style and so on.
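The mapping from parsed query targets and conditions to a SQL command, as performed by the structured statement generation module described above, can be illustrated with a toy sketch. The function, table and column names below are invented for illustration and do not come from the paper.

```python
def build_sql(target_columns, table, conditions):
    """Assemble a SELECT statement from a parsed target list and simple
    AND-joined (column, operator, value) conditions."""
    select = ", ".join(target_columns) if target_columns else "*"
    sql = f"SELECT {select} FROM {table}"
    if conditions:
        rendered = []
        for col, op, val in conditions:
            literal = f"'{val}'" if isinstance(val, str) else str(val)
            rendered.append(f"{col} {op} {literal}")
        sql += " WHERE " + " AND ".join(rendered)
    return sql

# "Which students older than 20 major in physics?"
print(build_sql(["name"], "student", [("age", ">", 20), ("major", "=", "physics")]))
# SELECT name FROM student WHERE age > 20 AND major = 'physics'
```

A production system would of course use parameterized queries rather than string interpolation; the sketch only shows the structural mapping.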

5 Design Language Query Methods

The database language query based on cloud computing analyzes and processes the retrieval conditions described in natural language in a semantic way, realizing the expansion of retrieval at the semantic level and thus improving the recall and precision of the cloud-computing-based database language query process. The process by which the database analyzes query statements expressed in a descriptive language and determines reasonable and effective execution strategies and steps for them is called query optimization. Query optimization is an important part of query processing, especially for relational databases. The essence of a query condition is to restrict the given database information, and the purpose of the analysis is to express it in a standard structured form, so as to obtain the data records that users want. Generally speaking, in a natural language query, everything other than the query target and the query verb can be considered a component of the query condition. Database language query based on cloud computing involves two aspects: retrieval purpose representation and semantic expansion constraints. The cloud service retrieval request input by the user usually has a certain retrieval purpose, and the expression of this retrieval purpose needs to be processed at the semantic level, so as to ensure consistency between the machine's understanding and the semantics of the user's cloud service retrieval requirement. According to the structure type of the heterogeneous databases, the number of bidirectional middleware components is defined as follows:

ζ = γ(γ − 1) / 2    (8)

In Formula (8), γ represents the required cost of heterogeneous databases. In addition, query conditions in the cloud computing platform can be divided into single-layer conditions and nested conditions. Nested conditions refer to conditions that imply sub queries in the condition expression. Single-layer conditions are non nested conditions


and no sub conditions are included in the condition expression. Single-layer conditions can be divided into simple conditions and compound conditions: a simple condition contains only one condition expression, while a compound condition connects multiple simple conditions through logical symbols. The basic principle of the database language query based on cloud computing is as follows: the user puts forward a cloud service retrieval request, the human–machine interface analyzes the lexical rules of the user's cloud service retrieval request, and the obtained user search words are converted into concepts through word segmentation. Based on reasoning and expansion at the semantic level, other concept sets related to the concepts are extracted, and the concept sets are retrieved and optimized. In SQL, an aggregate function returns the calculated result of a column. Composite query targets have no explicit relational data semantics and can generally be obtained by combining, transforming or calculating simple query targets; for example, Age is a composite query target obtained by subtracting the birth year from the current system year. The results are sorted according to semantic distance and semantic similarity, and the information that meets the user's cloud service retrieval conditions is pushed to the retrieval result set after sorting. When the query method uses the head word to predict its context, the network layer that the optimized query target must pass through is described based on the neural network:

V = (1/∂) Σy log(∂x,y Kxy)    (9)

In formula (9), ∂ represents the size of the word bag, y represents the input time, and x represents the one-hot vector of the head word. In imperative sentences, the query target often appears at the end of the sentence as a noun phrase. Therefore, when processing query statements, a reverse order is used to scan forward from the end of the sentence. The query statement is limited in length, and the rule set is small and unambiguous. In this paper, an improved reverse maximum length matching algorithm is used to store each matched target phrase into the target phrase set, and the entities in the target phrase set are stored into the entity set. A conditional DELETE statement necessarily involves a query, while an unconditional DELETE statement is equivalent to first executing SELECT * FROM TABLE and then deleting. Therefore, the core of database manipulation statements is the query, and the key to improving query efficiency is query optimization. As the cloud service resource base continues to grow, a semantic index needs to be created for it to generate a semantic index base, thereby organizing and managing cloud service resources in an orderly way, which helps to improve the cloud-computing-based database language query. The specific process of language query is shown in Fig. 3.
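Before turning to Fig. 3, the reverse maximum length matching step mentioned above can be sketched as follows. This is a minimal illustration of the classic algorithm, not the paper's improved variant; the dictionary and the sentence are placeholders.

```python
def reverse_max_match(sentence, dictionary, max_len=4):
    """Segment a sentence by scanning from its end and preferring the longest
    dictionary word that ends at the current position."""
    words, end = [], len(sentence)
    while end > 0:
        length = min(max_len, end)
        while length > 1 and sentence[end - length:end] not in dictionary:
            length -= 1  # shrink the window until a dictionary word (or a single character) remains
        words.append(sentence[end - length:end])
        end -= length
    return list(reversed(words))

vocab = {"数据库", "查询", "方法"}
print(reverse_max_match("数据库查询方法", vocab, max_len=3))  # ['数据库', '查询', '方法']
```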

Fig. 3. Language query flow chart

6 Simulation Test

6.1 Test Preparation

The test environment is configured as follows: CPU: 2*Quad-Core GHz AMD Opteron(tm); Memory: 4 GB Fully Buffered Dimm ECC Registered SDRAM; Hard Disk: 2* Seagate (ST9146802SS) 146 GB 10000 RPM; Operating System: Linux RedHat; Filesystem: EXT3 with a page/block size of 4096 bytes; Compiler: GCC. Since the system is a restricted natural language interface, it should support users' in-scope questions as far as possible.

6.2 Test Results

The method of literature [5], the method of literature [6] and the database language query method designed in this paper are selected for comparison. The time cost and resource utilization of the three database language query methods are tested under the same number of tuples, as shown in Figs. 4 and 5:

Fig. 4. Time overhead (ms)

Fig. 5. Resource occupancy rate (%)

It can be seen from Fig. 6 that the maximum query accuracy of the database language query method designed this time is 86.1%, and the maximum query accuracy of the document [5] method and the document [6] method are 73% and 73.5%, respectively. It can be seen that the query effect of this method is better.

Fig. 6. Query accuracy rate (%)

7 Conclusion

In order to solve the problems of high time cost and resource utilization in traditional database language query methods, this paper designs a database language query method based on a cloud computing platform. The paper focuses on the semantic annotation method of relational data and realizes the association between words and database objects by means of data semantic coverage. In addition, tagging information for the conditional values of the question is added to the database language query method, and the NL2SQL task on the TableQA dataset, whose questions may not directly contain the conditional values required by the SQL statement, is decomposed into two tasks: question parsing and text matching. In the question-parsing neural network, a separator determined by the column type is fused into the input of the RoBERTa encoder to extract the column features, and no additional calculation is required by the column attention mechanism. The text matching problem is solved by combining the edit distance with a semantic dictionary. The experimental results show that the time cost and average resource utilization of the designed database language query method are low, which indicates that the application effect of this method is good.

Acknowledgement. This article presents phased research results of the 2022 Hainan Provincial Philosophy and Social Science Planning Project "Research on the International Language Environment Construction of Hainan Free Trade Port" [HNSK (YB) 22-126] and the 2022 Hainan Provincial Higher Education Scientific Research Project "Hainan Dialect Diversity Protection Research" [Hnkyzc2022-10].


References

1. Yan, K., Chen, H., Fu, D.J., et al.: Bibliometric visualization analysis related to remote sensing cloud computing platforms. J. Remote Sens. 6(2), 310–323 (2022)
2. Zhou, X., Zhang, Y., Zhong, Z., et al.: Analysis on the resource management model based on heterogeneous cloud computing platform. Office Autom. 27(12), 31–33 (2022)
3. Cao, J., Huang, T., Chen, G., et al.: Research on technology of generating multi-table SQL query statement by natural language. J. Front. Comput. Sci. Technol. 14(7), 1133–1141 (2020)
4. Pan, X., Xu, S., Cai, X., et al.: Survey on deep learning based natural language interface to database. J. Comput. Res. Dev. 58(9), 1925–1950 (2021)
5. Hou, Y.: Query process analysis of database language based on data mining. Microcomput. Appl. 37(02), 49–52 (2021)
6. Fei, Y., Ding, G., Teng, Y., et al.: Efficient relational database keyword search method based on schema graph. Appl. Res. Comput. 36(03), 838–843, 860 (2019)
7. Liu, B., Wang, X., Liu, P., et al.: KGDB: knowledge graph database system with unified model and query language. J. Softw. 32(3), 781–804 (2021)
8. Jiang, S., Cao, L.: Research on the secret homomorphism retrieval method of multiple keywords in privacy database. Comput. Simul. 39(4), 408–412 (2022)
9. Liu, J., Du, J., Xie, H., et al.: A concurrent OLAP query performance prediction approach for distributed database. Comput. Meas. Control 30(10), 209–215, 221 (2022)

Reliability Evaluation Method of Intelligent Transportation System Based on Deep Learning

Xiaomei Yang(B)

Guangxi Vocational Normal University, Nanning 530000, China
[email protected]

Abstract. Urban road reliability analysis has been an important part of traffic condition analysis and research in recent years. Research on road network reliability can provide strong information support for the control, guidance, optimization and planning of intelligent transportation systems. In this context, a reliability evaluation method for intelligent transportation systems based on deep learning is proposed. The Delphi method is used to select evaluation indicators and build an evaluation index system, and the AHP method and factor analysis method are used to calculate the comprehensive weights of the indicators. Based on the deep belief network in deep learning, an evaluation model is constructed, the reliability index is calculated, and the degree of reliability is judged. The test results show that when the intelligent transportation system is applied in 9 different administrative regions, the obtained reliability indexes are all above 1.0, indicating that the reliability of the intelligent transportation system is high.

Keywords: Deep Learning · Intelligent Transportation System · Evaluation Index · AHP Method · Factor Analysis Method · Reliability Evaluation

1 Introduction

The smart transportation system is a basic, leading and strategic industry for urban development and construction, and an important service industry that meets the needs of urban life and production. The transportation system in the central area of a large city is the lifeblood of the entire city's development and the core that ensures the normal operation of the city [1, 2]. A stable, efficient and reliable transportation system is not only the basis for travelers to achieve their travel goals, but also the ultimate goal of urban traffic managers and builders. As socialism with Chinese characteristics enters a new era, China's urbanization process continues to accelerate, and the number of large cities continues to grow. Relevant data show that in 2013 there were seven cities in China with urban populations exceeding 10 million: in addition to Beijing, Shanghai, Guangzhou and Shenzhen, there were Wuhan, Tianjin and Chongqing; and there were 11 cities with urban populations between 5 million and 10 million, namely Chengdu, Nanjing, Foshan, Dongguan, Xi'an, Shenyang, Hangzhou, Suzhou, Shantou, Harbin, and Hong Kong. As of 2017, the number of large cities with a population of more than 2 million in


the central area of China had reached 53, accounting for about a quarter of the number of large cities in the world. However, with the continuous expansion of cities, the contradiction between traffic supply and demand in the central areas of large cities has become prominent, regional road traffic pressure has increased, and traffic congestion in the central areas of large cities has become more and more serious. Due to congestion, the operational efficiency of the transportation system has decreased, operating costs have risen, the travel time of traffic participants has been prolonged, and the traffic environment has deteriorated, which seriously affects the normal conduct of life and production activities in the region. How to improve the operation of the urban smart transportation system and improve system reliability has therefore become one of the key research topics of China's transportation industry. The reliability of the intelligent transportation system is not only a comprehensive reflection of the performance of the transportation system, but also one of the theoretical foundations for the construction of ITS. It is the basis of, and an important part of, the overall planning and design of the road network, traffic flow organization and management, and the formulation of major emergency response plans [3]. In addition, under limited resources, studying the reliability of the urban transportation system makes it possible to allocate existing resources rationally and to exploit the potential of the road network effectively, which has great theoretical significance and application value for improving the level of urban traffic management and dealing with the harm caused by sudden disasters and emergencies. Based on the above analysis, a reliability evaluation method for intelligent transportation systems based on deep learning is proposed. Evaluation indicators are selected, an evaluation indicator system is built, and the comprehensive weights of the indicators are calculated. Based on the deep belief network in deep learning, an evaluation model is constructed, the reliability index is calculated, and the degree of reliability is judged. The test results verify that the reliability of the intelligent transportation system is high.

2 Research on Reliability of Intelligent Transportation System

The reliability of smart transportation systems is receiving more and more attention, because significant travel uncertainty not only increases the difficulty and cost of individual travel decisions, but also reduces the efficiency of the transportation system, affects system performance, and can even lead to the failure of management measures. Reliability is an important performance index of the urban transportation system and is a probabilistic description of the system's transportation function. It is affected not only by static elements such as road network facilities, but also, and more strongly, by dynamic random demand, which makes the uncertainty of reliability itself more difficult to describe. In many megacities, temporary events such as frequent traffic accidents and road maintenance, as well as emergencies such as natural disasters, require rescue vehicles to arrive at the accident site quickly and in a timely manner under the premise of accessibility [4]. It can be seen that reliability differs from the general indicators used to describe the operating status of the urban transportation system: it integrates short-term and long-term indicators, it not only represents evaluation criteria but also has policy connotations, and it is a system performance indicator based on a broad


basis and belongs to a multidisciplinary research field. Therefore, in-depth discussions are needed from different perspectives such as transportation science, system science, computer science, economics, and behavioral science.

2.1 Construction of Evaluation Index System

The initially selected evaluation indicators should be screened appropriately, the primary and secondary indicators should be distinguished, and the main evaluation indicators should be selected to construct an evaluation indicator system. The selection of indicators is usually divided into three steps: establishing the selection principles for evaluation indicators, clarifying the influencing factors of reliability, and selecting and establishing the indicators.

Selection Principles of Evaluation Indicators. In actual comprehensive evaluation activities, the aim is not to include as many evaluation indicators as possible but to keep the indicator set as concise as possible. Therefore, when establishing an evaluation indicator system, the following principles should be followed:

(1) Systematic principle: the index system should fully reflect the essential characteristics and overall performance of the evaluation object, and the overall evaluation function of the index system should be greater than the simple sum of the sub-indicators.
(2) Consistency principle: the evaluation index system should be consistent with the evaluation objectives, so as to fully reflect the intention of the evaluation activities.
(3) Principle of independence: indicators at the same level should not have inclusive relationships, so that the indicators can reflect the actual situation of the system from different aspects.
(4) Principle of measurability: indicators should be measurable or estimable, and numerical values should be used as much as possible.
(5) Scientific principle: guided by scientific theory and based on the internal elements of the objective system and their essential connections, the indicators should correctly reflect the quantitative characteristics of the system as a whole and its internal interrelationships.
(6) Principle of comparability: the stronger the comparability of the index system used for the systematic evaluation, the greater the credibility of the evaluation results. In the standardization of indicators, a consistent direction of change should be maintained to ensure comparability between indicators [5].

Clarify the Influencing Factors of Reliability. According to the definition of the research object above, this subsection analyzes the components of the smart transportation system, laying the foundation for accurately grasping the urban traffic operation mechanism and identifying the reliability indicators that affect the smart transportation system. The smart transportation system is mainly composed of five parts: traffic participants, transportation modes, the road network, traffic management conditions and the traffic environment; among them, traffic participants and transportation modes are collectively referred to as road traffic flow.


The following analyzes the various components of the road transportation system in combination with the actual situation of the traffic systems in the central areas of China's large cities.

Traffic Participants. Traffic participants include pedestrians, drivers and passengers, and are one of the elements that make up the traffic flow in the central area of a large city. The dynamic and random spatiotemporal distribution characteristics of traffic flow in the central area are mainly due to the uncertainty of the total amount of travel and of the travel behavior of each traffic participant.

Mode of Transportation. Transportation modes include private cars, conventional buses, taxis and online car-hailing, and non-motorized vehicles, and are another major element of traffic flow. The complex and diverse modes of transportation in the central area, their operation modes, the travel proportions of the various modes and their operating status are mainly affected by the travel behavior of traffic participants and by the structure and scale of the road network infrastructure.

Road Network. The network of roads within a city is called the urban road network. The road network studied in this paper is the road network in the central area of the city, which is composed of the road sections and intersections in that area. Its formation and development are closely related to the city's politics, economy, culture, functional layout and land use scale.

Traffic Management Conditions. Traffic management conditions include signs and markings, intelligent transportation facilities and the corresponding control policies. The purpose of traffic management is to recognize and follow the inherent objective laws of road traffic flow and to use corresponding technical means, methods and measures to continuously improve the efficiency and quality of traffic management. This results in fewer delays, shorter running times, greater capacity, better order and lower operating costs, producing the best socio-economic, transportation and environmental benefits.

Traffic Environment. The road traffic environment includes the noise environment and the atmospheric environment. If noise and air pollution are within the acceptable range, the traffic system in the central area is running smoothly and the degree of vehicle exhaust emission and noise pollution is small, and vice versa. At present, the road traffic environment in the central areas of China's large cities is not optimistic: noise and air pollution are still serious, and the task of environmental governance remains severe.

Selection and Establishment of Indicators. Commonly used index selection methods include the Delphi method and the principal component analysis method. The Delphi method, also known as the expert prediction method, is used here. It distributes multiple rounds of questionnaires to experts in relevant fields to solicit opinions anonymously; these experts do not meet or hold focused discussions. Each round of distribution summarizes and revises the opinions of the


previous round of experts and then provides feedback, so that the experts' opinions gradually converge and the final research or prediction result is achieved [6]. This paper studies the reliability evaluation indices of the intelligent transportation system; it is necessary to collect and analyze the opinions of experts who have a certain understanding of and qualifications in the various aspects of urban transportation, so that the evaluation indices constructed are approved by most experts. In the implementation of the Delphi method, two parties are involved: the organizer of the prediction and the selected experts. It should first be noted that the questionnaires used in the Delphi method differ from ordinary ones: besides posing questions for the respondents to answer, they are also responsible for providing information to the respondents and serve as a tool for the experts to exchange ideas. The workflow of the Delphi method can be roughly divided into the following steps, in each of which the organizers and the experts have different tasks [7].
Step 1: Form an expert group. Clarify the research objectives and select experts and specialists according to the scope of knowledge required by the research project. The number of experts can be determined by the size of the research project and the breadth of its scope; generally about 8–20 people is appropriate.
Step 2: Present the questions to be consulted and the relevant requirements to all experts, attach all the background material on the issue, invite the experts to request any additional material they need, and have them reply in writing.
Step 3: Based on the material received and combined with their own knowledge and experience, each expert puts forward his or her own opinions and explains the basis and reasons for them.
Step 4: Summarize and organize the experts' first-round judgments and distribute them back to the experts, so that each expert can compare his or her opinions with those of the others and revise his or her opinions and judgments. The opinions can also be sorted by the organizer or commented on by invited experts of higher standing, and these comments are then distributed to the experts so that they can revise their opinions after consulting them.
Step 5: The experts adjust and revise their opinions according to the results of the first round of consultation and the related material, and give the basis and reasons for the revised opinions.
Step 6: Following the above steps, opinions are collected round by round and fed back to the experts; it usually takes three or four rounds. When giving feedback, only the various opinions are given, without naming the experts who expressed them. This process is repeated until no expert changes his or her opinion any further.
In the above steps, the summarizing and organizing of each round of expert opinions is a particularly important part. Based on the above survey and the score given by each expert, the positive coefficient, the authority coefficient and the coordination coefficient are calculated.
(1) Positive coefficient


The participating experts will cooperate seriously and rigorously with the whole investigation only if they have a high degree of attention to and interest in the research. The expert positivity coefficient reflects the degree of concern of the participants for the project and helps ensure the accuracy of the results. It is calculated as follows:

$A = \dfrac{a_1}{a_2}$ (1)

In the formula, A represents the positive coefficient, a1 represents the number of participating experts, and a2 represents the total number of experts.
(2) Authority coefficient
In addition to being highly motivated, the experts participating in this research should have a good understanding of the fields involved or have long served as industry representatives or leaders in related fields. In mathematical statistics, the expert authority coefficient is used to ensure the reliability of the results; it measures both the basis of the experts' judgment on each indicator and the experts' familiarity with each indicator. The basis of judgment is generally divided into four aspects, namely theoretical analysis, practical experience, understanding of peers at home and abroad, and intuition. Each aspect is further divided into three degrees of influence (large, medium and small) on the judgment of an indicator, and each degree is assigned a numerical value for quantification, as shown in Table 1.

Table 1. Judgment basis and quantification of its degree of influence

Judgment basis                            | Degree of influence on expert judgment
                                          | Big | Middle | Small
Theoretical analysis                      |  3  |   2    |   1
Practical experience                      |  5  |   4    |   3
Understanding of peers at home and abroad |  1  |   1    |   1
Intuition                                 |  1  |   1    |   1

To measure the familiarity of experts with the problem, six levels corresponding to six scores are used, as shown in Table 2. The formula for calculating the expert authority coefficient is as follows:

$B = \dfrac{b_1 + b_2}{2}$ (2)

In the formula, B is the authority coefficient of the expert, b1 is the judgment coefficient, and b2 is the familiarity coefficient.


Table 2. Coefficient of experts' familiarity with the problem

Familiarity         | Familiarity coefficient
Very familiar       | 1.0
Familiar            | 0.8
Relatively familiar | 0.6
Average             | 0.4
Less familiar       | 0.2
Very unfamiliar     | 0.0

Expert opinion coordination index
The degree of coordination of expert opinions measures the consistency of different experts' attitudes and opinions on the same issue. Usually, the higher the degree of agreement between different experts on the same issue, the closer the result is to the true level. The coordination coefficient is generally represented by the Kendall coefficient C, whose value ranges from 0 to 1; the closer C is to 1, the higher the experts' recognition of the index system and the more accurate the data results. C is calculated as follows:

$C = \dfrac{D_j}{E_j}$ (3)

wherein

$D_j = \sqrt{\dfrac{\sum_{i=1}^{m_j} \left(d_{ij} - E_j\right)^2}{m_j - 1}}$ (4)

$E_j = \dfrac{\sum_{i=1}^{m_j} d_{ij}}{m_j}$ (5)

In the formula, C is the degree of coordination of expert opinions; Ej is the arithmetic mean of the scores of evaluation indicator j, with values ranging from 0 to 100; mj is the number of experts participating in the evaluation of indicator j; dij is the score given by the i-th expert to evaluation indicator j; and Dj is the standard deviation of evaluation indicator j. Indicators whose calculated coefficients satisfy A ≥ 0.8, B ≥ 0.5 and C ≤ 0.8 are selected into the evaluation indicator system. According to the previous analysis, the reliability evaluation indicators of the transportation system are summarized into three categories, namely connectivity reliability indicators, travel time reliability indicators, and smooth-flow reliability indicators.
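To make the screening procedure concrete, the following is a minimal Python sketch of how the three coefficients could be computed from raw questionnaire data. The function names, sample scores and use of NumPy are illustrative assumptions rather than details from the paper; the selection thresholds simply restate the values quoted above.

```python
import numpy as np

def positive_coefficient(n_returned, n_invited):
    """Formula (1): A = a1 / a2, share of invited experts who actually replied."""
    return n_returned / n_invited

def authority_coefficient(judgment_basis, familiarity):
    """Formula (2): B = (b1 + b2) / 2 for one expert.
    judgment_basis: quantified judgment-basis score b1 (Table 1),
    familiarity: familiarity coefficient b2 (Table 2)."""
    return (judgment_basis + familiarity) / 2

def coordination_coefficient(scores):
    """Formulas (3)-(5) for one indicator j.
    scores: the m_j expert scores d_ij (0-100 scale).
    Returns C = D_j / E_j (sample standard deviation over the mean)."""
    scores = np.asarray(scores, dtype=float)
    e_j = scores.mean()              # formula (5)
    d_j = scores.std(ddof=1)         # formula (4), denominator m_j - 1
    return d_j / e_j                 # formula (3)

# Illustrative screening of one candidate indicator (hypothetical data).
scores = [78, 82, 85, 80, 76, 84, 81, 79]   # scores from 8 participating experts
A = positive_coefficient(8, 10)
B = authority_coefficient(0.8, 0.8)
C = coordination_coefficient(scores)
keep = (A >= 0.8) and (B >= 0.5) and (C <= 0.8)
print(f"A={A:.2f}, B={B:.2f}, C={C:.3f}, keep indicator: {keep}")
```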


2.2 Evaluation Index Weight Calculation

After the reliability evaluation index system of the smart transportation system has been constructed, a value must be assigned to each indicator while its meaning is clearly interpreted, that is, the weight of each indicator must be established. The indicator weight refers to the proportion of each evaluation indicator in the comprehensive evaluation, and it directly affects the results of the comprehensive evaluation. At present, the methods for determining indicator weights mainly include the subjective weighting method, the objective weighting method, and combined subjective-objective weighting [8]. In the subjective weighting method, experts determine the indicator weights according to their own experience and their subjective judgment of the actual situation; it includes the analytic hierarchy process (AHP), the fuzzy evaluation method, the binomial coefficient method, and so on. The subjective weighting method reflects the will of the decision makers. Because decision makers differ in their subjective perception of the importance of the various indicators, a certain degree of subjective arbitrariness is inevitable, and sometimes different variables are simply given the same weight; for example, the Human Development Index is the arithmetic mean of three indicators covering life expectancy, education, and per capita GDP. The premise of the subjective weighting method is that the experts must be very familiar with the research object; if this condition is not met for some reason, the evaluation results will be biased. The objective weighting method starts from an objective weighting criterion determined in advance, draws information from the sample of the research object, and calculates the weights by mathematical or statistical methods; that is, the weights are obtained by mathematically processing the original information of each indicator. Such methods include the coefficient of variation method, the entropy method, the factor analysis method, the principal component analysis method, and the multi-objective programming method. The objective weighting method excludes most subjective components, and the results are generally "neutral". However, the objective weighting method is always the optimal solution under a particular criterion, and if only its mathematical "optimality" is considered, unreasonable results may appear. In order to take into account both the decision makers' subjective knowledge of the importance of the indicators and the objective information carried by each indicator itself, combined subjective-objective weighting has become a trend. Based on optimization theory, it builds an optimization model of the comprehensive indicator weights and obtains the exact solution of the model, which to a certain extent remedies the shortcomings of a single weighting method. When combining subjective and objective weights, different integration criteria can be adopted according to different goals: the largest comprehensive evaluation target value, the largest deviation from the negative ideal, the smallest deviation of the decision result under subjective and objective weighting, or the smallest sum of squared deviations between the subjective and objective weights [9]. Based on the above analysis, the AHP method and the factor analysis method are used here to calculate the comprehensive weights of the indicators.
(1) Analytic hierarchy process


Subjective weight determination is based on the analytic hierarchy process (AHP), a widely used method for determining subjective weights. When the AHP is used to determine the subjective weights of the indicators, the hierarchical structure model linking the target layer and the criterion layer is first established, and the judgment matrices between the relevant factors at each level are constructed from the data and expert opinions according to the scaling method. Then the single-level ranking and the total ranking of each level are obtained, and a consistency test is carried out on the judgment matrix of each level. When the ratio CR of the consistency index CI to the average random consistency index RI is less than 0.1, the judgment matrix has satisfactory consistency. Finally, the maximum eigenvalue and the weight vector of the matrix are obtained by the sum-product method.
(2) Factor analysis method
Factor analysis is a method for simplifying data processing and problem analysis. By means of dimensionality reduction, it uses a few indicators to describe and express almost all of the information contained in the data, and these representative indicators are calculated by certain mathematical models. Moreover, in the subsequent analysis, the proportion of each main factor must be considered in order to calculate the comprehensive factor scores; the weights in the factor analysis method are also calculated with the help of mathematical models and SPSS. The disadvantage of artificially defining the proportion of the factors is therefore avoided and the interference of human factors is eliminated, so that the results of factor analysis are more objective and less distorted. The calculation process of the comprehensive weight of the indicators is shown in Fig. 1. The formula for determining the comprehensive weight is as follows:

$W_j = p V_{1j} + (1 - p) V_{2j}$ (6)

In the formula, Wj is the comprehensive weight of indicator j, V1j is the weight of indicator j obtained by the AHP, V2j is the weight of indicator j obtained by the factor analysis method, and p is the subjective preference coefficient. An objective function is established to minimize the sum of squared deviations between the combined weights and the subjective and objective weights:

$\min Y = \sum_{j=1}^{n} \left[ \left(W_j - V_{1j}\right)^2 + \left(W_j - V_{2j}\right)^2 \right]$ (7)

In the formula, min Y is the minimum sum of squared deviations between the combined weight and the subjective and objective weights. Substituting formula (6) into formula (7) and solving for the subjective preference coefficient p yields the complete formula for determining the comprehensive weight.
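The text states that p is obtained by substituting (6) into (7) and minimizing, but does not show the result; the following short derivation (an addition, not taken from the original) makes the solution explicit. Substituting (6) into (7) gives $W_j - V_{1j} = (1-p)(V_{2j} - V_{1j})$ and $W_j - V_{2j} = p(V_{1j} - V_{2j})$, so

$Y(p) = \left[(1-p)^2 + p^2\right] \sum_{j=1}^{n} \left(V_{1j} - V_{2j}\right)^2$

Setting $\dfrac{dY}{dp} = (4p - 2) \sum_{j=1}^{n} \left(V_{1j} - V_{2j}\right)^2 = 0$ yields $p = 1/2$ whenever the two weight vectors differ, so under this criterion the comprehensive weight reduces to the arithmetic mean $W_j = (V_{1j} + V_{2j})/2$; if the two weight vectors coincide, any value of p gives the same $W_j$.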

Fig. 1. The calculation process of the comprehensive weight of indicators
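As a complement to Fig. 1, the following is a small, hedged Python sketch of the subjective (AHP) side and of the final combination step, assuming a pairwise judgment matrix is already available. The sum-product approximation, the RI lookup table, the 3×3 example matrix and the factor-analysis weights are all illustrative assumptions rather than data from the paper.

```python
import numpy as np

# Average random consistency index values commonly used with the AHP (assumed table).
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

def ahp_weights(judgment):
    """Sum-product method: column-normalize, average the rows, then check CR < 0.1."""
    M = np.asarray(judgment, dtype=float)
    n = M.shape[0]
    w = (M / M.sum(axis=0)).mean(axis=1)      # approximate weight vector
    lam_max = float((M @ w / w).mean())       # approximate maximum eigenvalue
    CI = (lam_max - n) / (n - 1)
    CR = CI / RI[n] if RI[n] > 0 else 0.0
    if CR >= 0.1:
        raise ValueError(f"judgment matrix fails the consistency test (CR={CR:.3f})")
    return w

def comprehensive_weights(v1, v2, p=0.5):
    """Formula (6): W_j = p*V1_j + (1-p)*V2_j, with p = 0.5 from minimizing (7)."""
    return p * np.asarray(v1, dtype=float) + (1 - p) * np.asarray(v2, dtype=float)

# Hypothetical judgment matrix for three second-level indicators and hypothetical
# factor-analysis weights; both are only for illustration.
J = [[1.0, 2.0, 3.0],
     [0.5, 1.0, 2.0],
     [1/3, 0.5, 1.0]]
v1 = ahp_weights(J)
v2 = [0.30, 0.40, 0.30]
print(v1, comprehensive_weights(v1, v2))
```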

2.3 Evaluation Model Based on Deep Learning

Deep learning is a research direction that gradually took shape with the continuous development of artificial neural networks. As a machine learning technology that is still developing, deep learning has not yet been given a unified definition in academic circles. It is a data-driven modeling method that can be regarded as a machine learning method performing unsupervised or semi-supervised feature extraction through multi-layer nonlinear information processing units to achieve classification and prediction. Deep learning uses algorithms to parse labeled training samples and learn latent patterns from them; the complex relationships in the data are then modeled, and the model is used to analyze and predict new samples, so as to better complete the required tasks [10, 11]. The core idea of deep learning is to establish a multi-level learning process that simulates the human brain's analysis, learning and induction, and to use this brain-like mechanism to analyze data and solve problems.


The vigorous development of deep neural networks solves the problem that shallow network functions have insufficient representation ability. In addition, the deep neural network overcomes the gradient vanishing phenomenon that occurs in traditional neural networks as the number of network layers increases. Therefore, introducing deep learning theory into the reliability evaluation model of the intelligent transportation system is the next development direction. This chapter takes deep learning as the basic theory and the deep belief network (DBN) as the theoretical support of the evaluation model, and completes the reliability evaluation of the intelligent transportation system from the perspective of deep learning. Using deep learning to perform the evaluation task is essentially a process of learning and classification: by learning from a large amount of data containing feature information, the evaluation model is improved until it acquires the ability to classify accurately. From the perspective of deep learning, a complete deep belief network evaluation model includes a feature-extraction function and a classification function. The hidden layers for feature extraction are composed of multiple stacked restricted Boltzmann machines, and together with a feature-information processing layer with classification ability they constitute a complete evaluation model. Although the Boltzmann machine (BM) has advantages in data processing, it is time-consuming to train. Therefore, in order to retain its advantages while making up for this shortcoming, researchers proposed the Restricted Boltzmann Machine (RBM). The RBM is a two-layer network model consisting of a visible layer and a hidden layer. Compared with the BM, the biggest advantage of the RBM lies in its symmetrical network structure and in nodes being connected only between layers. This advantage is mainly manifested as follows: (1) since the neuron nodes in the hidden unit are independent of each other and connected only to the visible unit, the hidden unit can be determined from the values of the visible unit nodes, which improves the efficiency of calculating expected values; (2) similarly, since the nodes of the visible unit are also independent of each other, the states of the hidden unit nodes can be obtained from the input data through Markov chain sampling, so that independent expected values can be obtained. A DBN is composed of multiple RBMs stacked according to certain rules. In this model, the visible unit receives the input sample data, the hidden units extract feature information, and the label layer outputs the classification results. The number of visible nodes and the number of label-layer nodes are determined by the dimension of the input data and the number of classification categories, respectively. The evaluation model based on the DBN network is composed as follows:
Step 1: Input feature selection. The intelligent transportation system produces a large amount of indicator data during operation, and these data reflect the reliability of the system to some extent. For the specific selection process, refer to Sect. 2.1.
Step 2: Collect the required data according to the index system, in order to improve the comparability between different features and to prevent failures in the feature information from affecting the experimental results.
In this paper, the most commonly used normalization


method Min-Max is used to process the data; its formula is as follows:

$s' = \dfrac{s - s_{\min}}{s_{\max} - s_{\min}}$ (8)

In the formula, s' represents the new feature (indicator) information obtained by normalizing the input data, s represents the original feature information, smax represents the maximum value among all the selected feature information, and smin represents the minimum value. After Min-Max normalization, the values of the feature information are all mapped into the 0–1 range, which greatly reduces the differences between the features (indicators).
Step 3: Build 5 output nodes according to the evaluation classes, representing very high, high, general, low and very low reliability, encoded as 10000, 01000, 00100, 00010 and 00001 respectively. The reliability of the intelligent transportation system is judged on the basis of the reliability index, whose expression is:

$Q = \sum_{j=1}^{n} W_j \cdot s_j$ (9)

In the formula, Q represents the reliability index, Wj represents the comprehensive weight of indicator j, sj represents the (normalized) data of indicator j, and n represents the number of indicators. When Q > 1.0, the reliability of the intelligent transportation system is very high; when 0.8 < Q < 1.0, the reliability is high; when 0.6 < Q < 0.8, the reliability is general; when 0.4 < Q < 0.6, the reliability is low; and when 0.0 < Q < 0.4, the reliability is very low.
Step 4: The obtained simulation data is randomly divided into three parts in proportion: the unlabeled sample data for pre-training, the labeled sample data for parameter fine-tuning, and the test sample data for testing the evaluation model.
Step 5: Pre-training stage. The unlabeled training samples are input into the evaluation model, and the model is trained layer by layer starting from the bottom layer, with every two adjacent layers forming an RBM.
Step 6: Parameter fine-tuning stage. After pre-training, the labeled sample data is used as input, and the label information corresponding to each sample is used as the expected output of the model. Using a loss function and gradient descent, the parameters of the model are fine-tuned until the specified number of iterations is completed. After the above process, the reliability evaluation of the intelligent transportation system based on deep learning is complete.
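A minimal Python sketch of Steps 2 and 3 is given below, assuming the indicator data and the comprehensive weights are already available as arrays. The sample matrix and weights are invented for illustration, and the class boundaries simply restate the thresholds quoted above.

```python
import numpy as np

def min_max(s):
    """Formula (8): map each indicator column into [0, 1]."""
    s = np.asarray(s, dtype=float)
    return (s - s.min(axis=0)) / (s.max(axis=0) - s.min(axis=0))

def reliability_index(s_row, w):
    """Formula (9): Q = sum_j W_j * s_j for one sample."""
    return float(np.dot(w, s_row))

LEVELS = [  # (lower bound, one-hot label, class name)
    (1.0, [1, 0, 0, 0, 0], "very high"),
    (0.8, [0, 1, 0, 0, 0], "high"),
    (0.6, [0, 0, 1, 0, 0], "general"),
    (0.4, [0, 0, 0, 1, 0], "low"),
    (0.0, [0, 0, 0, 0, 1], "very low"),
]

def label(q):
    """Map a reliability index to the five-class one-hot target used by the DBN."""
    for bound, onehot, name in LEVELS:
        if q > bound or bound == 0.0:
            return onehot, name

# Hypothetical raw indicator matrix (rows = samples, columns = indicators) and weights.
X = np.array([[120.0, 0.7, 35.0], [90.0, 0.9, 20.0], [150.0, 0.5, 50.0]])
w = np.array([0.5, 0.3, 0.2])
Xn = min_max(X)
for row in Xn:
    q = reliability_index(row, w)
    print(q, label(q))
```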


3 Evaluation Method Testing and Analysis

3.1 Research Object and Background
In order to verify the validity of the reliability evaluation method of the intelligent transportation system based on deep learning, the main urban area of a city is selected as the test object. The main urban area includes 9 administrative regions and covers an area of 5,473 square kilometers; the urban construction land area is 595.03 square kilometers, an increase of 25.8 square kilometers, or 4.50%, over the same period of last year. Among them, urban road land accounts for 107.73 square kilometers, an increase of 7.1 square kilometers over the same period of last year. The resident population of the main urban area is 8.518 million, a year-on-year increase of 2.00%, making it one of the typical large cities. The city uses the smart transportation system for traffic scheduling and management in the 9 administrative regions, and the application performance of the evaluation method is tested in this setting.

3.2 Evaluation Index System
According to the score given by each expert, the positive coefficient, authority coefficient and coordination coefficient of each evaluation index are calculated, and the results are shown in Fig. 2.

Fig. 2. Selection of evaluation indicators (positive coefficient, authority coefficient and expert opinion coordination index of the candidate indicators; included versus non-selected indicators)

Based on Fig. 2, a total of 16 indicators were selected, and an evaluation indicator system was established as shown in Table 3.

Table 3. Evaluation index system

Level I indicators                               | Level II indicators             | Level III indicators
Reliability of intelligent transportation system | System travel time reliability  | Travel time index; Delay time index; Buffer time index; Tolerable time boundary
                                                 | Smooth system reliability       | Travel speed index; Road network unit saturation; Proportion of unimpeded mileage; Clear duration
                                                 | System connectivity reliability | Road network density; Non-linear coefficient; Proportion of broken roads; Road grading index

3.3 Comprehensive Weight of Evaluation Indicators The AHP method and factor analysis method were used to calculate the comprehensive weight of the indicators. Taking three areas as an example, the comprehensive weights of evaluation indicators are obtained as shown in Fig. 3 and Table 4. 3.4 Reliability Analysis Calculate the reliability index of the intelligent transportation system and judge its reliability. The results are shown in Fig. 4. It can be seen from Fig. 4 that the intelligent transportation system is applied in 9 different administrative regions, and the obtained reliability indices are all above 1.0, indicating that the reliability of the intelligent transportation system is high. Taking the above reliability index as the index, the reliability of the proposed method and the method of reference [3] is tested by five groups of experiments. The test results are shown in Fig. 5. It can be seen from Fig. 5 that the reliability of the method of reference [3] is about 0.8, while the reliability of the proposed method is always higher than 1.0, which proves that the proposed method has good evaluation effect.

Fig. 3. Comprehensive weights of secondary indicators (system travel time, smooth system and system connectivity reliability, Zones 1–3)

Table 4. Comprehensive weights of three-level indicators

Level III indicators            | Zone 1 | Zone 2 | Zone 3
Travel time index               | 2.53   | 1.21   | 2.52
Delay time index                | 1.824  | 0.87   | 1.54
Buffer time index               | 4.23   | 1.25   | 2.41
Tolerable time boundary         | 2.201  | 2.62   | 2.22
Travel speed index              | 1.52   | 2.88   | 2.88
Road network unit saturation    | 1.22   | 2.47   | 2.62
Proportion of unimpeded mileage | 5.21   | 2.32   | 4.74
Clear duration                  | 4.12   | 5.21   | 4.44
Road network density            | 4.47   | 4.12   | 3.65
Non-linear coefficient          | 6.32   | 2.55   | 2.54
Proportion of broken roads      | 6.85   | 2.36   | 4.12
Road grading index              | 5.23   | 3.33   | 2.55

Fig. 4. Reliability index (Zones 1–9)

Fig. 5. Reliability test (the proposed method versus the method of reference [3], five experimental groups)

4 Conclusion

The smart transportation system is the lifeblood of urban development and the core guarantee of the normal operation of the city. A stable, efficient and reliable smart transportation system is not only the basis on which travelers achieve their travel goals, but also the ultimate goal of urban traffic managers and planners. To this end, a reliability evaluation method of the intelligent transportation system based on deep learning is proposed. The following results are obtained: the intelligent transportation system is applied to different areas and its reliability is shown to be high, which proves the effectiveness of the method. The method can help to rationally allocate existing resources and effectively tap the potential of the road network, and it has considerable theoretical significance and application value for improving the level of urban traffic management and dealing with the hazards caused by sudden disasters and emergencies.


To further improve the accuracy of the system reliability analysis and evaluation results, future research should take into account the sensitivity of each indicator in reflecting the actual situation, the degree to which the indicators deviate from the actual situation in their long-term trends, and relevant theories and practices at home and abroad, so as to put forward more in-depth and meaningful suggestions for improving the reliability of the transportation system in the central areas of large cities.

References

1. Chen, B., Sun, D., Zhou, J., et al.: A future intelligent traffic system with mixed autonomous vehicles and human-driven vehicles. Inf. Sci. 529, 59–72 (2020)
2. Wang, T., Yao, L.: Distributed storage algorithm of big data for intelligent transportation system. Comput. Simul. 39(01), 138–142 (2022)
3. Roy, C., Misra, S.: Safe-Passé: dynamic handoff scheme for provisioning safety-as-a-service in 5G-enabled intelligent transportation system. IEEE Trans. Intell. Transp. Syst. 22(8), 5415–5425 (2021)
4. Bae, J.H., Kwon, Y.S., Yu, J., et al.: Korean medicine treatment to posterior cruciate ligament tear patients due to traffic accident: report of 2 cases. J. Korean Med. Rehabil. 31(3), 141–147 (2021)
5. Ding, Y., Dong, J., Yang, T., et al.: Failure evaluation of bridge deck based on parallel connection Bayesian network: analytical model. Materials 14(6), 1411 (2021)
6. Lima, J., Araújo, F.M.U.D.: Industrial semi-supervised dynamic soft-sensor modeling approach based on deep relevant representation learning. Sensors 21(10), 3430 (2021)
7. Guo, J., Wang, Q., Li, Y.: Evaluation-oriented façade defects detection using rule-based deep learning method. Autom. Constr. 131(12), 103910 (2021)
8. Duran-Lopez, L., Dominguez-Morales, J.P., Rios-Navarro, A., et al.: Performance evaluation of deep learning-based prostate cancer screening methods in histopathological images: measuring the impact of the model's complexity on its processing speed. Sensors 21(4), 1122 (2021)
9. Hou, X., Breier, J., Jap, D., et al.: Physical security of deep learning on edge devices: comprehensive evaluation of fault injection attack vectors. Microelectron. Reliab. 120(2), 114116 (2021)
10. Liu, Y., Sun, P., Wergeles, N., et al.: A survey and performance evaluation of deep learning methods for small object detection. Expert Syst. Appl. 172(4), 114602 (2021)
11. Adarme, M.O., Feitosa, R.Q., Happ, P., et al.: Evaluation of deep learning techniques for deforestation detection in the Brazilian Amazon and Cerrado biomes from remote sensing imagery. Remote Sensing 12(6), 910 (2020)

Forecasting Method of Power Consumption Information for Power Users Based on Cloud Computing

Chen Dai1(B), Yukun Xu1, Chao Jiang1, Jingrui Yan2, and Xiaowei Dong2

1 State Grid Shanghai Municipal Electric Power Company, Shanghai 200120, China

[email protected]
2 LongShine Technology Group Co., Ltd., Shanghai 201600, China

Abstract. In order to realize a real-time balance of power demand and effectively avoid the waste of electric energy, power consumption must be forecast. Against this background, a forecasting method of power consumption information for power users based on cloud computing is designed. The prediction model framework is designed on the basis of cloud computing technology. Abnormal data processing, missing data filling and normalization are carried out on the power consumption data. The correlation degree is calculated and the factors influencing the power consumption of power users are selected. Combined with multiple regression analysis, the forecasting model of the power consumption information of power users is constructed. The results show that the mean absolute percentage error (MAPE) and root mean square error (RMSE) of the method are the smallest and its equalization coefficient (EC) is the largest, which demonstrates the accuracy of the method.

Keywords: Cloud Computing · Power Consumption Information of Power Users · Multiple Regression Analysis · Prediction Method

1 Introduction

Compared with other energy sources, electric energy must be generated, transmitted and distributed in balance and is difficult to store on a large scale. In order to achieve a real-time balance of power demand and effectively avoid the waste of electric energy, short-term forecasting of power consumption is required. Short-term forecasting of power system energy can provide accurate data references for the market operation and market planning of the power system. However, short-term power consumption forecasting for a power system is a complex problem, mainly because: influenced by the types of power users, power consumption fluctuates considerably and shows obvious periodicity; and electricity consumption is affected by weather, holidays, economic conditions and other factors, so its volatility is large. Both overestimation and underestimation of electricity demand will affect the reasonable allocation and planning of electric energy. Therefore, from the perspective of the power market, the development of


a highly accurate power consumption forecasting method can not only provide accurate load demand data for power operators and power market participants, but also help power users arrange their electricity use reasonably and reduce power consumption to a large extent. With the continuous advance of electric power reform, breaking the monopoly position of the power-sales side and introducing a market-oriented competition mechanism is the new direction for the development of the power industry. To adapt to this trend, the accurate power demand of power users must be mastered in the process of power market transactions, which places higher requirements on the accuracy of short-term power consumption prediction. Many studies have shown that the power consumption of a region is closely related to the development of its local economy, for which it serves as a "barometer". As the most important secondary energy source, electricity accounts for a high proportion of terminal energy consumption. Considering that power resources cannot be stored, oversupply results in huge investment waste and energy consumption, which has a negative impact on economic development. Therefore, reliable power consumption prediction is crucial for formulating a correct energy development plan. Because power consumption is affected by various uncertain factors, it is difficult to predict accurately. To solve this problem, many methods have been used to build prediction models. These methods can be divided into three types: time series methods [1], statistical models [2] and machine learning methods [3]. The time series model is one of the most basic and widely used methods in power forecasting; it predicts the future trend from the historical series. When the number of observations is large enough, time series methods can achieve high prediction accuracy, but they depend too heavily on past data and lack interpretability. Statistical models have advantages over other models in terms of interpretability, simplicity and ease of deployment. Machine learning methods are more advanced methods for power problems and are suitable for more complex calculations. Their main disadvantage is that their internal operating mechanism is opaque, and satisfactory prediction accuracy can only be obtained with big data, which requires great effort in data compilation. Against this background, a forecasting method of power consumption information for power users based on cloud computing is proposed.

2 Prediction Model of Power Consumption Information for Power Users

2.1 Prediction Model Framework Design Based on Cloud Computing Technology
Cloud computing is an effective way of processing massive data that the industry has proposed in recent years [4]. Cloud computing mainly includes three levels: Infrastructure as a Service, Platform as a Service and Software as a Service. Generally, cloud computing has the following characteristics: resources are rapidly deployed or services obtained on the basis of virtualization technology; dynamic and scalable expansion is realized; resources are provided on demand and paid for according to usage; services are provided through the Internet for massive information processing; users can participate easily; it is flexible in form and can aggregate and disperse freely; it reduces the processing burden on user terminals; and it reduces


users' dependence on IT expertise. The core of cloud computing is Map/Reduce technology. The two steps of Map and Reduce are mainly used to process large-scale datasets in parallel. First, the Map function performs the specified operation on each element of the original data, which consists of many independent elements, without changing the original data; in this step, multiple new lists (key-value pairs) are created to store the Map results, so the Map operation is highly parallel. After the Map work is completed, the system shuffles and sorts the newly generated lists, and then reduces them, merging the elements in each list appropriately according to the key value. Since Google put forward the concepts of cloud computing and Map/Reduce, many companies have implemented them; the most famous and widely used implementation is Hadoop, the open-source version of Map/Reduce developed by the Yahoo team led by Doug Cutting, the father of Lucene, and now managed by Apache. In smart grid applications, the grid company has collected a large amount of power consumption information from users and stored it in a historical database as the historical data for power load forecasting. The power consumption information of a user forms a time series. Taking weather, policy and other factors into comprehensive consideration, the user's power load for the next day or hour can be predicted from the time series or with other techniques, and these data can be pushed to the user. The user can then adjust the forecast data according to his actual situation and return the adjusted data to the grid company, so as to ultimately improve the accuracy of the grid company's load forecasting and improve the efficiency of power generation. However, short-term power load forecasting for tens of millions of users and massive data is a very challenging task that would require a data processing center with strong computing power, and such a data center is expensive. Cloud computing can combine low-cost devices to meet the rapid processing requirements of large amounts of data. This paper therefore proposes a short-term load forecasting technology based on Map/Reduce. Following the Map/Reduce implementation process, the power consumption prediction is divided into the following steps, illustrated by the sketch below.
1) Preprocessing: preprocess the original data to improve the quality of the sampled data.
2) Map: select an appropriate mapping function, such as a remainder operation, to process data in parallel and map data with the same user ID to the same node.
3) Shuffle: each node sorts its data by time, by month, day and hour.
4) Reduce: the sorted data on each node forms a time series, from which a mature power load forecasting technique predicts the power load data of the next period.
5) Output result: obtain the power load forecast of each user for the next time period and push it to the user through the broadband network, so that the user can obtain these data in a timely manner.
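The following toy, single-process Python sketch illustrates steps 1)–5); a real deployment would run on Hadoop or a similar Map/Reduce framework, and the naive per-user forecaster here is only a placeholder for the "mature power load forecasting technology" mentioned in step 4. All records and names are illustrative assumptions.

```python
from collections import defaultdict

# Toy records after preprocessing (step 1): (user_id, timestamp "YYYY-MM-DD HH", kWh).
records = [
    ("u1", "2017-04-09 10", 1.8), ("u1", "2017-04-02 10", 1.6),
    ("u2", "2017-04-09 10", 0.9), ("u2", "2017-04-02 10", 1.1),
]

# Step 2 (Map): emit (user_id, (timestamp, load)); the same key goes to the same bucket.
buckets = defaultdict(list)
for user_id, ts, kwh in records:
    buckets[user_id].append((ts, kwh))

# Step 3 (Shuffle): each bucket sorts its records by time.
for series in buckets.values():
    series.sort()

# Step 4 (Reduce): run a per-user forecast on the ordered series.
def naive_forecast(series):
    # Placeholder forecaster: predict the next value as the mean of the series.
    return sum(value for _, value in series) / len(series)

# Step 5 (Output): per-user forecasts to be pushed back to the users.
forecasts = {user: naive_forecast(series) for user, series in buckets.items()}
print(forecasts)
```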
2.2 Power Consumption Data Preprocessing
With the promotion of smart grid construction, traditional manual meter reading has been replaced by smart meters, which has greatly improved the accuracy of power consumption records. However, because there are certain risks in the operation of power grid equipment, special circumstances or emergencies may cause electricity data to be lost or to change abruptly. Therefore, when forecasting power consumption, it is
first necessary to preprocess the collected original data. The purpose of preprocessing is to improve the quality of the sampled data and to reasonably predict and fill the missing data, so as to discover the real power consumption law and improve the prediction accuracy [5]. At the same time, because many factors affect power consumption, there may be correlation and redundant information between the factors, which increases the network size of the prediction model and the computation time, thus reducing the prediction accuracy of the model. Therefore, the accuracy of the prediction model can be improved by extracting features from the correlated and redundant information of the related factors with intelligent algorithms.

2.2.1 Abnormal Data Processing
During long-term stable operation of the power system there is no sudden change in the system's power consumption, and the historical power consumption data form a continuous and stable time series, so the difference between the power consumption at a specific time and the power consumption at adjacent times is not large [6]. When the change in electricity consumption at a certain time is too large, it needs to be corrected by the horizontal processing method. First, formula (1) is used to identify an abnormal value:

$\max\left[\,\left|S(T,t) - S(T,t-1)\right|,\ \left|S(T,t) - S(T,t+1)\right|\,\right] > A(t)$ (1)

Then formula (2) is used to correct the abnormal value:

$S(T,t) = \dfrac{S(T,t-1) + S(T,t+1)}{2}$ (2)

where A(t) is the threshold value, t is the sampling point, S(T,t+1) is the power consumption at time t+1 on day T, S(T,t) is the power consumption at time t on day T, and S(T,t−1) is the power consumption at time t−1 on day T.
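A small Python sketch of this horizontal processing rule is shown below, assuming one day's load curve is stored as a list and the threshold A(t) is taken as a constant for simplicity; the sample curve is invented for illustration.

```python
def smooth_spikes(load, threshold):
    """Replace S(T,t) by the mean of its neighbours whenever formula (1) fires."""
    cleaned = list(load)
    for t in range(1, len(cleaned) - 1):
        jump = max(abs(cleaned[t] - cleaned[t - 1]),
                   abs(cleaned[t] - cleaned[t + 1]))
        if jump > threshold:                                    # formula (1)
            cleaned[t] = (cleaned[t - 1] + cleaned[t + 1]) / 2  # formula (2)
    return cleaned

day = [2.1, 2.0, 9.5, 2.2, 2.3, 2.2]   # hypothetical hourly curve with one spike
print(smooth_spikes(day, threshold=3.0))
```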

2.2.2 Defect Data Completion
In actual power consumption big data, data loss often occurs due to storage or human factors, and the rapid outlier detection described in the previous section also introduces missing points. These defects make modeling difficult, increase the error of the calculation results, and can even cause the analysis program to crash. This section discusses how to effectively complete the missing data in power consumption big data [7]. Here, a KNN filling algorithm is selected to complete the missing values. The KNN-based data filling algorithm includes three main stages: missing point search, missing point calculation and missing point recursion. First, the number and locations of missing points in the original data set are found; then the KNN algorithm is used to find the k nearest neighbors of each missing point in turn, the distances between the missing point and its k nearest neighbors are calculated, and the filling value of the missing point is obtained by giving each neighbor a weight inversely proportional to its distance. After each missing point is calculated, update the original


data set for the next round of calculation, until there are no missing values. Next, the difference between the current filling value vector and the filling value vector recorded in the previous round is calculated; if the difference is less than the predefined tolerance, the recursion stops, otherwise the missing point calculation is repeated until the condition is met. However, the KNN filling algorithm uses all the nearest neighbors of the target data (the data records with missing items) to fill the target data, ignoring the possibility that the k nearest neighbors of a target data record may contain noise [8]. Therefore, the accuracy of KNN filling depends to a large extent on the quality of the original data. Noise always exists in real-world data, and when the k nearest neighbors contain noise, filling the missing data with the KNN algorithm produces large deviations. To solve this problem, the ENN-KNN missing data filling algorithm used in this paper effectively eliminates the impact of such noise on the filling results by comparing the true nearest-neighbor degrees among the nearest neighbors of the target data records, thus improving the filling accuracy [9]. The ENN-KNN algorithm obtains the k nearest neighbors of the target data in the same way as the KNN algorithm, i.e., by comparing the Euclidean distances between the target data and the complete-value data. In this paper, P(xi) denotes the nearest-neighbor set of the target data xi. After the nearest neighbors of the target data xi are obtained, the same method is used to find the nearest neighbors of each of these neighbors; PP(xi), i = 1, 2, ..., k, denotes the nearest-neighbor set of the i-th nearest neighbor in P(xi). The specific process is as follows:
1) Initialize the data and construct the complete-value data matrix.
2) Calculate the Euclidean distance di(xi, X) between the target data and all data records in the complete-value data matrix, as shown in formula (3):

$d_i(x_i, X) = \sqrt{(x_i - X)^{T}(x_i - X)}$ (3)

where xi represents the target data and X stands for the complete-value data matrix.
3) Select the k data records with the minimum Euclidean distance as the k nearest neighbors of the target data, and store their positions in the complete-value data matrix in the array B.
4) Select from the complete-value data matrix the k data records with the minimum Euclidean distance to each nearest neighbor of the target data, and store their positions in the two-dimensional array B̂.
5) Initialize the neighbor importance QKK(xi) of each nearest neighbor of the target data.
6) Calculate the importance of each data record in the complete-value data matrix, as in formula (4):

$q_i = \sum_{i=1,\ i \in KK(x_i)}^{k} \dfrac{Q_{KK(x_i)}}{k}$ (4)

where, qi represents the importance of the i-th complete value data record in the data set.


7) Calculate the true nearest-neighbor degree of the k nearest neighbors of the target data xi, as shown in formula (5):

$\hat{q}_i = \sum_{i \in KK(x_i)} q_i$ (5)

where q̂i represents the true nearest-neighbor degree of the i-th nearest neighbor of the target data.
8) Remove the noise nearest neighbors of the target data. The ENN-KNN missing data filling algorithm identifies noise neighbors on the basis of their true nearest-neighbor degree [10]. The judgment criterion is shown in formula (6):

$Z_i = \begin{cases} 1, & \hat{q}_i < \alpha \bar{q} \\ 0, & \text{otherwise} \end{cases}$ (6)

In the formula, q̄ represents the average true nearest-neighbor degree, and Zi represents the noise judgment result for the i-th nearest neighbor of the target data record: when the result is 1, this nearest neighbor is a noise neighbor of the target data; when the result is 0, it is a non-noise neighbor. α is the elimination coefficient, with value range (0, 1). When α → 0, the ENN-KNN filling algorithm is equivalent to the KNN filling algorithm; when α → 1, it is equivalent to filling the target data with the nearest neighbor that has the greatest true nearest-neighbor degree.
9) According to Zi, calculate the nearest-neighbor weights wi of the target data.
10) Estimate the value of the missing data and fill it, as shown in formula (7):

$\hat{x}_i = \sum_{i=1}^{k} w_i c_i$ (7)

In the formula, ci represents the value at the corresponding position of the i-th nearest neighbor of the target data xi.
11) Repeat steps 2 to 10 until there is no missing data in the dataset.
2.2.3 Normalization of Power Consumption Data
When the historical power consumption data are processed, the factors that influence power consumption also have to be used in the prediction process, and the units of power consumption and of the various influencing factors differ. The relevant variables therefore need to be normalized before prediction and converted into dimensionless data [11], so that the unit differences have little impact on the model's prediction. Accordingly, in the process of power consumption data processing, all variables are normalized so that each influencing factor lies within the same numerical range. The normalization of the power consumption data is given by formula (8):

$\hat{x}_i = \dfrac{x_i - x_{\min}}{x_{\max} - x_{\min}}$ (8)


where xi is the power consumption data to be normalized, x̂i represents the data obtained after normalization, xmax is the maximum of the sample data used, and xmin is the minimum of the sample data used. After the above steps, the user power data has been cleaned and max-min normalized, and it becomes a complete and comparable time series without outliers.

2.3 Selection of Influencing Factors for Power Consumption of Power Users
With the continuous improvement of living standards, people's preferences regarding electricity are also changing. According to the literature, previous studies have involved three main factors affecting power consumption: economy, temperature and industrial structure. Residential power consumption, however, has its own particularities. For example, the industrial structure represents the production status of the primary, secondary and tertiary industries and reflects production power consumption more than residential power consumption, whereas household electricity consumption is more sensitive to electricity price reform. Through summary and induction, the influencing factors and corresponding indicators are given, and then grey correlation analysis between the influencing factors and power consumption is carried out to obtain the factors most strongly correlated with power consumption, providing a basis for subsequent modeling [12]. Grey correlation analysis is an important part of grey system theory and is widely used in education, economics, agriculture and other fields. The degree of correlation between factors is judged by the similarity of the development trends of the research objects, that is, by the "grey correlation degree". The correlation degree is calculated by formula (9):

$\psi_{0i} = \dfrac{\sum_{k=1}^{m} \lambda_i(k)}{m}, \quad k = 1, 2, \ldots, m$ (9)

where λi(k) is the correlation coefficient between the comparison sequence and the reference sequence at each point, calculated from their absolute differences, and m is the number of points compared.
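A short Python sketch of this grey relational screening is given below. The paper only defines the grade as the mean of the coefficients λi(k), so the coefficient itself is computed here with the commonly used Deng formula (resolution coefficient ρ = 0.5) together with mean normalization, both of which are assumptions; the monthly series are invented for illustration.

```python
import numpy as np

def grey_relational_grade(reference, factor, rho=0.5):
    """Grey relational grade psi_0i of one influencing factor against the reference
    sequence (formula (9)); lambda_i(k) uses the standard Deng coefficient, which
    the paper does not spell out."""
    x0 = np.asarray(reference, dtype=float)
    xi = np.asarray(factor, dtype=float)
    x0, xi = x0 / x0.mean(), xi / xi.mean()   # dimensionless form so units cancel
    delta = np.abs(x0 - xi)
    lam = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())
    return lam.mean()                         # formula (9): average of lambda_i(k)

# Hypothetical monthly series: electricity use vs. average temperature and GDP.
power = [320, 340, 410, 520, 610, 640]
temperature = [8, 12, 18, 25, 29, 31]
gdp = [100, 102, 105, 107, 110, 113]
for name, series in [("temperature", temperature), ("GDP", gdp)]:
    print(name, round(grey_relational_grade(power, series), 3))
```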

Residential electricity consumption is mainly affected by climate, economy and industrial characteristics. Based on the above correlation analysis, seven influencing factors are selected to form an indicator system (see Table 1), and the relevant data are collected and organized. The residential electricity consumption index system in Table 1 covers climate, population, living area, economy, total energy consumption, electricity price, consumption habits, etc. A more detailed analysis follows.
(1) Influence of climate on electricity consumption
As far as climate is concerned, some studies show that the annual average temperature has little impact on power consumption, probably because the fluctuation of the national average annual temperature is relatively small within the studied time interval, so the effect of season and geographical climate on national power consumption cannot be observed; but this does not negate the impact

Forecasting Method of Power Consumption Information

311

Table 1. Residential electricity consumption index system Dependent variable

Independent variable

Customer power consumption (10000 DWh)

Average temperature (°C) Total population (10000 persons) Permanent population (10000 persons) Residential area (m2) Per capita disposable income (yuan) GDP (100 million yuan) Total energy consumption (100 million tons of coal)

of season and temperature on power consumption. Since the research in this paper is a medium- and long-term power load forecast on a monthly basis, temperature has a considerable impact on power consumption, for example through cooling in summer and heating in winter. This is especially true in Chengdu, where there is no central heating and electric heaters are mainly used for heating in winter; at the same time, high temperatures prompt families to turn on air conditioners, electric fans and more refrigerators, leading to an increase in power consumption. Therefore, the monthly average temperature in Chengdu is taken as the first influencing factor index and recorded as R1.
(2) Impact of economy on electricity consumption
Population, living area, GDP and so on directly or indirectly reflect the economic development level of a region, and according to the literature the economy also has a great impact on power consumption. Specifically, the impact of population on electricity consumption is easy to imagine: the larger the regional population, the greater the electricity consumption it naturally brings, so population should have a strong positive correlation with residential electricity consumption. The total population and the permanent population of Chengdu are taken as influencing factor indicators and recorded as R2 and R3. The living area, in fact, indirectly reflects the population. From a common-sense point of view, residential electricity use is more sensitive to electricity prices: a rise in electricity prices raises people's awareness of saving electricity in daily life, but often has little impact on industrial production. Whether this is borne out depends, of course, on the results of the grey correlation analysis. The per capita residential building area is taken as an influencing factor indicator and recorded as R4. The level of regional economic development greatly affects the power consumption of urban residents, and the per capita disposable income of urban residents and the gross regional product reflect this level. Here, the per capita disposable income of urban residents and the annual gross regional product of Chengdu are included in the influencing factor indicators and recorded as R5 and R6 respectively.


(3) Impact of industrial characteristics on power consumption
For the electricity industry, everything from total macro energy consumption to the micro-level electricity price, as well as the energy consumption habits of the end consumers themselves, affects the electricity consumption behavior of residents. The impact of total energy consumption on electricity consumption should be a positive correlation, since it stands to reason that an increase in total energy consumption drives residents' use of electricity. The total energy consumption in Chengdu is taken as an influencing factor indicator and recorded as R7.

2.4 Prediction Model Construction
For the long-term development of a region, power system planning, design and dispatching all affect its development, so it is necessary to forecast the medium- and long-term power consumption. With the great changes in power demand, new load characteristics have emerged. To effectively improve the accuracy of the medium- and long-term power consumption forecast, the relevant factors must be fully considered and the load characteristics accurately grasped. Based on the main factors affecting power demand and the load characteristics analyzed above, this paper constructs a single mathematical model to predict the medium- and long-term power consumption of the city in accordance with the four main factors that affect power demand, namely the macroeconomic trend, industrial structure adjustment, the development of key power-consuming industries, and climate conditions. The time span here is five to ten years. Regression-based power consumption prediction uses the regression analysis method to process the historical power consumption data, analyze the relationships between the variables to obtain a definite curve, establish a mathematical model relating power consumption to the relevant variables, and extend the curve to a certain time to obtain the forecast value at that time, thus achieving the purpose of prediction. In short, the mathematical model is used to predict future power consumption. The factors affecting electricity consumption are random, so they are taken as the independent variables and electricity consumption as the dependent variable [13]. The dependent variable changes with the independent variables: the independent variables are the cause and the dependent variable is the effect, and this relationship cannot be reversed. The power consumption forecasting problem belongs to exactly this kind of problem; that is, the fitted curve can only be a fit of the random variables and cannot, in turn, be used to describe randomly changing power consumption. The fitted curve can be established either by straight-line fitting or by curve fitting; the former is called linear regression and the latter nonlinear regression. Since there are many influencing factors (independent variables) of user electricity consumption, a multiple regression model must be established. Multiple linear regression studies whether two or more independent variables and a dependent variable have an interdependent linear relationship; this relationship is usually expressed by a multiple regression equation, which depicts a linear regression model between a dependent variable and two or more independent variables.
In this model, the dependent variable is a linear function of multiple independent variables R1, R2, ..., RM and an error term, as in Formula (10):

L = g0 + g1 R1 + g2 R2 + ... + gM RM + E   (10)

where L is the dependent variable (electricity consumption); M is the number of independent variables (influencing factors); g0, g1, ..., gM are the regression coefficients; and E is the random error term. In practical application, if n groups of observation data Rj1, Rj2, ..., RjM (j = 1, 2, ..., n) are obtained, the model takes the form of Formula (11):

Lj = g0 + g1 Rj1 + g2 Rj2 + ... + gM RjM + Ej   (11)

The corresponding matrix expression is Formula (12):

L = Rg + E   (12)

where L is the vector of observed values of the dependent variable (observed electricity consumption); R is the matrix of observed values of the independent variables (power consumption influencing factors); and g is the vector of population regression parameters. To facilitate parameter estimation, the following basic assumptions are made for the equation:
(1) The independent variables R1, R2, ..., RM are deterministic and R is a full-rank matrix;
(2) The random error term has zero mean and equal variance, i.e. it satisfies the Gauss-Markov conditions;
(3) The random error term obeys a normal distribution.
The regression analysis steps for multiple variables are as follows (a worked sketch is given after this list):
Step 1: Draw a scatter chart and observe its distribution characteristics;
Step 2: Carry out the corresponding variable transformation according to the selected function;
Step 3: Establish a linear regression model for the transformed data;
Step 4: Use the least squares method to solve the equations of the power consumption prediction model and obtain the estimates of the multiple linear regression coefficients;
Step 5: Test the parameters of the linear regression model; the tests include the R test (goodness of fit), the t test and the F test.
Finally, according to the regression equation obtained, the values of the independent variables (the influencing factors) are substituted directly to obtain the value of the dependent variable, i.e. the forecast power consumption.
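As a minimal illustration of Steps 3 and 4, the following Python sketch fits a multiple linear regression of this form by ordinary least squares and produces a forecast. The data values, variable names and number of factors are purely illustrative assumptions and are not taken from the paper.

```python
import numpy as np

# Illustrative observation matrix: each row is one period, columns are the
# influencing factors R1..RM (e.g. macroeconomic index, industrial structure,
# key-industry output, temperature). Values are made up for this sketch.
R = np.array([
    [1.02, 0.31, 5.4, 16.2],
    [1.05, 0.33, 5.6, 17.0],
    [1.09, 0.35, 5.9, 18.1],
    [1.12, 0.36, 6.1, 19.5],
    [1.15, 0.38, 6.4, 20.3],
    [1.17, 0.38, 6.5, 20.8],
])
L = np.array([320.0, 331.0, 345.0, 358.0, 370.0, 376.0])  # observed consumption

# Add the constant column so the intercept g0 is estimated together with g1..gM.
X = np.column_stack([np.ones(len(R)), R])

# Ordinary least squares estimate of g = (g0, g1, ..., gM) in L = R g + E.
g, residuals, rank, _ = np.linalg.lstsq(X, L, rcond=None)
print("estimated coefficients:", g)

# Forecast: substitute a new vector of influencing-factor values into the model.
r_new = np.array([1.18, 0.39, 6.6, 21.0])
forecast = g[0] + r_new @ g[1:]
print("forecast power consumption:", forecast)
```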

3 Method Validation

3.1 Power Consumption Data

The data used in this paper are the daily hourly electricity consumption data of California from February 6, 2017 to April 16, 2017. The data set has 24 observations per day, each group of observations covering 2 GB of data, giving 480 GB of observations over the 10 weeks. The data of the 9 weeks from February 6, 2017 to April 9, 2017 are used to forecast the average hourly power consumption from April 10, 2017 to April 16, 2017. The fluctuation of the original power consumption data is shown in Fig. 1.

Fig. 1. Trend Chart of Data Power Consumption in 9 Weeks (power consumption in kW against time in weeks, weeks 1 to 9)

It can be seen from Fig. 1 that the original series is non-stationary and nonlinear, and its seasonal effect is very obvious.

3.2 Model Evaluation Criteria

To judge the prediction performance of each model, the indicators used in this paper to evaluate the quality of the prediction models are the mean absolute percentage error (MAPE), the root mean square error (RMSE) and the equalization coefficient (EC). In general, smaller MAPE and RMSE values indicate a better prediction model. EC indicates the consistency between the measured and predicted values; the larger its value, the better the prediction effect of the model, and the prediction is considered meaningful when EC > 0.9. The indicators are defined as follows:

(1) Mean absolute percentage error (MAPE), as shown in Formula (13):

MAPE = \frac{1}{n}\sum_{i=1}^{n}\left|\frac{\beta_i - \hat{\beta}_i}{\beta_i}\right|   (13)

(2) Root mean square error (RMSE), as shown in Formula (14):

RMSE = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(\beta_i - \hat{\beta}_i\right)^2}   (14)


(3) Equalization coefficient (EC), as shown in Formula (15):

EC = 1 - \frac{\sqrt{\sum_{i=1}^{n}\left(\beta_i - \hat{\beta}_i\right)^2}}{\sqrt{\sum_{i=1}^{n}\beta_i^2} + \sqrt{\sum_{i=1}^{n}\hat{\beta}_i^2}}   (15)

where βi is the observed value of the power consumption data, βˆi is the predicted value, and n is the number of load prediction results. The above three evaluation indexes are used to evaluate the prediction performance of the model.

3.3 Prediction Results

The established linear regression model is used to predict the average hourly power consumption in the 10th week, and the results are shown in Fig. 2.

Fig. 2. Power Consumption Forecast Results (forecast power consumption in kW against time in days, days 1 to 7)

3.4 Prediction Accuracy Comparison

Using the same data, the proposed method, the time series method of reference [1], the statistical model of reference [2] and the machine learning method of reference [3] are applied to predict the average hourly power consumption in the 10th week, and the mean absolute percentage error (MAPE), root mean square error (RMSE) and equalization coefficient (EC) between the predicted and actual results are then calculated, as shown in Table 2.

Table 2. Comparison of Forecast Accuracy

Method                    MAPE     RMSE      EC
Proposed method           1.3622   21.3255   0.9987
Time series method        2.6247   84.2521   0.9421
Statistical model         2.0124   63.1425   0.9632
Machine learning method   2.8742   72.2725   0.9425

It can be seen from Table 2 that, compared with the time series method, the statistical model and the machine learning method, the proposed method achieves a smaller mean absolute percentage error (MAPE) and root mean square error (RMSE) and a higher equalization coefficient (EC), which demonstrates the higher prediction accuracy of this method.
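To make the evaluation indexes of Sect. 3.2 concrete, the following Python sketch computes MAPE, RMSE and EC from arrays of observed and predicted values; the array contents and names are illustrative only and are not taken from the paper's data.

```python
import numpy as np

def mape(actual, predicted):
    """Mean absolute percentage error, Formula (13)."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return np.mean(np.abs((actual - predicted) / actual))

def rmse(actual, predicted):
    """Root mean square error, Formula (14)."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return np.sqrt(np.mean((actual - predicted) ** 2))

def ec(actual, predicted):
    """Equalization coefficient, Formula (15); values above 0.9 are considered meaningful."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    num = np.sqrt(np.sum((actual - predicted) ** 2))
    den = np.sqrt(np.sum(actual ** 2)) + np.sqrt(np.sum(predicted ** 2))
    return 1.0 - num / den

# Illustrative hourly consumption values for part of one forecast day (kW).
actual = np.array([420.0, 415.0, 430.0, 460.0, 510.0, 540.0])
predicted = np.array([428.0, 410.0, 426.0, 470.0, 505.0, 548.0])
print(mape(actual, predicted), rmse(actual, predicted), ec(actual, predicted))
```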

4 Conclusion

In order to improve the applicability of power consumption information prediction for power users, this paper proposes a cloud computing based prediction method. A power consumption information prediction model for power users is constructed on the basis of cloud computing technology, and the influencing factors of power consumption are analyzed. The experimental results show that the proposed method has good prediction accuracy.

References

1. Saoud, L.S., Ghorbani, R.: Metacognitive octonion-valued neural networks as they relate to time series analysis. IEEE Trans. Neural Netw. Learn. Syst. 31(2), 539–548 (2020)
2. Sathishkumar, V.E., Shin, C., Cho, Y.: Efficient energy consumption prediction model for a data analytic-enabled industry building in a smart city. Build. Res. Inform. 49(1), 127–143 (2021)
3. Geetha, R., Ramyadevi, K., Balasubramanian, M.: Prediction of domestic power peak demand and consumption using supervised machine learning with smart meter dataset. Multimed. Tools Appl. 80(13), 19675–19693 (2021)
4. Praveenchandar, J., Tamilarasi, A.: Dynamic resource allocation with optimized task scheduling and improved power management in cloud computing. J. Ambient. Intell. Humaniz. Comput. 12(3), 4147–4159 (2021)
5. Liang, Y., Hu, Z., Li, K.: Power consumption model based on feature selection and deep learning in cloud computing scenarios. IET Commun. 14(10), 1610–1618 (2020)
6. Ding, S., Li, R., Wu, S., et al.: Application of a novel structure-adaptative grey model with adjustable time power item for nuclear energy consumption forecasting. Appl. Energy 298(3), 117114 (2021)
7. López, M., Valero, S., Sans, C., et al.: Use of available daylight to improve short-term load forecasting accuracy. Energies 14(1), 95 (2020)
8. Kim, Y., Kim, S.: Forecasting charging demand of electric vehicles using time-series models. Energies 14(5), 1487 (2021)
9. Wu, X., Dou, C., Yue, D.: Electricity load forecast considering search engine indices. Electr. Power Syst. Res. 199(11), 107398 (2021)
10. Ren, C., Xu, Y., Dai, B., Zhang, R.: An integrated transfer learning method for power system dynamic security assessment of unlearned faults with missing data. IEEE Trans. Power Syst. 36(5), 4856–4859 (2021)
11. Narimani, M.R., Molzahn, D.K., Crow, M.L.: Tightening QC relaxations of AC optimal power flow problems via complex per unit normalization. IEEE Trans. Power Syst. 36(1), 281–291 (2021)
12. Ahn, H.K., Park, N.: Deep RNN-based photovoltaic power short-term forecast using power IoT sensors. Energies 14(2), 436 (2021)
13. Wu, W., Song, Y., Wei, S.: Research on power consumption forecasting based on improved prophet model. Comput. Simul. 38(11), 473–478 (2021)

Power Consumption Behavior Analysis Method Based on Improved Clustering Algorithm of Big Data Technology

Zheng Zhu1(B), Haibin Chen1, Shuang Xiao1, Jingrui Yan2, and Lei Wu2

1 Electric Power Research Institute, State Grid Shanghai Municipal Electric Power Company, Shanghai 200120, China
[email protected]
2 LongShine Technology Group Co., Ltd., Shanghai 201600, China

Abstract. With the increase in the types and quantities of electrical equipment, the volume of user power consumption data has grown exponentially, and deep mining and analysis of these data is key to helping the power grid understand customer needs. Therefore, research on a user power consumption behavior analysis method based on an improved clustering algorithm and big data technology is proposed. The influencing factors of users' electricity use behavior (economic, time, climate and other factors) are analyzed; on this basis, the characteristics of users' electricity load are explored, the user electricity data samples are reduced with the PCA algorithm, the clustering algorithm is improved using the extreme learning machine principle from big data technology to cluster users' electricity use behavior, and the GSP algorithm is used to mine sequential patterns of users' electricity use behavior and obtain the users' electricity use rule model, thus realizing the analysis of users' electricity consumption behavior. The experimental data show that the minimum MIA index obtained with the proposed method is 0.09 and the maximum accuracy of the user power consumption law model is 92%, which fully confirms that the proposed method has better application performance.

Keywords: Big Data Technology · Power Consumption Data · Power Consumption Behavior of Users · Improvement of Clustering Algorithm · Behavior Analysis

1 Introduction

There are various types of power big data, and the power consumption data of power users is one of the fastest growing data sources [1]. With the gradual elimination of traditional mechanical meters and the popularization of intelligent terminals such as smart meters, power grid enterprises have accumulated a large amount of user electricity data. For a long time, users' electricity consumption data were ignored and used only for the measurement of electricity bills [2]. With the gradual maturity of data mining technology, researchers in power grid enterprises and universities have begun to pay attention to the potential value of user electricity data. The analysis of users' electricity consumption behavior based on their daily load curves has gradually become a popular application scenario of power big data mining, so this paper proposes a user electricity consumption behavior analysis method based on big data technology and an improved clustering algorithm. After analyzing the influencing factors of users' electricity consumption behavior, the user electricity consumption data samples are reduced with the PCA algorithm; the improved clustering algorithm, built on big data technology, is then used to mine the sequential patterns of users' electricity consumption behavior, obtain the laws of users' electricity consumption, and realize the analysis of users' electricity consumption behavior.

2 Research on Analysis Method of User's Electricity Consumption Behavior

2.1 Analysis on Influencing Factors of Users' Electricity Consumption Behavior

In order to improve the accuracy of the analysis of users' electricity consumption behavior, the first step is to analyze its influencing factors, so as to lay a solid foundation for the subsequent research on users' electricity load characteristics. The electric load of power users is composed of industrial load and residential load, and its characteristics are affected by many factors, each to a different degree. The main factors include economic factors, time factors, climate factors and other factors, as described below:

(1) Economic factors. Generally speaking, economic development determines the growth rate of power load, and the improvement of the economic level is accompanied by an increase in electricity consumption, because electricity is an indispensable energy source for social development today. The economic output value of most industrial users is in direct proportion to their use of electric energy. For residential users, household appliances have become widespread as the level of economic development has improved, so the power consumption of ordinary residential users has also shown year-on-year growth along with local economic development.

(2) Time factors. The influence of the time factor on users' electricity consumption behavior has a certain regularity. Here, the time factor is divided into a holiday factor and a weekend factor. The holiday factor refers to the impact of national statutory holidays such as the Spring Festival, May Day, National Day and the Dragon Boat Festival on users' electricity use behavior. During holidays, especially during the Spring Festival and other major festivals in China, a certain amount of industrial load is shut down or maintained only at a low constant level; no high-energy-consumption industrial operation is carried out, and only the online operation of equipment is

maintained. Therefore, the industrial load decreases significantly during holidays [3]. For residential and commercial users, the evening peak of electricity consumption generally rises during holidays, the difference between peak and valley is no longer obvious, and their electricity load also changes greatly. The weekend factor refers to the difference between a user's electricity use behavior on weekdays and at weekends within a week. Because users organize their work around the week as a time unit most of the time, this effect is cyclical, and because different types of users have different electricity demands, their weekday and weekend electricity use behavior shows individual differences.

(3) Climatic factors. Climate is also one of the main factors affecting users' electricity consumption behavior. Generally speaking, climate factors include air temperature, precipitation, wind and humidity, among which the correlation between air temperature and users' power consumption behavior is the most obvious. The impact of climate factors on users' electricity consumption behavior shows up in many ways: rising temperatures in summer force people to cool down with air conditioners and fans, so their electricity consumption increases significantly; in winter, when the temperature is low, electric heaters and other heating equipment also increase power consumption; and in cloudy or rainy weather, people use more lighting equipment, among other effects. In current research, researchers consider the comprehensive impact of the various meteorological factors on users' electricity consumption behavior, so as to identify the change rule of the users' electricity load more accurately.

(4) Other factors. Other factors that affect users' electricity use behavior mainly include electricity price, demand side management (DSM) measures, the power supply side, people's income level and changes in consumption concepts. This section briefly introduces the influence of electricity price and demand side management measures. The impact of electricity price on users' electricity consumption behavior is mainly shown in the following two aspects:
(a) The level of electricity price. The electricity price level is closely related to the level of social and economic development, the price level and other factors. Because different users can afford different electricity prices, adjustments of the price level influence users differently. Generally speaking, however, a rise in electricity prices leads to a downward trend in users' electricity consumption, although the overall trend is also closely related to other factors such as the level of economic development.
(b) The electricity price structure. The electricity price structure also has a significant impact on users' electricity consumption behavior. For example, with peak and valley electricity prices, power companies determine the peak and valley periods of electricity consumption based on the average electricity load of users in the region, and then implement different price mechanisms in those periods, raising the peak price and lowering the valley price to encourage users to shift consumption to the valley. This achieves peak shaving and valley filling, balances the contradiction between power supply and demand, gives full play to the leverage role of price, and optimizes the allocation of power resources [4].

The development of reasonable DSM measures will effectively improve the characteristics of the power load on the user side, avoid peak power consumption and ease the tension of insufficient power supply. For example, peak and valley electricity price load management measures encourage users to use electricity in the valley, avoid the concentration of users' electricity demand in a fixed period of time, and achieve peak shaving and valley filling. As another example, formulating a reasonable and orderly power consumption strategy that reduces or limits users' power load in different periods, according to the characteristics of their power consumption behavior, can effectively reduce the peak-valley difference of the grid and make the grid load curve tend toward balance.

The above process completes the analysis of the influencing factors of users' electricity consumption behavior, mainly economic factors, time factors, climate factors and other factors, providing basic support for the subsequent exploration of users' electricity load characteristics.

2.2 Research on Characteristics of User Power Load

Based on the above analysis of the influencing factors of users' electricity use behavior, this section explores the characteristics of users' electricity load and reduces the users' electricity data samples, so as to facilitate the subsequent clustering of users' electricity use behavior [5]. Power users are divided into three categories, industrial, commercial and residential, and their typical daily load curves are obtained to analyze the characteristics of their power load, as shown below.

Industry is currently the largest power consuming sector, mainly including electricity for high-energy-consumption industries such as steel, chemicals, nonferrous metals and cement, and light industries such as textiles, paper and non-staple food processing. There are many kinds of industrial sectors, and different sectors have very different power consumption characteristics. Therefore, the industrial load curves can be clustered to distinguish different industrial sectors, judge the prosperity or decline of certain industries, and provide targeted energy efficiency services, power saving consulting, reliable power supply, information notification, high-quality energy use, high-quality services, customized services, access services, equipment leasing, and power supply channel initiatives. The daily load curve characteristics of typical industrial users are shown in Fig. 1.

Fig. 1. Characteristic Diagram of Typical Industrial User's Daily Load Curve (electric load in kW over the day, 0:00 to 24:00)

The commercial load mainly occurs in large commercial buildings and high-grade office buildings, and is mainly composed of the electrical loads of department stores, entertainment venues, supermarkets, food plazas and office buildings. It can be roughly divided into three parts: retail, entertainment and office. Commercial power load mainly includes computer equipment, daily lighting, central air conditioning, window air conditioners, fans and elevators.

The load characteristics of commercial and office buildings are as follows: the load shows obvious time-of-day and seasonal patterns; the load curve of an office building is relatively stable within a day, with little fluctuation, and rises sharply at night; and, owing to the composition of the commercial system and different operating conditions, the commercial load curves also differ slightly from one another [6]. Therefore, according to the different types of load curves, the rise and fall of certain industries and the geographical location of business districts can be judged. The daily load curve characteristics of typical commercial users are shown in Fig. 2.

Fig. 2. Characteristic Diagram of Daily Load Curve of Typical Commercial Users

The residential load mainly comes from the use of ordinary appliances such as lighting, washing machines and refrigerators, as well as high-energy-consumption appliances such as air conditioners, electric heaters, electric water heaters and cooking appliances. Among the ordinary electrical appliances, lighting power consumption changes


the most in a day, but there are few time differences. Lighting load often occurs at the same time of the day. According to the power utilization habits of residential users, lighting mainly occurs at night, and the time period is usually from 18:00 to 22:00, also known as “lighting peak”; Refrigerators are household appliances that have been used for a long time, and their service time is relatively less adjustable; The washing machine is usually used in the morning or evening of weekdays, or on Saturdays and holidays; Generally, TV sets are used most frequently in the evening, followed by morning and noon. Among high energy consumption household appliances, electric water heaters and air conditioners are mostly used at night, while cooking appliances such as rice cookers and induction cookers are used at three meals a day. But because each family has different appliances, which are scheduled by different people and have different use preferences, each family will have different daily load curves. The daily electricity load curve of residents will reflect different electricity consumption behaviors. According to these behaviors, different groups can be clustered to analyze different types of users’ electricity consumption behaviors. The daily load curve characteristics of typical residential users are shown in Fig. 3.

Fig. 3. Characteristic Diagram of Typical Residential User’s Daily Load Curve

Based on the above exploration of user power load characteristics, the user power consumption data samples are obtained. The data volume is huge and contains many redundant samples, which hinders the subsequent clustering of user power consumption behavior. Therefore, the PCA algorithm is applied in this study to reduce the user power consumption data samples. PCA is a multivariate statistical technique commonly used to reduce data dimensionality. It can easily process a large amount of data while avoiding a large number of complex calculations. Its basic idea is to reconstruct the original, mutually correlated variables into a new set of integrated variables that are uncorrelated with each other. PCA dimensionality reduction maps m samples described by n variables (i.e. an m × n data matrix) to an m × r data matrix (r ≤ n) through an orthogonal transformation. The subspace spanned by the basis vectors should optimally capture the correlation of the data, and the corresponding basis vectors should be orthogonal. The generated principal components can reflect most of the information in the original matrix and are usually expressed as linear combinations of the original variables, which reduces the user power consumption data samples to the maximum extent and facilitates the subsequent clustering of user power consumption behavior [7]. In this paper, the m × n user power load data matrix is reduced to an m × r matrix through the PCA algorithm. The specific reduction processing is given by Formula (1):

\alpha = \frac{1}{m} X_t^{T} X_t, \quad [U, S, V] = \mathrm{svd}(\alpha), \quad X_{tr} = U(:, 1:r)^{T} X_t   (1)

In formula (1), α represents the reduction variable (the covariance-type matrix) of the user power load data matrix; Xt represents the user power load data matrix at time t; T denotes transposition; U and V are unitary matrices satisfying α = USV^T; S is a diagonal matrix with non-negative diagonal elements sorted in descending order; and svd(·) denotes the singular value decomposition. The element Sij of the diagonal matrix S represents the j-th feature corresponding to the i-th user load data, so the larger the value of Sij, the more information it carries, and the smaller the value, the less information it carries. In fact, only a few features of the power system user load data are extremely important. The minimum r is usually selected to satisfy the condition of formula (2):

\frac{\sum_{i,j=1}^{r} S_{ij}}{\sum_{i,j=1}^{m} S_{ij}} \times 100\% \ge \beta^{o}   (2)

In formula (2), β^o represents the percentage of variation retained by the reduced-dimension data. After the user power load data are mapped to the low-dimensional space, it can be verified that PCA selects the main features that maximize the variation, which also shows that the characteristics of the user power load data of the entire power grid can be represented by a small amount of information. The above process completes the exploration of user power load characteristics; the reduction of the user power data samples lays a solid foundation for the subsequent clustering operations, as illustrated by the sketch below.
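The following Python sketch illustrates this PCA-style reduction via the singular value decomposition. The load matrix is randomly generated for illustration, and the retention threshold (90%) is an assumed value rather than one specified in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
# Illustrative user power load matrix: m users (rows) x n load features (columns).
m, n = 300, 336
X = rng.normal(loc=1.0, scale=0.3, size=(m, n))

# Covariance-type matrix alpha = (1/m) * X^T X, then its singular value decomposition.
alpha = (X.T @ X) / m
U, S, Vt = np.linalg.svd(alpha)

# Choose the smallest r whose leading singular values retain at least beta_o of the total.
beta_o = 0.90  # assumed retention threshold
ratio = np.cumsum(S) / np.sum(S)
r = int(np.searchsorted(ratio, beta_o) + 1)

# Project the original data onto the first r principal directions: m x n -> m x r.
X_reduced = X @ U[:, :r]
print("reduced shape:", X_reduced.shape, "retained variation:", ratio[r - 1])
```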

2.3 Clustering of Users' Electricity Consumption Behavior

Based on the above reduced user power consumption data samples, the clustering algorithm is improved using the extreme learning machine principle from big data technology, and on this basis the user power consumption behavior is clustered to make


sufficient preparation for the final analysis of user power consumption behavior. By comparing and analyzing the currently popular clustering algorithms, the main problems and defects of each algorithm are identified. The idea of spectral clustering is then adopted: the original data are quickly embedded by exploiting the fast learning speed of the extreme learning machine, and the embedded data are then clustered with the K-means algorithm [8]. The extreme learning machine (ELM) is a simple, easy to use and effective learning algorithm for training single-hidden-layer feedforward neural networks (SLFNs), and it is also a key component of big data technology. Traditional neural network learning algorithms require a large number of training parameters to be set manually and easily fall into local optima. The extreme learning machine, by contrast, only requires the number of hidden layer nodes to be set; it does not adjust the biases and input weights of the hidden units during training and produces a unique optimal solution, so it has good generalization performance and a fast learning speed. As a single-layer feedforward neural network, ELM randomly generates the hidden layer biases and input weights, and obtains the output weights by analytical calculation. ELM can even use non-differentiable or discontinuous functions as activation functions: in the hidden layer, the sigmoid, sine, Gaussian and hard-limiting functions can be selected, while in the output layer a linear activation function is used. The mathematical model of ELM classification with L hidden layer nodes and activation function h(·) can be expressed as formula (3):

F_o(X_i) = \sum_{j=1}^{L} \chi_j \, h(\omega_j x_i + \delta_j)   (3)

In formula (3), F_o(X_i) represents the classification output of the extreme learning machine; χ_j represents the auxiliary parameter (output weight) of the extreme learning machine; ω_j represents the input weight corresponding to the user power consumption data sample; and δ_j represents the width of the Gaussian hidden layer neuron. Based on the ELM classification model in Formula (3), the K-means clustering concept is improved as shown in Formula (4):

\min \; \frac{1}{2}\|\chi\|^{2} + \frac{C}{2}\sum_{i=1}^{m}\|E_i\|^{2}   (4)

In formula (4), C represents the final category number of the improved K-means clustering algorithm, and Ei represents the matrix error value after embedding. Although power users are generally divided into residential, commercial and industrial users, the load characteristics of different types of users may still differ greatly, while some load characteristics of different users are similar. In this paper, the electricity data of 300 residential users are selected from the smart meter data. The original data have 336 dimensions; after supplementing missing data and eliminating erroneous data, the PCA algorithm is used to reduce the data dimension, finally arriving at 48 dimensions. The extreme learning machine algorithm is then used to embed the original data, and the improved K-means algorithm is used to cluster them, finally grouping them into five categories, as in the sketch below.
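As a rough illustration of this pipeline (random ELM embedding followed by K-means), the following Python sketch uses randomly generated data and assumed sizes (48 input dimensions, 100 hidden nodes, 5 clusters); it is a simplified stand-in for the paper's improved algorithm, not a faithful reimplementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed sizes: 300 users, 48 PCA-reduced features, 100 ELM hidden nodes, 5 clusters.
X = rng.normal(size=(300, 48))
n_hidden, n_clusters = 100, 5

# ELM-style embedding: random input weights and biases, sigmoid hidden layer.
W = rng.normal(size=(48, n_hidden))
b = rng.normal(size=n_hidden)
H = 1.0 / (1.0 + np.exp(-(X @ W + b)))   # embedded representation of the users

# Plain K-means on the embedded data (Lloyd's algorithm).
centers = H[rng.choice(len(H), n_clusters, replace=False)]
for _ in range(100):
    dist = np.linalg.norm(H[:, None, :] - centers[None, :, :], axis=2)
    labels = dist.argmin(axis=1)
    new_centers = np.array([H[labels == k].mean(axis=0) if np.any(labels == k)
                            else centers[k] for k in range(n_clusters)])
    if np.allclose(new_centers, centers):
        break
    centers = new_centers

print("cluster sizes:", np.bincount(labels, minlength=n_clusters))
```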

The specific process is shown in Fig. 4.

[Fig. 4 flowchart: start → build user power consumption data samples → check for missing or outlier data and supplement or modify if necessary → user load data reduction → k-means clustering → calculate the validity index → k = k + 1 and repeat until the stopping condition is met.]

d_s > d_e   (8)

In formula (8), d_s refers to the yield of the new honey source nearby and d_e refers to the yield of the original honey source. The observation bee then converts into a hired bee, and the new honey source replaces the original one. When the following condition holds:

χ > Limit   (9)

In formula (9), χ refers to the number of times the hired bees and observation bees have searched this honey source, and Limit is the limit value. At this point, if no honey source with higher profitability has been found, the honey source can be abandoned; the bee's role then changes from hired bee or observation bee to scout bee, and a new honey source is randomly generated.

6. Record the optimal honey source found by all bees and return to step 2, until the maximum number of iterations maxCycle is reached or the error falls below the optimization error, at which point the global optimal position is output.

The steps for applying the artificial bee colony algorithm to the task scheduling of multimedia big data parallel computing are as follows (a simplified sketch follows the list):

(1) The user submits the job to the master node. The ResourceManager in the master node places the job in the queue, and the job's ApplicationMaster sends the resource request for the job.
(2) The NodeManager of each slave node returns heartbeat information to the ResourceManager and reports the node's computing and storage resources.
(3) After the ResourceManager obtains the current running status and remaining resource information of each slave node, it uses the improved artificial bee colony algorithm to initialize and encode the submitted job and the slave nodes to be used for the computing tasks, generating an initial artificial bee colony.


(4) Calculate the profitability of each honey source. According to the relative profitability of the honey sources of all bees, the bees are divided into hired bees and observation bees: those with the highest profitability become hired bees and those with the lowest become observation bees.
(5) Each hired bee continues to collect honey near its original honey source (a local search), looks for new honey sources and calculates their profitability. If a new source is more profitable, the bee replaces the original honey source with the new one according to the greedy rule. Each observation bee selects a honey source with probability proportional to its profitability and collects honey near it to look for other sources; if it finds a nearby honey source with higher profitability, the observation bee switches to a hired bee and the new honey source replaces the original one.
(6) If the number of times the hired bees and observation bees have searched a honey source exceeds the limit value and no more profitable source has been found, the honey source is abandoned; the bee's role changes from hired bee or observation bee to scout bee, and a new honey source is randomly generated.
(7) Record the optimal honey source (i.e. the current global optimal solution) found by all bees. If the maximum number of iterations maxCycle is reached or the error is below the optimization error, output the global optimal position and the optimal sequence, i.e. the optimal scheduling scheme, according to which the cluster allocates the slave nodes' computing and storage resources to each job; otherwise, return to step (4).
(8) The ApplicationMaster of each job sends periodic heartbeats to the ResourceManager to obtain the newly allocated resource containers.
(9) After the ResourceManager receives the heartbeat information from the ApplicationMaster, the containers on the slave nodes assigned to it are returned to the ApplicationMaster in the heartbeat response.
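The following Python sketch gives a highly simplified version of the bee colony search in steps (3) to (7), assigning jobs to slave nodes so as to balance load. The fitness function, problem sizes and parameter values are assumptions made for illustration and do not come from the paper.

```python
import random

NUM_JOBS, NUM_NODES = 20, 10
COLONY_SIZE, MAX_CYCLE, LIMIT = 10, 200, 20
node_capacity = [1.0 if i < 4 else 2.0 for i in range(NUM_NODES)]  # assumed relative capacities
job_cost = [random.uniform(0.5, 2.0) for _ in range(NUM_JOBS)]     # assumed job workloads

def fitness(source):
    """Higher is better: penalize the most heavily loaded node (makespan-style objective)."""
    load = [0.0] * NUM_NODES
    for job, node in enumerate(source):
        load[node] += job_cost[job] / node_capacity[node]
    return 1.0 / (1.0 + max(load))

def neighbor(source):
    """Local search: move one randomly chosen job to another node."""
    new = source[:]
    new[random.randrange(NUM_JOBS)] = random.randrange(NUM_NODES)
    return new

# Initial honey sources: random job-to-node assignments.
sources = [[random.randrange(NUM_NODES) for _ in range(NUM_JOBS)] for _ in range(COLONY_SIZE)]
trials = [0] * COLONY_SIZE
best = max(sources, key=fitness)

for _ in range(MAX_CYCLE):
    # Hired-bee and observation-bee phases, collapsed into one greedy replacement step.
    for i in range(COLONY_SIZE):
        candidate = neighbor(sources[i])
        if fitness(candidate) > fitness(sources[i]):
            sources[i], trials[i] = candidate, 0
        else:
            trials[i] += 1
    # Scout-bee phase: abandon sources that have not improved within the limit.
    for i in range(COLONY_SIZE):
        if trials[i] > LIMIT:
            sources[i] = [random.randrange(NUM_NODES) for _ in range(NUM_JOBS)]
            trials[i] = 0
    best = max(sources + [best], key=fitness)

print("best fitness:", fitness(best))
print("job -> node assignment:", best)
```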

3 Experimental Test

3.1 Construction of the Experimental Platform

The scheduling performance of the designed bee colony based task scheduling method for wireless sensor multimedia big data parallel computing is tested. The experimental environment is built from 6 computers; the virtualization software VMware Workstation is used to create 11 virtual machines running the Linux operating system as 11 nodes, one of which is selected as the master node of the cluster while the remaining 10 serve as slave nodes. The master node manages the allocation of computing resources and jobs of each slave node, and the slave nodes store and process the data. The Linux operating system used in the experiment is CentOS 6, the Hadoop version is Hadoop 2.5.1, and the JDK version is JDK 1.7. The experiment uses three types of machines with different configurations to build a heterogeneous cluster environment for wireless sensor multimedia big data parallel computing. The specific machine configurations are shown in Table 2, and the configuration information of each node is shown in Table 3.


Table 2. Configuration parameters of each machine

Hardware category            Desktop                               Notebook                     Desktop
Operating system             Windows 10                            Windows 10                   Windows XP
CPU                          Core dual core processing i6-2400u    Intel(R) Core(TM) i7-4680    Intel(R) Core(TM) i9-7458
Clock frequency              5.3 GHz                               5.2 GHz                      5.6 GHz
Memory                       12 GB                                 16 GB                        8 GB
Number of units              2                                     1                            3
Number of virtual machines   1                                     3                            2

Table 3. Configuration of Cluster Node Parameters

Node type   Processor   IP address    Memory
Master      i7-7452     174.198.0.0   80 GB
Slave01     i7-7452     174.198.0.1   120 GB
Slave02     i7-7452     174.198.0.2   120 GB
Slave03     i7-7452     174.198.0.3   120 GB
Slave04     i9-9562     174.198.0.4   240 GB
Slave05     i9-9562     174.198.0.5   240 GB
Slave06     i9-9562     174.198.0.6   240 GB
Slave07     i9-9562     174.198.0.7   240 GB
Slave08     i9-9562     174.198.0.8   240 GB
Slave09     i9-9562     174.198.0.9   240 GB
Slave10     i9-9562     174.198.1.0   240 GB

The wireless sensor multimedia big data parallel computing environment is configured for each node. The specific steps are as follows:
(1) First, install the JDK software on each node, unzip the JDK archive and use gedit to configure the environment variables.
(2) Configure the hostname of each node: use the command gedit /etc/sysconfig/network to enter edit mode and modify the hostname.
(3) Configure the IP address of each node: use the setup command to open the tool window, select the "Network configuration" option and fill in the new IP address; after the configuration is completed, use the command /sbin/service network restart to restart the network service.
(4) Close the firewall of each node: use the setup command to open the tool window and select "Firewall configuration" to enter the settings window and make the change.


(5) Configure the hosts file: use the command gedit /etc/hosts on each node to open the editing window, and fill in the IP address and hostname of every node in the file (see the sketch after this list).
(6) Configure each node for password-free login, so that the nodes can access each other without a password.
(7) Configure the Hadoop environment on the master node: unzip the Hadoop archive and modify the contents of the hadoop-env.sh, yarn-env.sh, core-site.xml, hdfs-site.xml, yarn-site.xml and mapred-site.xml files.
(8) Copy the Hadoop configuration from the master node to each slave node.
(9) Format the file system on the master node, and use the start-all.sh command to start the Hadoop cluster.
(10) Use the jps command to view the processes running on each node. The master node runs four processes: ResourceManager, NameNode, SecondaryNameNode and Jps; each slave node runs three processes: NodeManager, DataNode and Jps. This indicates that the processes on both the master node and the slave nodes have started successfully.
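As a small convenience for step (5), the following Python sketch generates the hosts entries for the nodes of Table 3; distributing the generated file to every node (for example over ssh) is outside the scope of this sketch.

```python
# Generate /etc/hosts entries for the cluster nodes listed in Table 3 (step (5)).
nodes = {
    "Master": "174.198.0.0",
    **{f"Slave{i:02d}": f"174.198.0.{i}" for i in range(1, 10)},
    "Slave10": "174.198.1.0",
}

hosts_lines = [f"{ip}\t{name}" for name, ip in nodes.items()]
print("\n".join(hosts_lines))

# Write the entries to a local file for inspection before copying them to the nodes.
with open("hosts.generated", "w") as f:
    f.write("\n".join(hosts_lines) + "\n")
```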

3.2 Experimental Results and Analysis

Two traditional computational task scheduling methods are used as comparison method 1 and comparison method 2. WordCount jobs are run with the designed method, with the number of jobs set to 10, 30, 50, 70, 100 and 150 and each job sized at 512 MB. Each setting is run 5 times and the average is taken as the time needed to complete the jobs; the experimental results are shown in Fig. 2.

Fig. 2. Time required to complete the job (job completion time in seconds against the number of tasks for this method, comparison method 1 and comparison method 2)

According to the results in Fig. 2, the job completion time of the designed method is shorter than that of the other two methods, which shows that the designed method performs well in parallel computing task scheduling.


The overall resource utilization rate in the task scheduling is tested. When the number of tasks increases from 1000 to 7000, the overall resource utilization test results of the design method are shown in Fig. 3.

Fig. 3. Overall Resource Utilization Test Results

The test results in Fig. 3 show that, as the number of tasks increases from 1000 to 7000, the overall resource utilization of the designed method decreases slightly and then stabilizes, while remaining above 0.91 throughout.

4 Conclusion

In the study of the parallel computing task scheduling method for wireless sensor multimedia big data, the artificial bee colony algorithm is applied to realize rapid scheduling of parallel computing tasks, which is of great significance for the development of wireless sensor networks.


Author Index

C
Chen, Haibin 318
Chen, Xinyue 33
Chen, Zhijun 3

D
Dai, Chen 304
Ding, Bo 393
Dong, Xiaowei 304
Duan, Peng 19
Duan, Xiaoli 184, 200

F
Fang, Weiqing 3

G
Gao, Jie 364, 381
Gong, Liqiang 3
Gong, Shao 46, 273

J
Jia, Huichen 110
Jia, Junqiang 62
Jiang, Chao 304, 393
Jiang, Shibai 62
Jin, Weijia 46, 273

K
Kang, Yingjian 92

L
Li, Chenxi 11
Li, Jianping 3
Li, Jingyu 228
Li, Sen 184, 200
Li, Yingchen 110
Lin, Qiusheng 393
Lin, Tongxi 332, 411
Lin, Weixuan 349
Liu, Wenjing 169
Liu, Xinran 215
Long, Caofang 273

M
Ma, Lei 228, 243
Maihebubai, Xiaokaiti 62

P
Pan, Xiafu 79
Pei, Pei 110

R
Ren, Jun 110

S
Sha, Licheng 19
Shen, Qi 3
Sun, Yanmei 127

W
Wan, Dong 11
Wan, Xucheng 139, 155
Wang, Aiqing 364, 381
Wang, Jin 393
Wang, Ruidong 110
Wang, Tao 62
Wang, Ye 259
Wang, Yixin 3
Wu, Lei 318

X
Xi, Shaoqing 19
Xiao, Shuang 318
Xu, Kai 19
Xu, Ya 127
Xu, Yanfa 215
Xu, Yuan 11
Xu, Yukun 304, 393
Xuelati, Simayi 62


Y
Yan, Jingrui 304, 318, 393
Yang, Jianxing 228
Yang, Xiaomei 287
Yuan, Haidi 169

Z
Zhang, Chuangji 349
Zhang, Yanli 349


Zhang, Yanning 243
Zhang, Yide 11
Zhang, Yuliang 259
Zhao, Xinchen 19
Zhao, Yan 139, 155
Zheng, Chun 79
Zhong, Fulong 332, 411
Zhu, Zheng 318
Zhuo, Shufeng 92