Cyber Security Intelligence and Analytics: The 5th International Conference on Cyber Security Intelligence and Analytics (CSIA 2023), Volume 2. ISBN 3031317742, 9783031317743

This book provides the proceedings of the 5th International Conference on Cyber Security Intelligence and Analytics (CSIA 2023).


English, 598 [599] pages, 2023



Table of contents :
Preface
Organization
Contents
Ecological Balance Construction and Optimization Strategy Based on Intelligent Optimization Algorithm
1 Introduction
2 Research on Ecological Balance Construction and Optimization Strategy Based on Intelligent Optimization Algorithm
2.1 Functions of the Network Ecosystem
2.2 Network Ecological Imbalance
2.3 Application of Intelligent Optimization Algorithm in Ecological Balance
2.4 Measures to Maintain the Ecological Balance of the Network
3 Ecological Balance Construction and Optimization Strategy Design Experiment Based on Intelligent Optimization Algorithm
3.1 Experimental Setup
3.2 Ecological Balance Stability and Ecological Quality Test
4 Experimental Analysis of Ecological Balance Construction and Optimization Strategy Based on Intelligent Optimization Algorithm
4.1 Ecological Balance Stability Test
4.2 Ecological Balance Quality Test
4.3 Strategies for Maintaining Ecological Balance
5 Conclusions
References
The Design of Virtual Reality Systems for Metaverse Scenarios
1 Introduction
2 Related Technologies
2.1 Design Principles for Roaming Interactive Systems
2.2 The Purpose of Overall System Planning
2.3 Technical Support for Metaverse
2.4 Real-Time Collision Detection Algorithm Based on Enclosing Box Tree
3 Design of a Virtual Reality System for Metaverse Scenarios
3.1 Virtual Reality System Build
3.2 Visual Interface Design
3.3 Virtual Character Control
4 Analysis and Research of a Virtual Reality System for Metaverse Scenarios
4.1 Testing Collision Detection Algorithms
4.2 Roaming Tests
5 Conclusions
References
Design and Development of Health Data Platform for Home-Based Elderly Care Based on AAL
1 Introduction
2 Introduction of Related Technologies
2.1 AAL
2.2 Smart Elderly Care
2.3 Internet of Things
2.4 Remote Patient Monitoring
3 Health Data Platform for Home-Based Elderly Care Based on AAL
3.1 Construction Objectives
3.2 Overall Architecture
3.3 Core Functions
4 Conclusions
References
Logistics Distribution Optimization System of Cross-Border e-Commerce Platform Based on Bayes-BP Algorithm
1 Introduction
2 Related Theoretical Overview and Research
2.1 Problems Existing in Cross-Border e-Commerce Logistics Platforms
2.2 An Overview of the Relevant Theories of the Bayes-BP Algorithm
3 Experiment and Research
3.1 Experimental Method
3.2 Experimental Requirements
4 Analysis and Discussion
4.1 Scale Analysis of Cross-Border E-commerce Logistics Market
4.2 Comparative Analysis of Logistics Distribution System Before and After Optimization
5 Conclusions
References
Application of FCM Clustering Algorithm in Digital Library Management System
1 Introduction
2 Proposed Method
2.1 FCM Clustering Algorithm
2.2 Demand Analysis of Digital Library
3 Design of Digital Library Management System
3.1 Functional Module Design
3.2 Design of Non-functional Modules
3.3 System Development Environment
4 Discussion
4.1 Analysis of Experimental Results
5 Conclusions
References
Logistics Distribution Path Planning and Design Based on Ant Colony Optimization Algorithm
1 Introduction
2 Path Planning Based on Ant Colony Optimization Algorithm
2.1 ACA Overview
2.2 Optimization of ACA
3 Simulation Experiment Settings
3.1 Data Sources
3.2 Experimental Verification
4 Simulation Experiment Results and Analysis
5 Conclusions
References
The Development of Power Grid Digital Infrastructure Based on Fuzzy Comprehensive Evaluation
1 Introduction
2 Construction of Evaluation Index System
2.1 Carding of Influencing Factors
2.2 Screening of Evaluation Indicators
3 Construction of Fuzzy Comprehensive Evaluation Model and Result Analysis
3.1 Weighting
3.2 Fuzzy Comprehensive Evaluation
3.3 Results Analysis
4 Conclusion
References
Virtual Reality Technology in Indoor Environment Art Design
1 Introduction
2 Design Exploration of the Application of Virtual Reality Technology in Interior Environment Art Design
2.1 Virtual Reality Technology
2.2 Application of Virtual Reality Technology in Interior Environment Art Design
3 Explore the Application Effect of Virtual Reality Technology in Interior Environment Art Design
4 Investigation and Research Analysis of the Application of Virtual Reality Technology in Interior Environment Art Design
5 Conclusions
References
Application of Digital Virtual Art in 3D Film Animation Design Effect
1 Introduction
2 Research on the Application of Digital Virtual Art in the Design Effect of 3D Film and TA
2.1 Characterization of Digital Virtual Technology
2.2 Influence of Digital Virtual Art on Film and TA Design
2.3 The Application of Digital Virtual Art in Film and TA
2.4 Application Value of Digital Virtual Art in Film and TA
2.5 Application of the Sampling Survey Algorithm
3 Application and Design Experiment of Digital Virtual Art in 3D Film and TA Design Effect
3.1 Production Environment
3.2 Satisfaction Survey
4 Experimental Analysis of Digital Virtual Art in the Design Effect of 3D Film and TA
4.1 Comparison Between Traditional 3D Film and Television Production Technology and Digital Virtual Art
4.2 The Sophistication of the Two Animation Production Techniques
5 Conclusions
References
Digital Media Art and Visual Communication Design Method Under Computer Technology
1 Introduction
2 Research on DM Art and Visual Communication Design Under Computer Technology
2.1 Introduction to Computer Technology
2.2 The Era of DM
2.3 The Performance Characteristics of DM Art
2.4 Algorithm Research
3 Experimental Research on DM Art and Visual Communication Design Under Computer Technology
3.1 Static Vision
3.2 Motion Vision
3.3 Other Sensory Auxiliary Visual Linkage
4 Experimental Analysis of DM Art and Visual Communication Design Under Computer Technology
4.1 Quantum Image Compression
4.2 Algorithm Testing
5 Conclusions
References
Instrumentation Automation Control System Based on HSV Space and Genetic Algorithm
1 Introduction
2 Design and Research of Instrumentation Automation Control System Based on HSV Space and Genetic Algorithm
2.1 Features of Rotor Scale
2.2 Overall Design of Instrument Software System
2.3 Algorithm Research
3 Experimental Research on Instrumentation Automation Control System Based on HSV Space and Genetic Algorithm
3.1 Selection of Hardware Equipment
3.2 Each Functional Module
4 Experiment Analysis of Instrumentation Automation Control System Based on HSV Space and Genetic Algorithm
4.1 Measurement Control Algorithm Performance Test
4.2 Accuracy Comparison of Meter Identification Methods
5 Conclusions
References
Biological Tissue Detection System Based on Improved Optimization Algorithm
1 Introduction
2 Related Theoretical Overview and Research
2.1 Introduction to Biological Tissue Detection Technology
2.2 Machine Learning Algorithms and Their Applications
3 Experiment and Research
3.1 Experimental Method
3.2 Experimental Requirements
4 Analysis and Discussion
4.1 Analysis of PCA Contribution Rate of Normal Biological Tissue Spectrum
4.2 Accuracy Analysis of Biological Tissue Detection
5 Conclusions
References
Application of Computer Network Security Technology in Software Development
1 Introduction
2 Research on the Technology of Computer Security Software Development
2.1 Principles of Computer Software Development
2.2 Security Risks in Computer Software Development
2.3 Model-Based Security Vulnerability Test
2.4 Algorithm Application of Computer Security Software Development Technology
3 Design Experiment of Developing Technical Problems Based on Computer Security Software
3.1 Software Test
3.2 Practical Application
4 Experimental Analysis of Technology Based on Computer Security Software
4.1 FindBugs Software Testing for Several Open-Source Projects
4.2 Development and Application of the Actual Safety System
5 Conclusions
References
The Application of Financial Technology in the Intelligent Management of Credit Risk Under the Background of Big Data
1 Introduction
2 The Application of Financial Technology in the Intelligent Management of Credit Risk Under the Background of BD
2.1 The Value and Application of Data in Credit Risk Management
2.2 Problems of BD Credit Investigation in the Application of Financial Credit Risk Management
2.3 Application of Data Mining in Risk Management
2.4 Fintech
3 Realization of Credit Risk Management System
3.1 Operating Environment
3.2 Technical Architecture
3.3 System Test of Risk Measurement Results
4 Analysis of System Statistics
4.1 All Indicators Statistics of the Whole Bank
5 Conclusion
References
Measurement and Prediction of Carbon Sequestration Capacity Based on Random Forest Algorithm
1 Introduction
2 Research Design
3 An Empirical Study on Carbon Sequestration
4 Application of the Model
5 Scalability and Adaptability Analysis
6 Conclusions
References
Microbial Growth Rate Identification and Optimization System Based on Matrix Decomposition Algorithm
1 Introduction
2 Research on Microbial Growth Rate Identification and Optimization System Based on Matrix Decomposition Algorithm
2.1 The Regulation of the External Environment on the Growth of Microorganisms
2.2 Optimization Model and Algorithm
2.3 Application of Matrix Decomposition in Identification and Optimization of Microbial Growth Rate
2.4 Microbial Growth Model
2.5 Matrix Decomposition Model
3 Design Experiment of Microbial Growth Rate Identification and Optimization System Based on Matrix Decomposition Algorithm
3.1 Introduction to the Development Environment
3.2 Experimental Design
3.3 Microbial Growth Rate Model
4 Experimental Analysis of Microbial Growth Rate Identification and Optimization System Based on Matrix Decomposition Algorithm
4.1 Growth Rate Identification and Comparison
4.2 Identification Accuracy
5 Conclusions
References
Construction Building Interior Renovation Information Model Based on Computer BIM Technology
1 Overview of the Application of BIM Technology in Old Industrial Buildings
1.1 The Concept of Interior Renovation and Reuse of Old Industrial Buildings Based on BIM Technology
1.2 Clustering Algorithm Reference
1.3 Principles for the Renovation and Reuse of the Internal Space of Old Industrial Buildings
1.4 Indoor Parameters Specified by Building Design Standards
2 Design Strategy of BIM Technology in Interior Renovation and Reuse of Old Industrial Buildings
2.1 Design of BIM Technology in the Interior Renovation and Reuse of Old Industrial Buildings
2.2 The Practical Application of BIM Technology in the Indoor Renovation and Reuse of Old Industrial Buildings
3 Conclusion
References
Data Acquisition Control System Applying RFID Technology and Wireless Communication
1 Introduction
2 Design of RFID-Based Data Acquisition Control System
2.1 Device Selection of System Hardware Circuit
2.2 System Architecture
2.3 Design of Data Acquisition Control System
3 Test Research of RFID Data Acquisition Control System
4 Conclusion
References
Modified K-means Algorithm in Computer Science (CS) Accurate Evaluation Platform
1 Introduction
2 Design and Exploration of Modified K-means Equation in CS Accurate Evaluation Platform
2.1 Modified the K-means Equation
2.2 Research on Modified K-means Equation in CS Accurate Evaluation Platform
2.3 The Basic Flow of K-means Clustering Algorithm Equation
3 Research Effect of Modified K-means Equation in CS Accurate Evaluation Platform
3.1 Blossom Environment of Accurate CS Assessment Platform
4 Investigation and Research Analysis of Improving K-means Equation in CS Accurate Evaluation Platform
5 Conclusions
References
The Application of Decision Tree Algorithm in Psychological Assessment Data
1 Introduction
2 Research on the Application of Decision Tree Algorithm in Psychological Assessment Data
2.1 Decision Tree
2.2 Construction of the Psychological Correlation Analysis System for College Students
3 Investigation and Research on the Application of Decision Tree Algorithm in Psychological Assessment Data
3.1 Data Sources
3.2 C4.5 Algorithm
4 Analysis and Research on the Application of Decision Tree Algorithm in Psychological Assessment Data
4.1 Decision Tree Training Results
4.2 Algorithm Comparison
5 Conclusions
References
DT Algorithm in Mechanical Equipment Fault Diagnosis System
1 Introduction
2 Research on Application of DTA in Mechanical Equipment FD System
2.1 FD of Mechanical Equipment
2.2 DM
2.3 DT Method
2.4 Intelligent Diagnosis Method of Mechanical Fault
3 Investigation and Research on the Application of DTA in Mechanical Equipment FD System
3.1 Optimization of DTA Model
3.2 Experimental Setup
4 Analysis and Research on the Application of DTA in Mechanical Equipment FD System
4.1 Optimization Results
4.2 Training Results of Diagnostic Model
4.3 Application of DT in Mechanical Equipment FD System
5 Conclusions
References
Development of Industrial Historical and Cultural Heritage Display System Based on Panoramic VR Tech
1 Introduction to VR Tech
2 Overview of Industrial Historical and Cultural Heritage
3 Necessity of VR Tech in the Display of Industrial History and Cultural Heritage
3.1 A Powerful Complement to Traditional Archiving Technology
3.2 Upgrade of Communication Effect
4 Development of Industrial Historical and Cultural Heritage Display System Based on Panoramic VR Tech
4.1 Application Process of VR Tech
4.2 Creating a Digital Resource Library
4.3 Setting up a Scenario Model
4.4 Interactive Programming
5 Conclusions
References
Application of Improved Particle Swarm Optimization Algorithm in Power Economic Dispatch System
1 Introduction
2 Mathematical Model of Economic Dispatching of Power System
2.1 Objective Function
2.2 Constraints
3 Improved Particle Swarm Algorithm for Solving Economic Scheduling
3.1 Standard Particle Swarm Algorithm
3.2 Handling of Constraints
3.3 Improved Particle Swarm Algorithm
4 Simulation Experiment
5 Conclusion
References
Commodity Design Structure Matrix Sorting Algorithm on Account of Virtual Reality Skill
1 Introduction
2 Design and Exploration of Commodity Design Structure Matrix Sorting Algorithm on Account of VR Skill
2.1 VR Skill
2.2 Research on Ordering Algorithm of Commodity Design Structure Matrix on Account of VR Skill
3 Research on the Effect of Commodity Design Structure Matrix Sorting Algorithm on Account of VR Skill
3.1 Multi-target Allotment Majorization Model of Unit Commodities
4 Investigation and Analysis of Commodity Design Structure Matrix Sorting Algorithm on Account of VR Skill
5 Conclusions
References
Machine Vision Communication System Based on Computer Intelligent Algorithm
1 Introduction
2 Research on the Design of Machine Vision Communication System Based on Computer Intelligent Algorithm
2.1 Binocular Camera
2.2 Image Distortion
2.3 Image Restoration Technology
3 Investigation and Research on the Design of Machine Vision Communication System Based on Computer Intelligent Algorithm
3.1 System Environment Construction
3.2 Image Inpainting Algorithm
3.3 Experiment Setup of Intelligent Algorithm
4 Analysis and Research on the Design of Machine Vision Communication System Based on Computer Intelligent Algorithm
4.1 System Framework Design
4.2 Algorithm Implementation
5 Conclusions
References
Performance Evaluation of Rural Informatization Construction Based on Big Data
1 Introduction
2 Research on Performance Evaluation of Rural Informatization Construction
2.1 Principles for Establishing the Evaluation Index System
2.2 The Role of Informatization
2.3 The Role of Rural Informatization
2.4 The Role of Government in Rural Informatization Construction
2.5 AHP
3 Experimental Research on Rural Informatization Construction
3.1 The Core Elements of Rural Informatization Construction
3.2 Main Contents of Rural Informatization
3.3 Ways to Improve Performance Evaluation
4 Experiment Analysis of Rural Informatization Construction
4.1 Development of Agricultural Information Resources
4.2 Transmission and Service of Rural Information Resources
4.3 Rural Informatization Talents and Population Quality
5 Conclusions
References
Genetic Algorithm in Ginzburg-Landau Equation Analysis System
1 Introduction
2 Equation Analysis
2.1 Development Status
2.2 Fractional Partial Differential Equations
2.3 Preprocessing Technology
3 Experimental Study
3.1 Two-Dimensional Constant Coefficient Nonlinear Ginzburg-Landau Equation
3.2 Traveling Wave Reduction of the Ginzburg-Landau Equation
3.3 Generalized Ginzburg-Landau Equation
3.4 Genetic Algorithm Process
4 Experiment Analysis
4.1 Analysis of Numerical Results
4.2 Numerical Test
5 Conclusions
References
Object Detection in UAV Images Based on Improved YOLOv5
1 Introduction
2 YOLOv5 Target Recognition Algorithm
3 The Proposed Method
3.1 Improved SPPF Network
3.2 Super-Resolution Method
3.3 Convolution Attention Mechanism Module
4 Experiments
4.1 Experimental Environment and Dataset
4.2 Comparison of Detection Performance
5 Conclusion
References
Evaluation Indicators of Power Grid Planning Considering Large-Scale New Energy Access
1 Introduction
2 Reliability Evaluation Index of New Energy Power Generation and Grid Planning
2.1 The Impact of Grid-Connected New Energy Generation on the Power System
2.2 Reliability Index Coefficient
2.3 Construction of the Evaluation Index System for Power Grid Planning
3 Experimental Research
3.1 Research Purpose
3.2 Research Content
4 Analysis of the Results of Power Grid Planning Evaluation Indicators
4.1 Reliability Index
4.2 Economic Indicators
5 Conclusion
References
Certificateless Blind Proxy Signature Algorithm Based on PSS Standard Model
1 Introduction
2 Certificateless BPS Algorithm Based on PSS SM
2.1 Certificateless Standard Signature Protocol of PSS SM
2.2 Basic Concepts of Blind Signature
2.3 General Algorithm for Certificateless Proxy Blind Signature
3 Security Performance Analysis of Certificateless BPS Algorithm Based on PSS SM
3.1 Correctness Analysis
3.2 Enforceability
4 Efficiency Analysis of Certificateless BPS Algorithm
5 Conclusions
References
Mine Emergency Rescue Simulation Decision System on Account of Computer Technology
1 Introduction
2 Design and Exploration of Mine Emergency Rescue Simulation Decision System on Account of Computer Technology
2.1 Computer Technology
2.2 Design and Implementation of Mine Emergency Rescue Simulation Decision System on Account of Computer Technology
3 Design and Implementation Effect of Mine Emergency Rescue Simulation Decision System on Account of Computer Technology
3.1 3D Visualization of Mining Engineering
4 Investigation and Analysis of Mine Emergency Rescue Simulation Decision System on Account of Computer Technology
5 Conclusions
References
Smart Phone User Experience for the Elderly Based on Public Platform
1 Introduction
2 Research on the Optimization Design of Smart Phone User Experience for the Elderly Based on the Public Platform
2.1 WeChat Public Platform
2.2 Cognitive Function of the Elderly
2.3 User Experience of the Elderly Using Smartphones
3 Investigation and Research on the Optimization Design of Smart Phone User Experience for the Elderly Based on the Public Platform
3.1 Test Equipment
3.2 Test Method
4 Analysis and Research on the Optimization Design of Smart Phone User Experience for the Elderly Based on the Public Platform
4.1 Optimal Design of Experience for the Elderly on the Public Platform
4.2 User Satisfaction Evaluation Results
5 Conclusions
References
Design and Research of IOT Based Logistics Warehousing and Distribution Management System
1 Introduction
2 Hardware Design of Warehousing and Distribution Management System based on RFID Technology of Internet of Things
2.1 RFID System Design
2.2 Hardware Design of Warehouse Management System
2.3 Hardware Design of Distribution Management System
3 Software Design of Warehousing and Distribution Management System based on RFID Technology of Internet of Things
3.1 Software Design of Data Acquisition Module
3.2 Design Software of Transmission Data Module
3.3 Test and Analysis of Centralized Purchase Data
4 Conclusion
References
Shipping RDF Model Construction and Semantic Information Retrieval
1 Introduction
2 Shipping RDF Semantic Modeling
2.1 Ontology Concepts
2.2 Building RDF Model
2.3 RDF Serialization
3 Shipping Semantic Information Retrieval
3.1 Parsing RDF Data
3.2 Querying RDF Data
3.3 Evaluation Indicators and Discussion of Results
4 Conclusions
References
Application of 3D Virtual Reality Technology in Film and Television Production Under Internet Mode
1 Introduction
2 Introduction to 3D Virtual Reality Technology
2.1 3D Virtual Reality System Features
2.2 3D Virtual Reality System Composition
2.3 Application of Virtual Reality Technology in Role Building
3 Application of Virtual Reality Technology in Scene Atmosphere
3.1 Domestic Film Application
3.2 Hollywood Film Application
3.3 Scene Design Features and Changes
4 The Change of Virtual Reality Technology to Scene Design
5 Conclusion
References
Simulation Research on Realizing Animation Color Gradient Effect Based on 3D Technology
1 Introduction
2 Application of 3D Technology in Animation
2.1 Application in Scenarios
2.2 Application in Special Effects
2.3 Application in Color
3 3D Scene Data Organization
3.1 Scene Model Selection
3.2 Color Model Selection
4 Conclusion
References
Cross-Modal Retrieval Based on Deep Hashing in the Context of Data Space
1 Introduction
2 Related Theoretical Concepts
3 Cross-Modal Retrieval Methods Based on Deep Hashing
3.1 Unsupervised Deep Hashing
3.2 Semi-supervised Deep Hashing
3.3 Supervised Deep Hashing
4 Conclusion
References
Monitoring Algorithm of Digital Power Grid Field Operation Target Based on Mixed Reality Technology
1 Introduction
2 Research on Monitoring Algorithm of Digital Power Grid Field Operation Target Based on Mixed Reality Technology
2.1 Hybrid Reality Technology
2.2 The Composition of Virtual Reality Technology
2.3 Combining 3D GIS Technology with Smart Grid
2.4 On-Site Operation Target Monitoring of Power Grid
3 Investigation and Research on the Monitoring Algorithm of Digital Power Grid Field Operation Target Based on Mixed Reality Technology
3.1 Digital Power Grid Field Operation Target Monitoring System
3.2 3D Space Target Monitoring
4 Investigation and Analysis of Monitoring Algorithm for Digital Power Grid Field Operation Target Based on Mixed Reality Technology
4.1 Analysis of Augmented Reality Interaction Sub-module
4.2 System Application Analysis
5 Conclusions
References
Design and Analysis of Geotechnical Engineering Survey Integrated System Based on GIS
1 Introduction
2 Research on Integrated System Design of Geotechnical Engineering Exploration Based on GIS
2.1 Geographic Information System
2.2 System Design Principle
2.3 Geological Model of Geotechnical Parameters
3 Investigation and Research on Integrated System Design of Geotechnical Engineering Exploration Based on GIS
3.1 Software Components of the System
3.2 Database Design
4 Analysis and Research on Integrated System Design of Geotechnical Engineering Survey Based on GIS
4.1 Functional Module Design Analysis
4.2 Engineering Data Analysis
5 Conclusions
References
Automatic Classification and Storage System of Container Yard on Account of Mean Clustering Algorithm
1 Introduction
2 Design and Exploration of Automatic Classification and Storage System of Container Yard on Account of Mean Clustering Algorithm
2.1 Mean Clustering Algorithm
2.2 Automatic Classification and Storage System of Container Yard on Account of Mean Clustering Algorithm
3 Research on the Effect of Automatic Classification and Storage System of Container Yard on Account of Mean Clustering Algorithm
3.1 Automatic Classification and Storage System of Container Yard on Account of Mean Clustering Algorithm
4 Investigation and Analysis of Automatic Classification and Storage System of Container Yard on Account of Mean Clustering Algorithm
4.1 System Overall Architecture Design
4.2 System Implementation and Analysis
5 Conclusions
References
Design and Implementation of Intelligent Traffic Monitoring System Based on IOT and Big Data Analysis
1 Introduction
2 Common Technologies of Intelligent Transportation
2.1 Internet of Things Application Technology
2.2 Big Data Analysis and Machine Learning
3 Design and Implementation of Intelligent Traffic Monitoring System
3.1 Architecture Design
3.2 Design of Wireless Sensor Network Layer
3.3 Realization of Machine Realization Model
4 Analysis of Experimental Results
5 Conclusions
References
Simulation of Passenger Ship Emergency Evacuation Based on Neural Network Algorithm and Physics Mechanical Model
1 Mechanical Model in Emergency Evacuation
2 Neural Network Algorithm
3 Model Design and Application
4 Analysis of Simulation Results
5 Conclusion
References
A Design for Block Chain Service Platform
1 Introduction
1.1 Overview
1.2 Blockchain Features
2 Design Objectives
3 Demand Analysis
3.1 Business Requirements
3.2 Functional Requirements
4 Design Scheme
4.1 Overall Design
4.2 Access Management Module
4.3 Operation Management Module
4.4 System Management Module
5 Conclusions
5.1 Key Technologies
5.2 The Usage of FLOPS
References
Resource Evaluation and Optimization of Wireless Communication Network Based on Internet of Things Technology
1 Introduction
2 Research on the Evaluation and Optimization of Wireless Communication Network Resources Based on Internet of Things Technology
2.1 Practical Application of IoT Technology
2.2 Optimization of Wireless Communication Network Resources
2.3 Theoretical Model of Resource Optimization
3 Model and Research of Wireless Communication Network Resource Evaluation and Optimization Based on Internet of Things Technology
3.1 System Model
3.2 Construction of System Energy Consumption Optimization Problem
3.3 Simulation Parameter Settings
4 Analysis and Research of Wireless Communication Network Resource Evaluation and Optimization Based on Internet of Things Technology
4.1 Analysis of Optimization Effect
4.2 Evaluation of the Overall Optimization Effect
5 Conclusions
References
Green Construction Optimization of Urban Water Environment Governance Based on Artificial Intelligence
1 Introduction
2 Research on Green Construction Optimization of Urban Water Environment Governance Based on Artificial Intelligence
2.1 The Development History of Artificial Intelligence
2.2 Construction Characteristics of Water Environment Engineering
2.3 Green Construction
3 Investigation and Research on Green Construction Optimization of Urban Water Environment Governance Based on Artificial Intelligence
3.1 Analysis of Multi-objective Optimization Problems in Green Construction Project Management
3.2 Data Collection
4 Analysis and Research on Green Construction Optimization of Urban Water Environment Governance Based on Artificial Intelligence
4.1 Data Analysis
4.2 Evaluation of Optimization Results
5 Conclusions
References
Design and Experience of Virtual Ski Resort Based on VR Technology and Meteorological Condition Simulation
1 Introduction
2 Method
2.1 Technical Methods
2.2 The Design for the Snow Road
3 The Study on the Simulation of Weather Conditions for Skiing
4 The Design for VR Interactive Experience Process Design
4.1 Initial Scenario Construction
4.2 Simulation Design of Meteorological Elements
4.3 Human-Computer Interaction Design
4.4 The Design for Control System
5 VR Skiing Experience Effect Design for Different Weather Conditions
6 Conclusions
References
Intelligent Screening and Mining Technology of Software Vulnerability Programs in Power Internet of Things Terminals
1 Introduction
2 Vulnerability Mining Technology Based on Static Analysis
2.1 Lexical Analysis
2.2 Grammar Analysis
3 Design of Code Vulnerability Mining Tool for Power Terminal
3.1 Tool Overview
3.2 Tool Design
4 Results and Analysis of Code Vulnerability Mining Tool for Power Terminal
4.1 Efficiency Test
4.2 Performance Test
5 Conclusion
References
The Application of Improved ID3 Algorithm in College PE Teaching
1 Introduction
2 Related Works
3 Overview of Data Mining
4 Overview of ID3 Algorithm
4.1 Basic Concepts
4.2 Introduction to Classical Algorithms and Algorithm Improvement Methods
5 College PE Teaching Strategy Based on Algorithm
5.1 Application Analysis of the Algorithm
5.2 Teaching Strategies
6 Conclusion
References
The Stability of Online Discrete Switched System Based on Computer
1 Introduction
2 System Description
3 Stability of Linear Discrete Switched Systems
3.1 Stability Analysis
3.2 Extension to Uncertain Systems
4 Discussion
4.1 Comparison with Markovian Jump System
4.2 Comparison with Bang-Bang Control System
4.3 Comparison with Variable Structure Control System
4.4 Comparison with LPV System
5 Conclusion
References
Garment Pattern Element Extraction Based on Artificial Bee Colony Algorithm
1 Introduction
2 Related Works
3 Garment Pattern Element Extraction Based on Artificial Bee Colony Algorithm of Bee Honey Collection Behavior
3.1 Honeybee Honey Collection Mechanism
3.2 Artificial Bee Colony Algorithm
3.3 Threshold Segmentation Principle of Garment Pattern Image
3.4 Garment Pattern Threshold Segmentation Based on Artificial Bee Colony Algorithm
4 Conclusion
References
Intelligent Hotel System Design Based on Internet of Things
1 Introduction
2 The Relationship Between the Internet of Things and Smart Hotel System and the Significance of System Design
2.1 Relationship
2.2 Design Significance
3 Design Scheme of Smart Hotel System Based on Internet of Things
3.1 Functional Layer Design
3.2 Communication Layer Design
3.3 Terminal Layer Design
4 Conclusion
References
Construction of Campus English Education Resources Information Platform Based on Internet of Things
1 Introduction
2 Research Methods
2.1 Design of Basic Service Layer
2.2 Design of Application Layer
3 Result Analysis
3.1 Function Test
3.2 Performance Test
3.3 Application Effect of Teaching Resource Service Platform
4 Conclusion
References
Design and Application of Personalized Recommendation Module for English Writing Marking System Based on Theme Model
1 Introduction
2 Related Works
3 Basic Concepts of Topic Model
3.1 Basic Concepts
4 Personalized Recommendation Module Design Method
4.1 Design Procedure
4.2 Design Method
5 Conclusion
References
Design of DNA Storage Coding and Encoding System Based on Transformer
1 Introduction
1.1 Forms of Storage (Shown in Fig. 1)
1.2 DNA Storage
2 Literature Review
2.1 DNA Storage
2.2 Transformer
3 Research Methods
3.1 Huffman Coding
3.2 Rotating Encoding (Shown in Fig. 6)
3.3 Transformer
3.4 “Word” Tokenization
4 Results
4.1 Encoding Part
4.2 Decoding Part
5 Discussion
References
A Further Modification of a Revised B-S Model
1 Introduction
2 Our Model
2.1 g&h Distribution
2.2 The Non-arbitrage Condition
2.3 The Option Pricing Formula
3 Empirical Study
3.1 Pre-process the Data
3.2 Estimate the Parameters
3.3 Compare the Results
4 Discussion
References
Construction of English Teaching Resource Database Based on Big Data Technology
1 Introduction
2 Literature Review
3 Methods
3.1 Construction of English Teaching Resource Database Under the Background of Big Data
3.2 The Overall Structure and Functional Module Design of the English Teaching Resource Library
3.3 The Overall Design of the Database
3.4 The Connotation and Composition of the Integrated Development of English Teaching Resources in the Era of Big Data
4 Results and Analysis
4.1 Performance Test of English Teaching Resource Database Platform
4.2 Application of Big Data in English Teaching Resource Database
5 Conclusion
References
USMOTE: A Synthetic Data-Set-Based Method Improving Imbalanced Learning
1 Introduction
2 Previous Work on Imbalanced Learning
3 Performance Measures
4 Methodology
4.1 Testing on Real-World Data Set
4.2 USMOTE
4.3 Imbalanced Learning Using USMOTE
4.4 Improve Logistic Regression by USMOTE
5 Results of the Experiments
5.1 Stop Evaluating Models on Transformed Data Set
5.2 Imbalanced Machine Learning Using USMOTE
5.3 Logistic Regression with USMOTE
6 Future Work
References
Design and Implementation of Mobile Terminal Network Broadcast Platform
1 Introduction
2 Basic Concepts and Advantages of Mobile Live Streaming Platforms
2.1 Basic Concepts
2.2 Advantages
3 Basic Framework Design and Implementation Methods
3.1 Frame Design
3.2 Implementation Method
4 Conclusion
References
Monitoring of Tourist Attractions Based on Data Collection of Internet of Things
1 Introduction
2 Basic Concept of Internet of Things Data Collection and Application Advantages of Scenic Spot Monitoring
2.1 Basic Concepts
2.2 Advantages of Monitoring Application in Tourist Attractions
3 The Design Scheme and Implementation Method of the Monitoring System for the Internet of Things Tourist Attractions
3.1 Design Scheme
3.2 Implementation Method
4 Conclusion
References
Author Index

Lecture Notes on Data Engineering and Communications Technologies 173

Zheng Xu · Saed Alrabaee · Octavio Loyola-González · Niken Dwi Wahyu Cahyani · Nurul Hidayah Ab Rahman   Editors

Cyber Security Intelligence and Analytics The 5th International Conference on Cyber Security Intelligence and Analytics (CSIA 2023), Volume 2

Lecture Notes on Data Engineering and Communications Technologies, Volume 173

Series Editor: Fatos Xhafa, Technical University of Catalonia, Barcelona, Spain

The aim of the book series is to present cutting-edge engineering approaches to data technologies and communications. It publishes the latest advances on the engineering task of building and deploying distributed, scalable and reliable data infrastructures and communication systems. The series has a prominent applied focus on data technologies and communications, with the aim of promoting the bridge from fundamental research on data science and networking to data engineering and communications that lead to industry products, business knowledge and standardisation. Indexed by SCOPUS, INSPEC, EI Compendex. All books published in the series are submitted for consideration in Web of Science.

Zheng Xu · Saed Alrabaee · Octavio Loyola-González · Niken Dwi Wahyu Cahyani · Nurul Hidayah Ab Rahman Editors

Cyber Security Intelligence and Analytics The 5th International Conference on Cyber Security Intelligence and Analytics (CSIA 2023), Volume 2

Editors

Zheng Xu, Shanghai Polytechnic University, Shanghai, China
Saed Alrabaee, United Arab Emirates University, Abu Dhabi, United Arab Emirates
Octavio Loyola-González, Stratesys, Madrid, Spain
Niken Dwi Wahyu Cahyani, Telkom University, Cileunyi, Jawa Barat, Indonesia
Nurul Hidayah Ab Rahman, Universiti Tun Hussein Onn Malaysia, Parit Raja, Malaysia

ISSN 2367-4512 ISSN 2367-4520 (electronic) Lecture Notes on Data Engineering and Communications Technologies ISBN 978-3-031-31774-3 ISBN 978-3-031-31775-0 (eBook) https://doi.org/10.1007/978-3-031-31775-0 © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Switzerland AG The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

Preface

The 5th International Conference on Cyber Security Intelligence and Analytics (CSIA 2023) is an international conference dedicated to promoting novel theoretical and applied research advances in the interdisciplinary agenda of cyber security, particularly focusing on threat intelligence, analytics, and countering cybercrime. Cyber security experts, including those in data analytics, incident response and digital forensics, need to be able to rapidly detect, analyze and defend against a diverse range of cyber threats in near real-time conditions. For example, when a significant amount of data is collected from or generated by different security monitoring solutions, intelligent and next-generation big data analytical techniques are necessary to mine, interpret and extract knowledge from these (big) data. Cyber threat intelligence and analytics are among the fastest growing interdisciplinary fields of research, bringing together researchers from digital forensics, political and security studies, criminology, cyber security, big data analytics and machine learning to detect, contain and mitigate advanced persistent threats and to fight organized cybercrime.

CSIA 2023, building on the success of the online meetings of 2020 to 2022 (held online due to COVID-19) and of the 2019 meeting in Wuhu, China, is proud to be in its fifth consecutive year. CSIA 2023 is organized at the Radisson Blu Shanghai Pudong Jinqiao Hotel and features a technical program of refereed papers selected by the international program committee, together with keynote addresses. Each paper was reviewed by at least two independent experts.

The conference would not have been a reality without the contributions of the authors. We sincerely thank all the authors for their valuable contributions. We would like to express our appreciation to all members of the program committee for their valuable efforts in the review process, which helped us to guarantee the highest quality of the selected papers for the conference. Our special thanks also go to the editors of the Springer book series "Lecture Notes on Data Engineering and Communications Technologies", Thomas Ditzinger and Suresh Dharmalingam, for their assistance throughout the publication process.

Organization

The 5th International Conference on Cyber Security Intelligence and Analytics (CSIA 2023)
http://csia.fit/csia2023
March 30–31, 2023, Shanghai, China

Steering Committee Chair

Kim-Kwang Raymond Choo, University of Texas at San Antonio, USA

General Chair

Zheng Xu, Shanghai Polytechnic University, China

Program Committee Chairs

Saed Alrabaee, United Arab Emirates University, UAE
Octavio Loyola-González, Stratesys, Spain
Niken Dwi Wahyu Cahyani, Telkom University, Indonesia
Nurul Hidayah Ab Rahman, Universiti Tun Hussein Onn Malaysia, Malaysia

Publication Chairs

Juan Du, Shanghai University, China
Shunxiang Zhang, Anhui University of Science and Technology, China


Publicity Chairs

Neil Y. Yen, University of Aizu, Japan
Vijayan Sugumaran, Oakland University, USA
Junchi Yan, Shanghai Jiaotong University, China

Local Organizing Chairs

Chuang Ma, Shanghai Polytechnic University, China
Xiumei Wu, Shanghai Polytechnic University, China

Program Committee Members

Guangli Zhu, Anhui University of Science and Technology, China
Tao Liao, Anhui University of Science and Technology, China
Xiaobo Yin, Anhui University of Science and Technology, China
Xiangfeng Luo, Shanghai University, China
Xiao Wei, Shanghai University, China
Huan Du, Shanghai University, China
Zhiguo Yan, Fudan University, China
Zhiming Ding, Beijing University of Technology, China
Jianhui Li, Chinese Academy of Sciences, China
Yi Liu, Tsinghua University, China
Kuien Liu, Pivotal Inc., USA
Feng Lu, Chinese Academy of Sciences, China
Wei Xu, Renmin University of China, China
Ming Hu, Shanghai University, China
Abdelrahman Abouarqoub, Middle East University, Jordan
Sana Belguith, University of Auckland, New Zealand
Rozita Dara, University of Guelph, Canada
Reza Esmaili, Amsterdam University of Applied Science, Netherlands
Ibrahim Ghafir, Loughborough University, UK
Fadi Hamad, Israa University, Jordan
Sajjad Homayoun, Shiraz University of Technology, Iran
Nesrine Kaaniche, Telecom SudParis, France
Steven Moffat, Manchester Metropolitan University, UK

Charlie Obimbo, University of Guelph, Canada
Jibran Saleem, Manchester Metropolitan University, UK
Maryam Shahpasand, Asia Pacific University Malaysia, Malaysia
Steven Walker-Roberts, Manchester Metropolitan University, UK

Contents

Ecological Balance Construction and Optimization Strategy Based on Intelligent Optimization Algorithm (Xinfeng Zhang) 1
The Design of Virtual Reality Systems for Metaverse Scenarios (Tianjian Gao and Yongzhi Yang) 11
Design and Development of Health Data Platform for Home-Based Elderly Care Based on AAL (Xiaoli Zhang) 21
Logistics Distribution Optimization System of Cross-Border e-Commerce Platform Based on Bayes-BP Algorithm (Zhengjun Xie) 30
Application of FCM Clustering Algorithm in Digital Library Management System (Yanjun Zhou) 40
Logistics Distribution Path Planning and Design Based on Ant Colony Optimization Algorithm (Yan Wang) 50
The Development of Power Grid Digital Infrastructure Based on Fuzzy Comprehensive Evaluation (Shengyao Shi, Qian Miao, Shiwei Qi, Zhipeng Zhang, and Yuwei Wang) 59
Virtual Reality Technology in Indoor Environment Art Design (Shuran Zhang) 69
Application of Digital Virtual Art in 3D Film Animation Design Effect (Yueping Zhuang) 78
Digital Media Art and Visual Communication Design Method Under Computer Technology (Jiaxin Chen) 87
Instrumentation Automation Control System Based on HSV Space and Genetic Algorithm (Jingchao Yuan) 97

Biological Tissue Detection System Based on Improved Optimization Algorithm (Haihua Wang) 107
Application of Computer Network Security Technology in Software Development (Min Xian, Xiang Zheng, and Xiaoqin Ye) 117
The Application of Financial Technology in the Intelligent Management of Credit Risk Under the Background of Big Data (Tingting Xie) 127
Measurement and Prediction of Carbon Sequestration Capacity Based on Random Forest Algorithm (Jiachun Li, Jiawei Fu, and Justin Wright) 137
Microbial Growth Rate Identification and Optimization System Based on Matrix Decomposition Algorithm (Yuanchang Jin and Yufeng Li) 145
Construction Building Interior Renovation Information Model Based on Computer BIM Technology (Tao Yang and Yihan Hu) 155
Data Acquisition Control System Applying RFID Technology and Wireless Communication (Xiaohong Cao, Hong Pan, Xiaojuan Dang, and Jiangping Chen) 165
Modified K-means Algorithm in Computer Science (CS) Accurate Evaluation Platform (Jinri Wei, Yi Mo, and Caiyu Su) 176
The Application of Decision Tree Algorithm in Psychological Assessment Data (Ping Li) 185
DT Algorithm in Mechanical Equipment Fault Diagnosis System (Zijian Zhang, Jianmin Shen, Zhongjie Lv, Junhui Chai, Bo Xu, Xiaolong Zhang, and Xiaodong Dong) 195
Development of Industrial Historical and Cultural Heritage Display System Based on Panoramic VR Tech (Xiaoyu Sun and Ze Wang) 204
Application of Improved Particle Swarm Optimization Algorithm in Power Economic Dispatch System (Yige Ju) 216
Commodity Design Structure Matrix Sorting Algorithm on Account of Virtual Reality Skill (Tengjiao Liu and Apeksha Davoodi) 227
Machine Vision Communication System Based on Computer Intelligent Algorithm (Yuanyuan Duan) 237
Performance Evaluation of Rural Informatization Construction Based on Big Data (Yaping Sun and Ruby Bhadoria) 247
Genetic Algorithm in Ginzburg-Landau Equation Analysis System (Bentu Li) 258
Object Detection in UAV Images Based on Improved YOLOv5 (Zhenrui Chen, Min Wang, and Jinhai Zhang) 267
Evaluation Indicators of Power Grid Planning Considering Large-Scale New Energy Access (Zhirui Tang and Guihua Qiu) 279
Certificateless Blind Proxy Signature Algorithm Based on PSS Standard Model (Li Liu and You You) 290
Mine Emergency Rescue Simulation Decision System on Account of Computer Technology (Hongtao Ma) 300
Smart Phone User Experience for the Elderly Based on Public Platform (Dan Ji) 310
Design and Research of IOT Based Logistics Warehousing and Distribution Management System (Suqin Guo, Yu Zhang, and Xiaoxia Zhao) 319
Shipping RDF Model Construction and Semantic Information Retrieval (Wei Guan and Yiduo Liang) 329
Application of 3D Virtual Reality Technology in Film and Television Production Under Internet Mode (Zhenping Gao) 341
Simulation Research on Realizing Animation Color Gradient Effect Based on 3D Technology (Zhenping Gao) 350
Cross-Modal Retrieval Based on Deep Hashing in the Context of Data Space (Xiwen Cui, Dongxiao Niu, and Jiaqi Feng) 360
Monitoring Algorithm of Digital Power Grid Field Operation Target Based on Mixed Reality Technology (Peng Yu, Shaohua Song, Xiangdong Jiang, Hailin Gong, Ying Su, and Bin Wu) 370
Design and Analysis of Geotechnical Engineering Survey Integrated System Based on GIS (Yan Wang) 380
Automatic Classification and Storage System of Container Yard on Account of Mean Clustering Algorithm (Di Cui) 389
Design and Implementation of Intelligent Traffic Monitoring System Based on IOT and Big Data Analysis (Yongling Chu, Yanyan Sai, and Shaochun Li) 400
Simulation of Passenger Ship Emergency Evacuation Based on Neural Network Algorithm and Physics Mechanical Model (Dehui Sun and Muhammad Khan) 411
A Design for Block Chain Service Platform (Jinmao Shi) 421
Resource Evaluation and Optimization of Wireless Communication Network Based on Internet of Things Technology (Xin Yin, Yong Yuan, Ruifeng Mo, Xin Mi, and Wenqiang Li) 431
Green Construction Optimization of Urban Water Environment Governance Based on Artificial Intelligence (Xinwei Zhang) 441
Design and Experience of Virtual Ski Resort Based on VR Technology and Meteorological Condition Simulation (Haimin Cheng, Guiqin Fu, and Yushan Liu) 449
Intelligent Screening and Mining Technology of Software Vulnerability Programs in Power Internet of Things Terminals (Xin Jin, Shuangmei Yu, Hengwang Liu, Wen Wang, and Xin Sun) 459
The Application of Improved ID3 Algorithm in College PE Teaching (Jing Yang and Yunjian Xia) 469
The Stability of Online Discrete Switched System Based on Computer (Shan Gao) 478
Garment Pattern Element Extraction Based on Artificial Bee Colony Algorithm (Baojuan Yang) 487
Intelligent Hotel System Design Based on Internet of Things (Qingqing Geng, Yu Peng, and Sundar Rajasekaran) 495
Construction of Campus English Education Resources Information Platform Based on Internet of Things (Lijun Yang, Zhihong Wang, and Dongxia Zhao) 504
Design and Application of Personalized Recommendation Module for English Writing Marking System Based on Theme Model (Meng Liang) 514
Design of DNA Storage Coding and Encoding System Based on Transformer (Siran Kong) 523
A Further Modification of a Revised B-S Model (Jiaxun Li) 535
Construction of English Teaching Resource Database Based on Big Data Technology (Li Chen, Linlin Wang, and Gagan Mostafa) 545
USMOTE: A Synthetic Data-Set-Based Method Improving Imbalanced Learning (Junyi Wang) 554
Design and Implementation of Mobile Terminal Network Broadcast Platform (Bing Wang) 565
Monitoring of Tourist Attractions Based on Data Collection of Internet of Things (Yu Peng and Qingqing Geng) 574
Author Index 583

Ecological Balance Construction and Optimization Strategy Based on Intelligent Optimization Algorithm

Xinfeng Zhang(B)
Xi'an Eurasia University, Xi'an, Shaanxi, China
[email protected]

Abstract. Ecological balance was first proposed by the British plant ecologist Tansley in 1935. It includes individual ecological balance, group ecological balance and ecosystem balance. Ecological balance means that the structure and function of each part of an ecosystem are in a dynamic state of relative adaptation and coordination within a certain period of time and under relatively stable conditions. This paper studies the establishment and optimization of network ecological balance, aiming at an ecological balance construction and optimization strategy based on an intelligent optimization algorithm. A dynamically increasing compression factor is introduced into the firefly algorithm, and a DCFA algorithm is proposed. Because the firefly algorithm relies on random search within the search area, a compression factor based on an inertia coefficient is introduced to narrow the search region and better control the area the algorithm explores; this design yields the DCFA algorithm. The DCFA algorithm is then applied to automated ecosystem testing, which confirms that it can quickly find optimal solutions and construct a suitable ecosystem. The paper further recommends accelerating the development of new technologies, monitoring the rapid development of the network ecosystem, strengthening social supervision, promoting the harmony and cultivation of the network ecosystem, enhancing the ethical awareness of network subjects, and improving the network's capacity for recovery and adaptation. Experiments show that satisfaction with the optimized ecological balance system reaches 90%, and its ability to resist network attacks is nearly doubled.

Keywords: Intelligent optimization algorithm · Ecological balance · Ecosystem optimization · Firefly algorithm

1 Introduction

The network shrinks the space between people and within communities; it is closely tied to everyday life and has become an inseparable part of human activity, lifestyle and education. However, most of what people know about the network is limited to the technical level: from a technology standpoint, people see the network mainly as a tool. Meanwhile, in contrast to the care given to the natural environment, the network environment is pushed by some netizens into a secondary position, and abuse and over-exploitation of the Internet have thrown the network ecosystem out of balance. This greatly affects people's confidence in participating in the development of network services and network infrastructure, and disrupts the global network. Network ecological instability has aroused widespread concern and attention from governments, individuals and universities around the world, and governments in many countries have developed rules and regulations to maintain the healthy growth of the online ecosystem. Many professionals at home and abroad have carried out extensive research from the technical point of view and from network practice [1, 2].

Many scholars have studied ecological balance construction and optimization strategies based on intelligent optimization algorithms and achieved good results. For example, Liu Y pointed out that an ecosystem is a unified system that includes all organisms and the physical environment in a specific area; he believes that as long as the main components are present, can interact, and achieve some kind of functional stability, the whole can be regarded as an ecosystem, even if it is short-lived [3]. Xu S put forward the concept of the "network ecosystem", that is: "All other social systems that affect network development constitute the ecological environment of network development. When they influence each other, a network ecology is formed" [4].

In this paper, a dynamically increasing compression factor is introduced into the firefly algorithm, and a DCFA algorithm is proposed. Because the firefly algorithm relies on random search within the search area, a compression factor based on an inertia coefficient is introduced to narrow the search region and better control the area the algorithm explores; this design yields the DCFA algorithm. The DCFA algorithm is then applied to automated ecosystem testing, which confirms that it can quickly find optimal solutions and construct a suitable ecosystem. The paper also proposes measures to maintain balance: accelerate the development of new technologies, monitor the rapid development of the network ecosystem, strengthen social supervision, promote the harmony and cultivation of the network ecosystem, enhance the ethical awareness of network subjects, and improve the network's capacity for recovery and adaptation.
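The chapter gives no pseudocode for DCFA at this point, so the following is only a minimal sketch of the stated idea: a standard firefly update whose random-walk term is shrunk by a compression factor that grows with the iteration count. The objective function, the linear schedule for the compression factor, and all parameter values below are illustrative assumptions, not the authors' settings.

import numpy as np

def sphere(x):
    # Toy objective (minimization); stands in for the paper's unspecified ecosystem fitness.
    return np.sum(x ** 2)

def dcfa(objective, dim=10, n_fireflies=25, iters=200,
         beta0=1.0, gamma=1.0, alpha0=0.5, seed=0):
    # Firefly search with a dynamically increasing compression factor.
    # c(t) grows from ~0 to 1 and shrinks the random-walk step, so early
    # iterations explore widely and later ones stay in a tighter region.
    # The linear schedule is an assumption; the paper only says the factor increases.
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-5.0, 5.0, size=(n_fireflies, dim))
    cost = np.array([objective(p) for p in pos])

    for t in range(iters):
        c = (t + 1) / iters              # dynamically increasing compression factor (assumed schedule)
        alpha = alpha0 * (1.0 - c)       # compressed random-step size
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if cost[j] < cost[i]:    # firefly j is "brighter" (lower cost), so i moves toward it
                    r2 = np.sum((pos[i] - pos[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)   # attractiveness decays with distance
                    step = rng.uniform(-0.5, 0.5, dim)
                    pos[i] += beta * (pos[j] - pos[i]) + alpha * step
                    cost[i] = objective(pos[i])
    best = int(np.argmin(cost))
    return pos[best], cost[best]

if __name__ == "__main__":
    x_best, f_best = dcfa(sphere)
    print(f"best cost after search: {f_best:.6f}")

The design intent behind shrinking the random term late in the run is that exploration dominates at the start while the compression factor confines the fireflies to a progressively smaller neighborhood of the best solutions toward the end, which matches the stated goal of better controlling the area the algorithm detects.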

2 Research on Ecological Balance Construction and Optimization Strategy Based on Intelligent Optimization Algorithm

2.1 Functions of the Network Ecosystem

(1) Internal functions. The intrinsic function of the network ecosystem is to meet human needs through the network itself. The internal functions of the network are becoming more and more complex. Today, people can do many things through the Internet: chat online, shop online, play online games, browse the web, and more. Not only can many things be done only over the Internet, but even things that are impossible in the real world become possible online. The Internet gives life a completely different meaning. People are constantly changing the network, and the network is constantly changing the real world [5]. The internal functions of the network ecosystem are realized through the flow of information, which is most vividly manifested in many information services, such as data


transmission. In the past, public information was delivered through mass media - TV, radio, newspapers, etc. - while non-public information was transmitted by telephone, fax, telegram, etc. The network combines the two: both public and non-public information can be transmitted through the network. Because information flows more freely, the amount of information increases, and information can play a greater role where it is needed [6].

(2) External function. The external function of the network ecosystem refers to the impact of the network on human culture and social progress. The Internet has made it faster and easier to access and disseminate the fruits of human civilization, which is of high value for social development. The growth rate of the network ecosystem is an important indicator of the level of social informatization.

2.2 Network Ecological Imbalance

An ecosystem is a responsive system with self-regulation capabilities, but its self-regulation capability and its tolerance of destructive factors are limited. Once this limit is exceeded, regulation fails and the system is damaged. Ecological imbalance refers to the damage caused when external pressures and disturbances exceed what the ecosystem can absorb: the structure of the system is damaged, its functions decline, and in severe cases the ecosystem collapses. If the structure and function of each part of the ecosystem remain in a balanced state of continuous renewal within a certain period of time and under relatively stable conditions, this is ecological balance. However, any ecosystem's ability to self-regulate and its tolerance of destructive factors have a limit; when that limit is exceeded, regulation fails, the system degrades, and the balance is lost. When the ecological instability exceeds the self-regulation ability of the system and self-recovery is no longer possible, a series of ecological crises will occur [7, 8].

2.3 Application of Intelligent Optimization Algorithm in Ecological Balance

The factors that affect ecological balance are very complex and represent the combined effect of many causes. They are usually classified as natural factors and human factors. Natural factors refer to abnormal phenomena in nature. Human factors include the irrational exploitation and use of natural resources, as well as the environmental problems caused by the development of modern industry and agriculture. For example, with the development of industrialization, the excessive pursuit of economic growth has led to the overexploitation of natural resources such as land, forests, minerals, water resources and energy; at the same time, the release of large quantities of toxic and harmful substances in the "three wastes" has exceeded the limits of the natural ecosystem's capacity for self-regulation, self-repair, self-balance and growth, causing serious damage to the global ecological balance. The factors affecting the ecological balance are shown in Fig. 1:


Fig. 1. Factors affecting ecological balance: damage caused by species change, destruction caused by changes in environmental factors, and changes in information systems

As shown in Fig. 1, there are three main ways in which humans disturb the ecosystem. (1) Damage to ecological balance caused by species change. When human beings transform nature, they often take short-sighted measures that cause the extinction of certain species in an ecosystem, or blindly introduce certain organisms into a region, resulting in the destruction of the entire ecosystem. For example, Australia originally had no rabbits; they were later introduced from Europe for fur. Because the introduced rabbits had no natural enemies, they bred in huge numbers in a short time, stripping grasses and trees bare and becoming a feared pest. (2) The destruction of ecological balance caused by changes in environmental factors is mainly the imbalance caused by the change of certain elements in the environment. With the rapid development of modern industry and agricultural production, many pollutants have entered the atmosphere. These toxic and harmful substances will, on the one hand, damage specific species and break the food chain, disturbing the flow of matter and energy in the biological system and weakening the function of the ecosystem; on the other hand, they also degrade the ecological environment. For example, with the development of chemical and metal


smelting and other industries, large amounts of harmful gases such as sulfur dioxide, carbon dioxide, nitrogen oxides, hydrocarbons and soot have been produced, causing serious pollution of the atmosphere and water. The intelligent optimization algorithm mainly used in this paper is the firefly optimization algorithm (FA). The algorithm exploits the different luminous intensities of individuals in a firefly swarm and their mutual attraction: a firefly with a stronger light attracts nearby fireflies with weaker light toward it. Based on this characteristic, the population repeatedly attracts and moves, updating positions iteratively until the optimal solution is found [9, 10]. To build the mathematical model of the FA algorithm, the concepts of absolute brightness and relative brightness must first be defined. The absolute brightness of firefly individual i is its brightness at distance r = 0, denoted Ii. The relative brightness Iij is the luminous intensity of firefly i as perceived at the position of firefly j. For a firefly at position Xi = (xi1, xi2, ..., xid), the absolute brightness is

Ii = f(Xi)   (1)

The brightness perceived between fireflies is not constant; it gradually weakens as the distance between the two increases and as light is absorbed by substances in the air. Therefore, the relative brightness of firefly individual i with respect to firefly individual j can be expressed as

Iij(rij) = Ii · e^(−γ · rij²)   (2)
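The paper gives no pseudocode for the DCFA variant; the sketch below is a minimal Python rendering of the standard firefly update implied by Eqs. (1)-(2), with a step-size compression factor that tightens over the iterations standing in for the dynamically increasing compression factor described above. The compression schedule, step size alpha and absorption coefficient gamma are illustrative assumptions rather than values taken from the paper.

```python
import numpy as np

def firefly_optimize(f, dim, n=25, iters=200, beta0=1.0, gamma=1.0, alpha=0.2,
                     bounds=(-5.0, 5.0), seed=0):
    """Minimise f with a firefly search. The random step is scaled by a factor
    that shrinks as iterations progress, loosely mimicking the DCFA idea of
    gradually compressing the explored region (schedule chosen for illustration)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, size=(n, dim))          # firefly positions
    cost = np.array([f(x) for x in X])              # lower cost = brighter firefly, Eq. (1)
    for t in range(iters):
        shrink = 1.0 / (1.0 + t) ** 0.5             # compression grows with t, so the random step shrinks
        for i in range(n):
            for j in range(n):
                if cost[j] < cost[i]:               # j is brighter, so i moves toward j
                    r2 = float(np.sum((X[i] - X[j]) ** 2))
                    beta = beta0 * np.exp(-gamma * r2)    # attraction follows Eq. (2)
                    X[i] = X[i] + beta * (X[j] - X[i]) + alpha * shrink * rng.uniform(-0.5, 0.5, dim)
                    X[i] = np.clip(X[i], lo, hi)
                    cost[i] = f(X[i])
    best = int(np.argmin(cost))
    return X[best], cost[best]

# Example: minimise the 3-dimensional sphere function.
best_x, best_cost = firefly_optimize(lambda x: float(np.sum(x ** 2)), dim=3)
```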

2.4 Measures to Maintain the Ecological Balance of the Network

The network ecological crisis is not without a solution. As with the natural environment, as long as the main actors on the network protect the network ecosystem, the network ecological crisis can be properly resolved. The network is an ecosystem of people and information, so what must be confronted is not a crisis of individual elements but a crisis of the network ecosystem as a whole. Solving these problems requires a comprehensive plan: not only the efforts of governments, enterprises and individuals, but also global cooperation. We should pay attention to the role of network technology, but we cannot ignore the importance of rules and practice. An "excellent Internet" is the common aspiration of mankind. To this end, it is necessary not only to rely on technological progress, but also to strengthen research on and the construction of network norms and legal practice. Only by combining technology with practice and law can a more humane network ecosystem be created [11, 12].


3 Ecological Balance Construction and Optimization Strategy Design Experiment Based on Intelligent Optimization Algorithm

3.1 Experimental Setup

The triangle classification program can be described simply: the three sides of a triangle are the input of the program and the output is the type of triangle - equilateral, isosceles, ordinary triangle or non-triangle. The test data are generated randomly, and the probability that randomly generated data happen to cover the equilateral-triangle path is very small; that is, the equilateral branch is difficult to cover. A minimal version of this benchmark program is sketched at the end of this section. This paper constructs the most basic network ecological simulation environment, uses the fewest ecological participation elements, runs simulations with the intelligent optimization algorithm, and searches for the optimal solution.

3.2 Ecological Balance Stability and Ecological Quality Test

This paper compares two ecological balances simulated by the intelligent optimization algorithm: the most basic ecological balance and the most complex one. By comparing their ecological balance stability and ecological quality, the optimal solution is obtained.
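The triangle-classification program itself is not listed in the paper; the following Python version of the standard benchmark (an assumption about its exact form) makes the coverage problem concrete - random side lengths almost never satisfy the equality needed for the equilateral branch.

```python
import random

def classify_triangle(a, b, c):
    """Classic triangle-classification benchmark: three side lengths in, triangle type out."""
    if a <= 0 or b <= 0 or c <= 0:
        return "not a triangle"
    if a + b <= c or a + c <= b or b + c <= a:   # triangle inequality violated
        return "not a triangle"
    if a == b == c:
        return "equilateral"                      # the branch random data rarely reaches
    if a == b or b == c or a == c:
        return "isosceles"
    return "ordinary"

# Random integer inputs almost never hit the equilateral branch,
# which is why a guided search such as DCFA is used to generate test data.
counts = {}
for _ in range(10000):
    sides = [random.randint(1, 100) for _ in range(3)]
    label = classify_triangle(*sides)
    counts[label] = counts.get(label, 0) + 1
print(counts)
```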

4 Experimental Analysis of Ecological Balance Construction and Optimization Strategy Based on Intelligent Optimization Algorithm

4.1 Ecological Balance Stability Test

In this paper, the stability of the two ecological balances simulated by the intelligent optimization algorithm will be tested: three different network virus attacks will be carried out on them respectively, and the duration of the two ecosystems will be calculated. The experimental data are shown in Table 1.

Table 1. Stability testing of two ecosystems (duration under each attack)

            Attack 1    Attack 2    Attack 3
Base        52          37          26
Complex     49          39          35

From Fig. 2, it can be seen that under the first attack the basic ecosystem holds up longer, but as the number of attacks increases, the complex ecosystem shows stronger resistance, and its duration becomes longer than that of the basic ecological balance system.

Fig. 2. Comparison of the stability of two different ecosystems

4.2 Ecological Balance Quality Test

This paper conducts a simulation test to examine the changes in the network environment under the two ecosystems, whether the network ecological elements are optimized, and user satisfaction. 200 users are selected to score the two ecological simulation systems, and the average score for each item is obtained. The data are shown in Table 2.

Table 2. User evaluations of the two systems (average scores)

            Aesthetics    Operability    Expansion capacity
Base        61            87             82
Complex     92            94             63

From Fig. 3, we can see that in terms of aesthetics and operability the complex ecosystem scores higher than the basic ecosystem, while the basic ecosystem is slightly better in terms of future expandability. On balance, this paper chooses the complex ecosystem as the optimal ecological balance system.


Fig. 3. Different users' satisfaction surveys for the two ecosystems

4.3 Strategies for Maintaining Ecological Balance

(1) The rational development and use of natural resources is the basis for maintaining ecological balance. Only by paying attention to the coordination between the structure and function of the ecosystem can nature be developed and the environment transformed while the ecological balance is preserved. Only through this coordination can the natural ecosystem adapt to changes in the external environment and develop continuously, so that local natural resources are used appropriately and fully; and only this coordination can prevent the environmental deterioration caused by excessive damage to structure and function. In the utilization of biological resources, attention should be paid to maintaining a certain population size and ensuring appropriate age and sex ratios. This is an ecological guideline that must be followed in forest harvesting, grassland grazing, fishery and similar production processes in order to ensure the sustainable reproduction and recovery of biological resources; otherwise, it will inevitably lead to the depletion of resources and damage to the ecological environment.


(2) When transforming nature and constructing large-scale projects, we should focus on the overall situation and weigh short-term against long-term impacts from the perspective of ecological benefits, balancing economy and ecology. Damage to the ecological balance is often local at first but long-lasting and difficult to eliminate. For example, in water conservancy construction, not only the use of water resources but also the resulting changes to the ecological environment should be considered; damage to the ecological environment will lead to serious consequences. (3) Actively promote the comprehensive utilization of resources to achieve natural ecological balance. In the process of economic development, natural resources should be used fully, the "three wastes" should be turned into resources and energy or rendered harmless, and the impact on the environment should be reduced. In short, as long as human beings respect nature, cherish nature and follow its laws while transforming it, it is possible to maintain or restore the ecological balance and achieve harmonious development between humanity and nature.

5 Conclusions

The Internet is a second space created by human activity, a virtual world between people and the network, and it must maintain balance in a general sense. To preserve that balance, the issue must be raised to the level of interdependence between humans and the network environment, and to the recognition that the network has become an essential medium for human activity. Protecting the network ecology is as important as protecting the natural ecology. This paper derives an optimal network ecological balance system through a multi-angle intelligent optimization algorithm; simulation calculations show that the optimized balance system lasts the longest, maintains the continuous output of high-quality network works by each network module, and helps prevent the network from being damaged by harmful content and viruses.

References 1. Husnain, G., Anwar, S.: An intelligent cluster optimization algorithm based on Whale Optimization Algorithm for VANETs (WOACNET). PLoS ONE 16(4), e0250271 (2021) 2. Ali, J., Saeed, M., Tabassam, M.F., Iqbal, S.: Controlled showering optimization algorithm: an intelligent tool for decision making in global optimization. Comput. Math. Organ. Theory 25(2), 132–164 (2019). https://doi.org/10.1007/s10588-019-09293-6 3. Salahshour, E., Malekzadeh, M., Gordillo, F., et al.: Quantum neural network-based intelligent controller design for CSTR using modified particle swarm optimization algorithm. Trans. Inst. Meas. Control. 41(2), 392–404 (2019) 4. Al_Araji, A., Ibraheem, B.A.: A comparative study of various intelligent optimization algorithms based on path planning and neural controller for mobile robot. Univ. Baghdad Eng. J. 25(8), 80–99 (2019) 5. Ekinci, S., Demiroren, A., Hekimoglu, B.: Parameter optimization of power system stabilizers via kidney-inspired algorithm. Trans. Inst. Meas. Control. 41(5), 1405–1417 (2019)


6. Milojevi, A., Shin, M., Oldham, K.R.: A novel design approach for micro-robotic appendages comprised of active and passive elements with disparate properties. J. Intell. Mater. Syst. Struct. 33(1), 136–159 (2022) 7. Gamal, M., Elsawy, A., Atta, A.: Hybrid algorithm based on chicken swarm optimization and genetic algorithm for text summarization. Int. J. Intell. Eng. Syst. 14(3), 319–131 (2021) 8. Kharrich, M., Kamel, S., Ellaia, R., et al.: Economic and ecological design of hybrid renewable energy systems based on a developed IWO/BSA algorithm. Electronics 10(6), 687 (2021) 9. Zakaria, M.S., Abdullah, H.: Milling optimization based on genetic algorithm and conventional method. J. Adv. Res. Dyn. Control Syst. 12(7), 1179–1186 (2020) 10. Rubini, L.J., Perumal, E.: Hybrid Kernel support vector machine classifier and grey wolf optimization algorithm based intelligent classification algorithm for chronic kidney disease. J. Med. Imaging Health Inform. 10(10), 2297–2307 (2020) 11. Mansoor, M., Mirza, A.F., Long, F., et al.: An intelligent tunicate swarm algorithm based MPPT control strategy for multiple configurations of PV systems under partial shading conditions. Adv. Theory Simulat. 4(12), 2100246- (2021) 12. Kordrostami, Z., Raeini, A., Ghoddus, H.: Design and optimization of lightly doped CNTFET architectures based on NEGF method and PSO algorithm. ECS J. Solid State Sci. Technol. 8(4), M39–M44 (2019)

The Design of Virtual Reality Systems for Metaverse Scenarios Tianjian Gao and Yongzhi Yang(B) Shandong University of Art and Design, Jinan, Shandong, China [email protected]

Abstract. The metaverse is the result of a change from quantitative to qualitative accumulation in the development of specific technologies, and it incorporates several scientific and technological advances. The concept of the metaverse has brought new changes to the academic world, and all sectors of society will face new opportunities and challenges in its development. The aim of this paper is to investigate the design of virtual reality systems for metaverse scenarios. This paper introduces virtual reality technology and human-computer interaction technology, puts forward an approach for dealing with the interpenetration of the human body and objects in virtual scenes, optimises the core collision detection algorithm for this need, builds a virtual scene of a traditional building on the mainstream virtual reality engine Unity3D, and completes the algorithm tests. The experimental results show that the hybrid algorithm is superior in terms of the realism of human-computer interaction. Keywords: Metaverse Scenes · Virtual reality · System design · Architectural design

1 Introduction

The metaverse cannot be separated from science and technology, and the construction of the metaverse is based on the rapid development of science and technology. Without the progress of science and technology, it is impossible to discuss the construction of the metaverse [1, 2]. The study of the metaverse from the perspective of the philosophy of science and technology can more intuitively demonstrate the morphological characteristics of the digital and computer society under the modern technological development trend, and provide a reference for the direction of metaverse development. Virtual reality and its realistic simulation of interactive experiences can provide a good user experience [3, 4]. It is well known that the Internet will play a key role in the infrastructure of the metaverse, where different immersive virtual worlds will communicate with each other. However, the existing TCP/IP model is not applicable to metaverse networks. Therefore, Sara Ventura explored an initial framework for an improved TCP/IP model. First, a new layer called the metaverse is defined. This layer was then placed at the top, i.e. above the application layer in the TCP/IP model. As a result, a new six-layer


network model was obtained. Their simulation experiments demonstrated the feasibility of the new model [5]. The metaverse is seen as the next-generation Internet paradigm, which allows humans to play, work and socialise in alternative virtual worlds, such as virtual reality (VR) worlds rendered through head-mounted displays. With ubiquitous wireless connectivity and powerful edge computing technologies, VR users in a wireless edge-enabled metaverse can immerse themselves in virtual worlds by accessing VR services provided by different providers. Brenda K. Wiederhold presents a learning-based framework for incentivising virtual reality services in the metaverse. First, perceived quality is proposed as a metric for how deeply virtual reality users immerse themselves in virtual worlds. Second, to quickly trade VR services between VR users (i.e., buyers) and VR service providers (i.e., sellers), a double Dutch auction mechanism is designed to determine the optimal pricing and allocation rules for this market. Third, to reduce auction communication, a deep reinforcement learning-based auctioneer is designed to speed up the auction process. Experimental results show that the proposed framework can achieve near-optimal social welfare while reducing the cost of auction information exchange by at least half compared to the baseline approach [6]. This paper first analyses the design principles of the roaming interactive system and the purpose of system planning, and then implements key technologies such as the system building environment, virtual reality and collision detection. Through this study, the connotation of traditional architectural culture can be expressed more systematically and clearly, and 3D interactive roaming technology and network information technology can be applied to the dissemination of traditional architectural culture, bringing architectural culture into public view and offering inspiration for preserving and widely disseminating the traditional architectural culture of other nationalities.

2 Related Technologies

2.1 Design Principles for Roaming Interactive Systems

The architectural roaming system should first be a specific design based on the characteristics of the audience. The clear positioning of the audience and the analysis of user needs are the starting point of the entire design [7, 8]. Furthermore, as the fundamental aim of the system design is to disseminate traditional architectural culture, the basic principle that the whole design must follow is to maintain the authenticity of traditional architectural culture in the information content of the system and to transfer it completely to the virtual architectural space. The architectural scenes in the virtual roaming system are often realistic, but not completely independent of artistic treatment. In the final analysis, virtual architecture belongs to the field of expressive digital virtual art design, which has an artistic character centred on the visual channel among the multiple sensory channels for receiving external information. It is therefore important to realise the beauty of real architecture, but it is also necessary to think about how to reflect the beauty of digital art. Furthermore, according to the three levels of design content in affective design - instinctive, behavioural and reflective - the visual pleasure caused by design beauty is one of the basic design goals of the instinctive level of affective design [9, 10].


2.2 The Purpose of Overall System Planning

As a practical art design work, the use and purpose of the traditional architectural virtual roaming system are its basic attributes, and its purpose is closely related to the emotional experience and cultural communication that people obtain through the design. Only when the designer proposes appropriate and reasonable design content in the design process is the system likely to achieve the ultimate goal of spreading architectural culture [11].

2.3 Technical Support for Metaverse

(1) XR technology. XR (extended reality) technology covers VR (virtual reality), AR (augmented reality) and MR (mixed reality) technologies for combining virtual content with reality. Compared with the 2D interface experience of the traditional Internet, the metaverse uses 3D visualisation interfaces (VR glasses, AR glasses, etc.) with more realistic 3D effects. XR technologies are highly user-oriented and create immersive sensory experiences that make it difficult to distinguish between virtual and actual space [12, 13].
(2) Human-Computer Interaction (HCI). The logic of metaverse virtual-real interaction lies in the fact that the computer perceives human behaviour and consciousness, the result of which acts on the digital twin and eventually outputs human consciousness or commands to external AI devices. This is an important technical basis for the metaverse to connect the physical and virtual worlds, i.e. the problem of human-computer interaction [14].

2.4 Real-Time Collision Detection Algorithm Based on Enclosing Box Tree

(1) Axis-aligned bounding boxes. An axis-aligned bounding box is defined as the smallest hexahedron containing the object, with each of its edges parallel to one of the three principal coordinate axes. The bounding box can be uniquely determined by six scalars representing its minimum and maximum values on the three principal axes. An axis-aligned bounding box can be represented as:

R = {(x, y, z) | xmin ≤ x ≤ xmax, ymin ≤ y ≤ ymax, zmin ≤ z ≤ zmax}

(1)

xmin, xmax, ymin, ymax, zmin and zmax are the minimum and maximum values of the bounding box on the x, y and z axes respectively.


(2) Sphere enclosing box. The sphere enclosing box is defined as the smallest sphere that contains the object, and can be expressed as:

R = {(x, y, z) | (x − xc)² + (y − yc)² + (z − zc)² < r²}

(2)

where (xc, yc, zc) are the coordinates of the centre of the ball and r is the radius of the ball.
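To make the use of these two bounding volumes concrete, the sketch below implements the elementary overlap tests on which a bounding-box-tree traversal is built; it is an illustrative Python fragment, not the collision-detection code used in the system described later.

```python
from dataclasses import dataclass

@dataclass
class AABB:
    """Axis-aligned bounding box, stored as the six scalars of Eq. (1)."""
    xmin: float
    xmax: float
    ymin: float
    ymax: float
    zmin: float
    zmax: float

@dataclass
class BoundingSphere:
    """Bounding sphere of Eq. (2): centre (xc, yc, zc) and radius r."""
    xc: float
    yc: float
    zc: float
    r: float

def aabb_overlap(a: AABB, b: AABB) -> bool:
    # Two axis-aligned boxes intersect iff their intervals overlap on all three axes.
    return (a.xmin <= b.xmax and b.xmin <= a.xmax and
            a.ymin <= b.ymax and b.ymin <= a.ymax and
            a.zmin <= b.zmax and b.zmin <= a.zmax)

def sphere_overlap(a: BoundingSphere, b: BoundingSphere) -> bool:
    # Two spheres intersect iff the squared centre distance is at most the squared radius sum.
    d2 = (a.xc - b.xc) ** 2 + (a.yc - b.yc) ** 2 + (a.zc - b.zc) ** 2
    return d2 <= (a.r + b.r) ** 2
```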

3 Design of a Virtual Reality System for Metaverse Scenarios

3.1 Virtual Reality System Build

This paper uses the 3D virtual reality mechanism of Unity on a 64-bit Windows 7 system as the development platform for building the system. Unity3D is a game development tool that includes modular features such as an integrated editor, a C# source code compiler, 3D model manipulation and a physics engine with visual editing capabilities [15, 16]. The PhysX physics engine is built into the underlying layers of the software; it is the most commonly used game physics engine today and includes collision detection, collision response and related computational functions [17].

3.2 Visual Interface Design

When creating architectural monoliths, the designer should emphasise the most representative artistic features of the building. The most distinctive architectural elements - high-purity, high-contrast colour compositions, window tiles with repeated decorative patterns, the overall trapezoidal form of the building and other parts of the realistic scene that the public tends to overlook - require the designer to enhance the editing and treatment of materials when building the scene, so as to emphasise the ethnic characteristics of the building [18]. At the same time, the integration of the natural and human environments must be fully considered when building the scene environment. Blue sky and white clouds have always been an environment to aspire to, and the building appears more solemn and sacred precisely because it stands in such a pure environment. In the visual interface of the system, an ideal and realistic sky environment is simulated through lighting parameters, enriching the visual experience of the user. The final result of the design is shown in Fig. 1.


Fig. 1. System interface scenario

3.3 Virtual Character Control

This paper uses 3DSMAX to model the virtual character: first, parts of the body are modified and the virtual human body and facial shape are sculpted; second, skin details are adjusted and map effects such as bump mapping are set up and modified according to factors such as skin age and colour; finally, the character's clothes are designed. When the 3DSMAX character modelling is complete, the file is exported in .fbx format for easy import into the Unity3D engine. The modelling of the virtual character is shown in Fig. 2. After importing the virtual character, a Mecanim animation is created and mapped from the simplified human body structure to the actual skeleton provided. After importing the character animation template, the animation type is selected in the Import Settings panel; this application uses Humanoid animation, chosen from the Animation Type drop-down list and confirmed with the Apply button. The Mecanim animation system stores information about the skeleton structure of the automatically matched skeleton model. A total of six virtual animations are defined in this paper: greeting, standing akimbo, talking, moving forward, switching lights on and off, and looking confused. The six animations are attached to the virtual character, and the animation effects need to match different scenes, calling different virtual animations to make the virtual character more realistic and give the user an immersive feeling. The character animation control interface is shown in Fig. 3. One character animation is called at a time; the character's actions - greeting, standing akimbo, talking, walking forward, turning lights on and off, and looking confused - are controlled by code, the result of the call is output by voice, and the dialogue sentences in the processed text are output in conjunction with the animation playback.


Fig. 2. Virtual character modeling

Fig. 3. Virtual character animation control (say hello, akimbo, puzzled, speak, switch lamp, forward)


4 Analysis and Research of a Virtual Reality System for Metaverse Scenarios

4.1 Testing Collision Detection Algorithms

This paper focuses on the characteristics of human-computer interaction technology in virtual environments and on the processing ideas used when the phenomenon of human-object interpenetration occurs. The collision detection algorithm was optimised in two ways: the enclosing box tree structure and the intersection test of the basic geometric elements. In the experiments, models with different numbers of faces were placed in the virtual world, and the time required for a single collision detection was measured for the unoptimised algorithm, the algorithm with enclosing box tree structure optimisation, the algorithm with intersection test optimisation, and the hybrid optimisation algorithm. The experimental results recorded through the program are shown in Table 1 and Fig. 4.

Table 1. Test schedule: time required for a single collision detection (ms)

Number of model faces    Not optimized    Bounding box tree optimization    Intersection test optimization    Hybrid optimization
4000                     37               33                                26                                23
8000                     65               58                                44                                40
12000                    108              112                               78                                77
16000                    133              141                               121                               114

The structural optimisation of the enclosing box tree mainly improves the storage performance of the algorithm and is not as good as the traditional algorithm in terms of computational performance; the optimisation of the intersection test improves the computational performance of the algorithm and significantly reduces the time required for collision detection. The combination of the two optimisations significantly improves the storage performance and computational performance of the algorithm.


Fig. 4. Time required for single collision detection in case of collision

4.2 Roaming Tests

In order to verify the feasibility of this control method, each experimenter is given a roaming task: travel from point A to point B in the scene and back within 100 s, otherwise the task is counted as failed. When an obstacle is hit, the system shows a dialogue prompting the collision and 2 s are added as a penalty. Five experimenters conducted the online experiment, each performing 10 tasks; the experimental results are shown in Table 2.

Table 2. Experimental results

Experimenter      Number of tasks completed    Average score/s
Experimenter 1    7                            14
Experimenter 2    9                            11
Experimenter 3    7                            14
Experimenter 4    10                           10
Experimenter 5    8                            12


According to Table 2, the experimenters completed an average of 8 of the 10 online tasks; Experimenter 4 performed best, completing all 10 tasks with an average time of 10 s.

5 Conclusions

The prosperity of the metaverse brings endless visions of future life, and countless science-fiction scenarios seem to be laid out before us. Based on the theory of virtual reality, this paper designs and builds an Internet-based virtual reality system using programming languages and software (such as 3DSMAX, VRML, Java and JavaScript). There are still many areas for improvement. For example, even with the various modelling approaches tried and combined, system speed remains a major constraint on development. As user needs grow and the demands on the system increase, the system will continue to expand and its normal operation will be severely tested. Seeking better modelling methods and improving existing ones is a priority for future research.

References 1. Gaggioli, A., Chirico, A.: Call for special issue papers: virtual emotions: understanding affective experiences in the metaverse: deadline for manuscript submission: July 31, 2022. Cyberpsychol. Behav. Soc. Netw. 25(2), 85–86 (2022) 2. Jin, H., Hwang, J., Luo, B., Kim, T., Sung, Y.: Licensing effect of pro-environmental behavior in metaverse. Cyberpsychol. Behav. Soc. Netw. 25(11), 709–717 (2022) 3. Riva, G., Villani, D., Wiederhold, B.K.: Call for special issue papers: HUMANE METAVERSE: opportunities and challenges towards the development of a humane-centered metaverse: deadline for manuscript submission: December 31, 2022. Cyberpsychol. Behav. Soc. Netw. 25(6), 332–333 (2022) 4. Riva, G., Wiederhold, B.K.: What the metaverse is (really) and why we need to know about It. Cyberpsychol. Behav. Soc. Netw. 25(6), 355–359 (2022) 5. Ventura, S., Lullini, G., Riva, G.: Cognitive rehabilitation in the metaverse: insights from the tele-neurorehab project. Cyberpsychol. Behav. Soc. Netw. 25(10), 686–687 (2022) 6. Brenda, K.: Wiederhold: ready (or not) player one: initial musings on the metaverse. Cyberpsychol. Behav. Soc. Netw. 25(1), 1–2 (2022) 7. Saker, M., Frith, J.: Contiguous identities the virtual self in the supposed Metaverse. First Monday 27(3) (2022) 8. Kammler, P., Gravemeier, L.S., Göritz, L., Kammler, F., Gembarski, P.G.: Values of the Metaverse: Hybride Arbeit in virtuellen Begegnungsräumen. HMD Prax. Wirtsch. 59(4), 1062–1074 (2022) 9. Buchholz, F., Oppermann, L., Prinz, W.: There’s more than one metaverse. i-com 21(3), 313–324 (2022) 10. Krasnyanskiy, M., Obukhov, A., Dedov, D.: Control system for an adaptive running platform for moving in virtual reality. Autom. Remote. Control. 83(3), 355–366 (2022) 11. Akbarifar, F., Dukelow, S.P., Mousavi, P., Scott, S.H.: Computer-aided identification of strokeassociated motor impairments using a virtual reality augmented robotic system. Comput. Methods Biomech. Biomed. Eng. Imaging Vis. 10(3), 252–259 (2022)


12. Safikhani, S., Keller, S., Schweiger, G., Pirker, J.: Immersive virtual reality for extending the potential of building information modeling in architecture, engineering, and construction sector: systematic review. Int. J. Digit. Earth 15(1), 503–526 (2022) 13. Alonso, R., Bonini, A., Recupero, D.R., Spano, L.D.: Exploiting virtual reality and the robot operating system to remote-control a humanoid robot. Multim. Tools Appl. 81(11), 15565– 15592 (2022) 14. Bui, V., Alaei, A.: Virtual reality in training artificial intelligence-based systems: a case study of fall detection. Multim. Tools Appl. 81, 32625–32642 (2022). https://doi.org/10.1007/s11 042-022-13080-y 15. Molina, G., Gimeno, J., Portalés, C., Casas, S.: A comparative analysis of two immersive virtual reality systems in the integration and visualization of natural hand interaction. Multim. Tools Appl. 81(6), 7733–7758 (2022). https://doi.org/10.1007/s11042-021-11760-9 16. Hernández-Melgarejo, G., Luviano-Juárez, A., Fuentes-Aguilar, R.Q.: A framework to model and control the state of presence in virtual reality systems. IEEE Trans. Affect. Comput. 13(4), 1854–1867 (2022) 17. Al-Jundi, H.A., Tanbour, E.Y.: A framework for fidelity evaluation of immersive virtual reality systems. Virtual Real. 26, 1103–1122 (2022). https://doi.org/10.1007/s10055-021-00618-y 18. Monica, R., Aleotti, J.: Evaluation of the Oculus Rift S tracking system in room scale virtual reality. Virtual Real. 26(4), 1335–1345 (2022)

Design and Development of Health Data Platform for Home-Based Elderly Care Based on AAL Xiaoli Zhang(B) Department of Information Management, Dalian Neusoft University of Information, Dalian, China [email protected]

Abstract. China has gradually entered an aging society and is the country with the largest elderly population in the world. With the improvement of medical software and hardware facilities and of intelligent systems in China, many intelligent medical devices, such as home blood glucose monitors and ambulatory blood pressure monitors, are entering the daily life of the elderly, and the elderly are gradually accepting and widely using these smart devices and other mobile Internet products. This paper introduces the design and development of a health data platform for home-based elderly care based on AAL, and describes the technical architecture and core functions of the platform, so as to improve home-based elderly care through AAL technology, address the challenges of an aging society, and enhance the happiness of the elderly cared for at home. Keywords: ambient assisted living · smart elderly care · internet of things · remote patient monitoring

1 Introduction

Aging has become one of the greatest social and economic challenges of the 21st century. It is estimated that the elderly population in the world will reach 2.1 billion by 2050 [1]. China has the largest elderly population in the world: its elderly population will reach 480 million, almost one quarter of the world's elderly population. From 2014 to 2050, the consumption potential of China's elderly population is expected to grow from almost 4 trillion to about 106 trillion, and its proportion of GDP will increase from about 8% to about 33%, making China the country with the largest market potential in the global aging industry [2, 3]. The report of the 20th National Congress of the Communist Party of China (CPC) pointed out: "Implement the national strategy of actively responding to the aging of the population, develop the elderly care industry, optimize services for the elderly, and promote the realization of basic elderly care services for all the elderly." We should adapt to the change in the needs of the elderly from survival to development, actively promote the aging cause and industry to keep pace with the times, and promote the aging


transformation in all fields, including the home, so as to build an elderly-friendly society in which the elderly can enjoy a happy old age. In recent years, "9073" or "9064" elderly care models have emerged in many places, that is, 90% of the elderly are cared for at home, 7% or 6% in community institutions, and 3% or 4% in institutions such as nursing homes (see Fig. 1). Influenced by the strong traditional attachment to home in China, most of the elderly are unwilling to enter social elderly care institutions, preferring to age at home, based on the family and relying on the community. The ability of nursing staff to care for the elderly may be directly or indirectly affected by different factors, such as a lack of health information about the person being cared for and the inability to provide timely nursing services due to distance constraints [4].

Fig. 1. “9073” or “9064” elderly care models

Ambient Assisted Living (AAL) provides assistance for the elderly living independently at home, reduces the cost of treatment, better monitors the activities of the elderly, and provides daily life assistance, so as to reduce the economic and social pressure caused by population aging. Assisted living has therefore become an independent and highly interdisciplinary research field, attracting researchers from medicine, science and technology, and the social sciences [6]. This work, based on AAL, researches and develops a real-time system over body area sensor networks and wireless sensor networks, builds a home care health data platform, and is devoted to solving problems such as the limited professional level of nursing staff, poor sanitary conditions and poor medication compliance.

2 Introduction of Related Technologies

2.1 AAL

Ambient Assisted Living (AAL) technology is an important research direction for alleviating the problem of population aging. It is an expandable intelligent technology platform that connects various devices to build an environment that responds immediately.


It uses mobile communication technology to obtain the user's state and environment data and to analyze them. It can monitor the user's physical condition in real time, train cognitive ability, provide automatic emergency calls, and help users carry out basic daily life activities, aiming to improve the quality of life of the elderly [6].

2.2 Smart Elderly Care

Smart elderly care is people-centered and designed for the living environment and security of the elderly according to their living habits and characteristics. Its core goal is to improve the quality of life of the elderly, and its main purpose is to promote their physical and mental health. It makes full use of IoT, big data, cloud computing, intelligent hardware, artificial intelligence and other new-generation information technologies. Its products achieve the effective connection and optimal allocation of communities, families, individuals, institutions and health care resources, promote the intelligent upgrading of healthcare services, and improve the quality and efficiency of healthcare services.

2.3 Internet of Things

Internet of Things (IoT) technology is being widely used. It refers to the ubiquitous connection between things, and between things and people, through all possible forms of network access, as well as the intelligent perception, recognition and management of things and processes. Through devices and technologies such as global positioning systems, information sensors and infrared sensors, it connects any object or process that needs real-time monitoring, connection and interaction, and collects the information needed about its sound, light, position and so on, so as to realize the interconnection of everything [7, 8]. IoT is an information carrier based on the Internet and traditional telecommunications networks [9, 10].

2.4 Remote Patient Monitoring

The rising cost of medical care and the aging of the world's population have promoted the development of telemedicine and of networks providing multiple medical services. Telemedicine uses integrated health information systems and telecommunications technologies to enable scientists, doctors and other medical professionals around the world to provide services to more patients [11, 12]. The signals provided by body sensors can be processed to obtain reliable and accurate physiological estimates, allowing remote doctors to give real-time opinions on medical diagnosis and prescriptions. Such an intelligent medical system can support diagnostic procedures, the management of chronic diseases, and supervision and rehabilitation. Patient monitoring applications typically track vital signs and provide real-time feedback and information to help patients recover [13]. In this way, the patient can remain under the doctor's supervision in a natural physiological state, without limiting normal activities or causing costly harm. Activities-of-daily-living monitoring tracks the daily activities of patients with certain


diseases, such as home rehabilitation monitoring after surgery for patients in the post-surgery rehabilitation period or the recovery period during hospitalization. Such remote monitoring is safer, cheaper and more convenient, and a great deal of work in this field has been reported in the literature. This work extends the scenario to the daily home care of the elderly. Wearable and intelligent sensor devices have broad application prospects in environmental protection and medical health and have therefore received great attention; their development provides the hardware for upgrading intelligent elderly care. For example, smart watches can remotely monitor the daily life status of the elderly, support one-click emergency calls and voice calls with family members, set up video monitoring, and protect the safety of the elderly. Some intelligent devices can also monitor the elderly's heart rate, blood pressure and other health indicators in real time.

3 Health Data Platform for Home-Based Elderly Care Based on AAL

3.1 Construction Objectives

The elderly have reduced visual, auditory, tactile and other perceptual abilities, reduced flexibility of the muscular and skeletal systems, declining physical strength, and significantly slower reaction speed. At the same time, with the development of medical devices and intelligent devices, more and more intelligent devices suitable for the elderly are coming onto the market. Wireless sensors are embedded in these devices and are expected to completely change the ways and technical means of human health monitoring. Wireless sensors are becoming smaller, lighter and cheaper, and 5G technology greatly improves network transmission speed. The design and development of an AAL-based home care health data platform can therefore achieve preventive care, improve the quality of life, and reduce health costs. This project develops a home-based elderly care health data platform based on AAL, which can communicate with wearable devices suitable for the elderly, collect health data such as blood pressure, blood oxygen, ECG, blood glucose, exercise status and body temperature, and apply technologies such as multi-sensor or sensor groups to improve the response time of the real-time body area sensor network system, shorten the time from device acquisition to cloud storage, visualize the data analysis, and improve the BAN (Body Area Network). The core of this system is health data monitoring for the elderly. Online measurement and display of patients' vital parameters are an important part of monitoring patients' health [14]. The system can assist the elderly in their daily life: normal activities and falls can be distinguished by movement status, body temperature and positioning sensors. The system can also monitor the status of chronic diseases and analyze medication compliance through continuous monitoring of blood pressure, blood glucose, etc., in conjunction with intelligent medicine boxes. At present, many young Chinese people live far from their parents, abroad or in other cities, and few live with them in the same city. Imagine a scene in


which the elderly spend their later years in their familiar homes, wearing ambulatory blood pressure monitors, ECG recorders and similar devices that capture their blood pressure, ECG, falls and posture in real time, and measuring their weight and blood glucose regularly. These data are transmitted to the cloud platform in real time through the network, and the system automatically analyzes them and feeds the results back to the elderly and their children in graphical form; if there is any abnormality, the system automatically raises an alarm. It is also possible to monitor health data over a long period, analyze trends, and even achieve preventive treatment.

3.2 Overall Architecture

In the overall software architecture of the project, in order to increase processing speed and meet the requirements of medical equipment for real-time monitoring of blood pressure, blood oxygen, blood glucose and other high-concurrency health data collection, the platform adopts a microservice architecture with decoupled, independently developed and deployed services. The technical architecture of the platform is shown in Fig. 2. The health data platform for home-based elderly care based on AAL can be abstracted into three layers: the platform presentation layer, the business logic layer and the data access layer.

Fig. 2. Architecture of health data platform for home-based elderly care based on AAL

The top layer is the platform presentation layer. It is browser-based and therefore independent of the terminal's operating system and configuration, simplifying client installation, deployment and system maintenance. Vue is used to reduce


the coupling between front-end and back-end code, facilitating iterative upgrades of the presentation layer. The middle layer is the business logic layer. It uses a REST API for data transmission (JSON format); this lightweight application programming interface is responsible for the information interaction between the platform presentation layer and the data access layer, communicating through the API interface and MQ message queue middleware. This layer is responsible only for data integration and forwarding, not for data storage, so as to improve data throughput. The lowest layer is the data access layer. It uses the Spring Data JPA persistence framework and Redis storage, and works with Shiro to manage permissions. MySQL is used for data storage and query.

3.3 Core Functions

The platform consists of three subsystems: the user mobile health monitoring system, the doctor PC online monitoring system and the background management system. The main functions of each subsystem are shown in Fig. 3.

Fig. 3. System function module structure diagram

The users of the mobile health monitoring system are mainly the elderly living at home and their families. The main functions of the system include user information, device binding, health data query and the alarm module. The system is simple and easy to use: on first use, users only need to enter personal information and bind their smart devices. The system obtains the current user's health data through the Bluetooth link with the intelligent device, and if there is any abnormality, an alarm is issued. The system also supports the maintenance of family members. The bound family members


can view the user's health data at any time and will also receive the user's alarm reminders. In addition, the "Message Notification" module of the system pushes messages or notifications to users, such as medication notifications, health reminders, the latest policies and other advisory information. The doctor PC online monitoring system serves doctors and nurses; patient management and health data monitoring are its main functions. After logging in, a doctor can view his or her patient list and click each patient's details to view the patient's personal information and health data monitoring results. Personal information includes basic information such as name, age, height, weight and previous medical history. Health data monitoring results support viewing the monitoring results for heart rate, blood pressure, blood oxygen, etc. The background management system mainly provides services for platform operators. It includes six core business modules: user management, doctor and institution management, health data monitoring, equipment management, the alarm module and online notification. The functional modules are described as follows:

User information: personal information of the elderly, such as name, mobile number, age, height, weight, urban area, specific address, family members, children's contact information and other basic information.

Doctor and institution information: the doctor's name, contact information, rank, department, affiliated institution, graduation school and other information; institution information includes the name, address, telephone number, etc.

Health data monitoring: through Bluetooth communication with the device, real-time health data are obtained, such as regularly measured heart rate, blood pressure and blood oxygen data. The system can draw the corresponding blood pressure curve, blood oxygen curve, etc. from the current user's health data, so as to intuitively reflect the user's health. See Fig. 4 for the detail page.

Device management: including device query, user binding, unbinding and other functions. Through this module, the use status of each device, including whether it is online, can be queried.

Alarm module: taking blood pressure as an example, the platform sets alarm blood pressure limits for each user. If a reading is higher than the high-pressure limit or lower than the low-pressure limit, the system automatically raises an alarm. The system handles the alarm according to its urgency: for emergency hypertension or hypotension it automatically dials the user's mobile phone while pushing the alarm information to the medical staff and guardians (a simplified sketch of this threshold check is given at the end of this section).

Online notice: including medication notices, health reminders, the latest advice and health consulting information.

The health data platform for home-based elderly care based on AAL includes the mobile health monitoring system, the doctor PC online monitoring system and the background management system, forming a closed loop from patient onboarding, online binding of equipment and real-time monitoring of health data to one-to-one doctor service; it realizes digital, monitorable health for elderly care at home. Based on this platform, more medical devices can be added to monitor more health data in the future, such as blood oxygen and EEG, and more functions can also


Fig. 4. Patient Detail Page in Background Management System

be provided, such as online medical treatment, intelligent customer service, prevention and treatment, scientific health care online micro-video.
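As an illustration of the alarm module described above, the fragment below sketches the threshold check for a blood-pressure reading in Python; the numeric limits and the two alarm levels are illustrative placeholders, not the per-user values the platform would actually configure.

```python
def check_blood_pressure(systolic, diastolic, limits=None):
    """Return an alarm level for one reading. Per-user limits would come from the
    platform's user profile; the defaults and margins here are illustrative only."""
    limits = limits or {"sys_high": 140, "sys_low": 90, "dia_high": 90, "dia_low": 60}
    # Far outside the normal range: treat as an emergency (auto-dial the user,
    # notify medical staff and guardians).
    if systolic >= limits["sys_high"] + 40 or systolic <= limits["sys_low"] - 20:
        return "emergency"
    # Outside the configured limits: push an ordinary alarm to medical staff
    # and bound family members.
    if (systolic >= limits["sys_high"] or systolic <= limits["sys_low"] or
            diastolic >= limits["dia_high"] or diastolic <= limits["dia_low"]):
        return "alarm"
    return "normal"

# A reading of 185/95 triggers the emergency path; 150/85 triggers an ordinary alarm.
print(check_blood_pressure(185, 95), check_blood_pressure(150, 85))
```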

4 Conclusions

The health data platform for home-based elderly care based on AAL is an interactive system and a new concept of information technology serving the elderly, involving sociology, ethics and law, wearable technology, distributed systems and other disciplines. Behavior recognition, smart homes, wireless sensor networks and home care are also focal points in the field of ambient assisted living. This paper analyzes the architecture model of the AAL-based home elderly care health data platform, which can connect to intelligent devices to collect the health data of the elderly, such as blood pressure, blood oxygen and blood sugar, in a timely manner, and includes an alarm mechanism so that nursing staff and family members can receive alarm information. The platform can monitor the health and daily life of the elderly, help guarantee a high-quality old age, and reduce the burden on children. In particular, children who cannot accompany their parents in person can learn about the health of the elderly remotely and in real time through this platform and provide care. With the growing demand of the elderly for an independent, high-quality life, intelligent elderly care will become the trend of the future. The application and

Design and Development of Health Data Platform

29

promotion of this platform is conducive to improving the aging problem of society, reducing the security risk of home-based elderly care, improving the quality of homebased elderly care, increasing the economic benefits of elderly care services, and reducing the economic and social pressure brought by the aging society.


Logistics Distribution Optimization System of Cross-Border e-Commerce Platform Based on Bayes-BP Algorithm

Zhengjun Xie(B)

Department of Business Administration, Xi'an Eurasia University, Xi'an 710065, Shaanxi, China
[email protected]

Abstract. In recent years, the development of CBE (cross-border e-commerce) has driven the development of many overseas markets. With the popularity of domestic e-commerce platforms, many companies have turned their attention to overseas markets such as Europe, America, and Southeast Asia, where many local consumers have strong purchasing power; with China's increasing international influence and the encouraging policies introduced in many countries, CBE has clear prospects. By analyzing the development of CBE platforms, this paper finds that logistics and distribution play a key role in the long-term development of overseas e-commerce. The development of cross-border logistics is limited by many factors, such as long transportation distances and customs involvement, so research on cross-border logistics is particularly necessary. Based on the Bayes-BP algorithm, this paper designs a logistics distribution optimization system for the CBE platform, aiming to optimize the platform's distribution paths and improve distribution efficiency. The results show that the average cost of path 3 is 21.33 before optimization and 14.62 after optimization, a saving of 6.71, and that the average cost of path 4 is 25.66 before optimization and 14.02 after optimization, a saving of 11.64. Optimizing logistics distribution with the proposed system can therefore effectively reduce distribution costs and increase revenue.

Keywords: Cross-border e-commerce · Bayes-BP algorithm · e-Commerce platform · Logistics and distribution

1 Introduction

With the rapid development of today's economy, people can buy what they want without going out, thanks to the rapid development of the logistics industry: express delivery sites can be seen almost everywhere people live. In recent years, domestic e-commerce competition has been very intense, so some companies have moved their targets abroad, and many overseas regions have great market value [1]. Through CBE logistics, goods can be exported to foreign countries and increase domestic economic growth, and the logistics cost has become a key factor.


Therefore, this paper examines the logistics distribution optimization system of the CBE platform, which can not only improve the logistics efficiency of enterprises and reduce logistics costs, but also allow the rational allocation of enterprise resources and maximize financial benefits.

In recent years, many researchers have studied the logistics distribution optimization system of the CBE platform based on the Bayes-BP algorithm and achieved good results. For example, Lambert D M argues that the logistics and distribution costs of CBE platforms are relatively high, accounting for a large part of the total cost of the entire sales process; therefore, if distribution efficiency can be improved and the losses caused by distribution failures greatly reduced, the level of income can be increased [2]. Landis J R believes that CBE is the fastest growing form of trade in recent years and that research on its logistics and distribution has high value [3]. At present, scholars at home and abroad have conducted a lot of research on the logistics distribution optimization systems of CBE platforms, and these previous theoretical and experimental results provide a basis for this paper.

Based on Bayes-BP algorithm theory, this paper studies and analyzes the logistics distribution optimization system of the CBE platform; improving distribution efficiency is a key link in the optimization of logistics routes. The paper first studies the scale of the CBE logistics market in recent years, which has gradually expanded and brought the logistics industry considerable income. Secondly, by comparing the CBE platform's logistics distribution before and after optimization, it can be seen that the average logistics cost is reduced after the distribution paths are optimized, indicating that the distribution optimization system can improve logistics efficiency and bring higher logistics benefits.

2 Related Theoretical Overview and Research

2.1 Problems Existing in Cross-Border e-Commerce Logistics Platforms

The logistics link generally includes warehousing, sorting, packaging, and distribution. Efficient logistics services can promote CBE, enhance the consumer shopping experience, and facilitate order placement. At present, the efficiency and cost of cross-border logistics are the focus of CBE [4, 5]. In terms of efficiency, due to the particularity of cross-border logistics, the freight cycle is generally long, which prolongs transaction time; transactions are also easily affected by exchange-rate fluctuations, which change transaction costs, easily lead to a poor customer experience, cause customer loss, and thereby reduce economic gains. In terms of cost, because of the long distances involved in cross-border logistics and customs supervision, it is difficult to obtain scale advantages, so logistics costs remain high and are passed on to commodity prices, which also affects CBE development to some extent [6]. In addition, CBE logistics is constrained by commodity characteristics: unlike domestic e-commerce, every CBE transaction is subject to customs supervision, so some special commodities cannot be purchased through CBE. Finally, there is the issue of after-sales service: when customers are dissatisfied with the products they receive and want to return them, they face problems such as high logistics costs and long return cycles, which seriously affect the shopping experience and are not conducive to CBE development.


Given the importance of cross-border logistics, CBE cannot do without it, so cross-border logistics has become a pain point in the development of the industry. Third-party logistics companies usually do not take the requirements of both buyers and sellers into account; for them, the job is done as long as the goods are delivered to the designated location. With the rapid development of the Internet, the number of logistics companies has increased dramatically, so the e-commerce logistics environment has attracted more and more attention. In many cases, to improve efficiency and save costs, logistics companies sacrifice part of the customer experience, which is not worth the loss [7, 8]. For many logistics companies, the quality of service cannot meet customers' requirements, yet high-quality logistics services are very important for customer satisfaction. For e-commerce companies with their own logistics systems, the logistics companies are their subsidiaries and the logistics systems are part of their own functions, so they can easily control their own logistics services [9].

2.2 An Overview of the Relevant Theories of the Bayes-BP Algorithm

With the development of neuroscience, artificial neural network technology is becoming more and more mature. An artificial neural network is built on engineering technology and forms a network by studying the human brain; it simulates the neurons of the brain many times and therefore needs to continuously receive input and output signals [10]. Artificial neural network simulation methods rely on nonlinear parallel processing to perform related functions. The BP neural network is the most widely used artificial neural network; it can classify arbitrarily complex patterns, has excellent multidimensional function mapping capabilities, and has been widely used, for example, to predict urban water demand [11]. The BP neural network has its own characteristics: it can contain several hidden layers, has a distinct neuron structure, and generalizes well. During learning, the signal is propagated forward through the network. The input layer, hidden layers, and output layer are the main structures of a BP neural network, and the connection weights link the neurons of adjacent layers, from the input layer through the hidden layers to the output layer. The BP model has a distinctive operating method: through iterative calculation and continuous correction it minimizes the difference between the expected values and the sample data. Although the model has good mapping capabilities, it also has some unavoidable flaws: the error is reduced along the negative gradient direction, and the corresponding weights between connected nodes are modified step by step [12]. In the process of running a BP neural network, random variables are uncertain because they are composed of many unknown variables.
To study such random variables properly with the probability-distribution method, before any sample data are taken, the probability distribution assigned under model H to the parameter θ is called the prior distribution.
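As a minimal sketch of the BP network structure and training described above, the following Python example trains a one-hidden-layer network by gradient descent; the L2 penalty term plays the role of a Gaussian prior on the weights. The layer sizes, learning rate, and the weight-decay coefficient alpha are illustrative assumptions, not the paper's actual settings.

```python
# Minimal one-hidden-layer BP network trained by gradient descent.
# The L2 penalty corresponds to a Gaussian prior p(W | alpha) on the weights;
# layer sizes, learning rate and alpha are assumed values for demonstration.
import numpy as np

rng = np.random.default_rng(0)

def init(n_in, n_hidden, n_out):
    return {
        "W1": rng.normal(0, 0.5, (n_in, n_hidden)),
        "W2": rng.normal(0, 0.5, (n_hidden, n_out)),
    }

def forward(p, X):
    H = np.tanh(X @ p["W1"])          # hidden-layer activation
    return H, H @ p["W2"]             # linear output layer

def train(p, X, Y, lr=0.05, alpha=1e-3, epochs=2000):
    for _ in range(epochs):
        H, Y_hat = forward(p, X)
        E = Y_hat - Y                                   # output error
        gW2 = H.T @ E / len(X) + alpha * p["W2"]        # gradient + prior (weight decay)
        gH = (E @ p["W2"].T) * (1 - H ** 2)             # back-propagated error
        gW1 = X.T @ gH / len(X) + alpha * p["W1"]
        p["W2"] -= lr * gW2                             # descend along negative gradient
        p["W1"] -= lr * gW1
    return p

if __name__ == "__main__":
    X = rng.uniform(-1, 1, (200, 2))
    Y = X[:, :1] ** 2 + 0.5 * X[:, 1:]                  # toy target function
    p = train(init(2, 8, 1), X, Y)
    print("final MSE:", float(np.mean((forward(p, X)[1] - Y) ** 2)))
```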


3 Experiment and Research

3.1 Experimental Method

For the training samples (X_i, Y_i) (i = 1, 2, 3, …, n) after the classification output, Bayes' rule is applied here, and the posterior probability distribution of the weights can be obtained. The formulas are as follows:

$$p(W \mid Y, \alpha, \sigma^2) = \frac{p(Y \mid W, \sigma^2)\, p(W \mid \alpha)}{p(Y \mid \alpha, \sigma^2)} \qquad (1)$$

$$p(W \mid Y, \alpha, \sigma^2) = N(\mu, \Sigma) \qquad (2)$$

In the above expressions, α and σ² are the hyperparameters that determine the prior distribution of the weights W; the denominator is a normalization factor that is independent of W; the formula represents the probability distribution over the weights.
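When the mapping from weights to outputs is linear (for example, the network's output layer with fixed hidden features), the posterior in Eq. (2) has a closed form. The sketch below computes μ and Σ for that linear-Gaussian special case; the data and the values of α and σ² are illustrative assumptions rather than the paper's actual settings.

```python
# Closed-form Gaussian posterior N(mu, Sigma) over linear weights, the tractable
# special case of Eq. (1)-(2): prior p(W|alpha) = N(0, I/alpha), Gaussian likelihood.
# alpha, sigma2 and the data are assumed values.
import numpy as np

def weight_posterior(Phi, y, alpha, sigma2):
    """Return posterior mean and covariance of weights for the model Phi @ w ~ y."""
    A = alpha * np.eye(Phi.shape[1]) + Phi.T @ Phi / sigma2   # posterior precision
    Sigma = np.linalg.inv(A)                                  # posterior covariance
    mu = Sigma @ Phi.T @ y / sigma2                           # posterior mean
    return mu, Sigma

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    Phi = rng.normal(size=(50, 3))               # design matrix (e.g. hidden features)
    w_true = np.array([1.0, -2.0, 0.5])
    y = Phi @ w_true + 0.1 * rng.normal(size=50)
    mu, Sigma = weight_posterior(Phi, y, alpha=1.0, sigma2=0.01)
    print("posterior mean:", np.round(mu, 3))
```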

Fig. 1. Traditional supply chain model 1

Traditional logistics distribution system. The traditional chain-store distribution system covers the whole process from supplier to point of sale, and the two most common structures are shown in Figs. 1 and 2. The structure in Fig. 1 is relatively simple.


There is no intermediate link from the supplier to the chain sales terminal. The advantage of this system is that direct distribution will not cause shortages at the chain enterprises due to insufficient supplier inventory. However, the model also has a disadvantage: it is only applicable to chains whose stores are relatively dense and few in number. If the chain expands and suppliers run out of stock, the sales volume of the chain stores will be greatly reduced, and errors will occur when processing orders, which affects product sales.

Fig. 2. Traditional supply chain model 2

Compared with Fig. 1, the model shown in Fig. 2 adds a warehouse. The supplier is responsible for stocking the warehouse and ensuring sufficient inventory, and goods are then transported from the warehouse to the chain stores. The advantage is that warehouses can be distributed in different places, even different cities.


In addition, in this mode, even if the supplier runs short, the warehouse inventory can still supply the stores for a period of time, so supplier shortages have a more limited impact on chain-store sales.

Logistics distribution refers to the whole activity of transporting goods by logistics transportation tools from the warehouse and completing delivery during transportation; its main purpose is to ensure that goods are delivered to customers accurately and on time. Therefore, this paper considers the system requirements mainly from the following aspects: the system is developed according to the randomness of the actual logistics distribution of cross-border e-commerce platforms; in the process of order allocation, the system can reasonably allocate warehouses according to the quantity of goods in each warehouse and customer demand, thus improving distribution efficiency; the system can use data mining technology to analyze information such as user behavior and commodity prices; and the system can use the Bayes-BP algorithm to solve the randomized scheduling problem.

3.2 Experimental Requirements

Based on the basic theory of model classification, the optimization objective, and the composition of the logistics distribution routing problem, this paper summarizes the general steps for creating a distribution routing optimization model. Then, according to the concept of the time window in actual production and operation activities, a logistics distribution path optimization model with a soft time window is introduced. When the path length is the main component of the total cost and other factors have a negligible impact, the shortest path can be chosen as the optimization objective, which removes many factors that are difficult to measure and calculate without affecting the optimization model.
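To make the soft-time-window idea concrete, the sketch below shows one common way to express the penalty for arriving outside a customer's preferred window and to combine it with mileage and fixed costs; the penalty coefficients, window values, and unit costs are illustrative assumptions, not parameters taken from this paper.

```python
# Illustrative soft-time-window penalty: early or late arrival is allowed but
# charged linearly.  All coefficients and time windows are assumed values.
def soft_time_window_penalty(arrival, earliest, latest,
                             early_cost=0.5, late_cost=2.0):
    """Penalty for arriving at `arrival` against the window [earliest, latest]."""
    if arrival < earliest:
        return early_cost * (earliest - arrival)   # waiting / early-arrival cost
    if arrival > latest:
        return late_cost * (arrival - latest)      # lateness cost
    return 0.0

def route_cost(distances_km, arrivals, windows,
               cost_per_km=4.0, startup_cost=100.0):
    """Total cost of one vehicle route = fixed + mileage + time-window penalties."""
    travel = cost_per_km * sum(distances_km)
    penalty = sum(soft_time_window_penalty(a, lo, hi)
                  for a, (lo, hi) in zip(arrivals, windows))
    return startup_cost + travel + penalty

if __name__ == "__main__":
    # one route visiting three customers (arrival times in minutes from departure)
    print(route_cost(distances_km=[12, 8, 15],
                     arrivals=[35, 70, 130],
                     windows=[(30, 60), (60, 90), (90, 120)]))
```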

4 Analysis and Discussion

4.1 Scale Analysis of the Cross-Border E-commerce Logistics Market

Table 1. Analysis of the transaction scale of the cross-border e-commerce logistics market from 2017 to 2021

Year    Transaction size (trillion yuan)    Growth rate (%)
2017    16.12                               12.89
2018    18.01                               11.72
2019    21.11                               17.21
2020    23.36                               10.66
2021    25.89                               10.83


Fig. 3. Analysis of the market size of CBE logistics

China's CBE industry has achieved very good results in recent years and made a great contribution to economic growth. The scale and growth rate of the CBE logistics market from 2017 to 2021 are shown in Table 1. As can be seen from Fig. 3, the scale of China's CBE logistics market was 16.12 trillion yuan in 2017, an increase of 12.89%; in 2018 it was 18.01 trillion yuan, an increase of 11.72%; and by 2021 it had grown to 25.89 trillion yuan, an increase of 10.83%. The data show that the highest growth rate, 17.21%, occurred in 2019.


4.2 Comparative Analysis of the Logistics Distribution System Before and After Optimization

Through cost prediction and analysis before and after distribution-path optimization, we can determine whether the logistics distribution optimization system improves distribution efficiency and reduces distribution cost. The five distribution paths are compared before and after optimization, and the results are shown in Fig. 4.

Fig. 4. Comparative analysis diagram of the logistics distribution system before and after optimization

As shown in Fig. 4, after optimization by the logistics distribution optimization system on the CBE platform, the distribution cost under the different distribution paths is reduced.


The average cost of path 1 is 22.36 before optimization and 15.67 after optimization, a saving of 6.69; path 2 falls from 18.56 to 12.16, a saving of 6.40; path 3 falls from 21.33 to 14.62, a saving of 6.71; and path 4 falls from 25.66 to 14.02, a saving of 11.64. It can be seen that optimizing logistics distribution with the distribution optimization system can effectively reduce distribution costs and increase sources of income.

5 Conclusions

This paper starts from the logistics distribution optimization system of the CBE platform based on the Bayes-BP algorithm. Through analysis of the CBE market scale, it establishes the feasibility of such a system: the scale of the business continues to increase, so reducing distribution costs through the optimization system can be translated into substantial economic benefits. In addition, the comparison before and after optimization shows that the system effectively reduces the average logistics cost on different distribution paths, which confirms its feasibility; by improving distribution efficiency and reducing distribution cost, it can greatly increase income and cut unnecessary expenses. Finally, although the optimization system can address the high cost and long cycle of traditional logistics distribution, many problems remain to be solved. First, optimization systems are not suitable for all businesses; small and medium-sized enterprises can choose third-party optimization systems to reduce costs. Second, the products of the optimization system have certain limitations: incomplete warehouse-management experience will lead to a backlog of unsalable products and tie up the enterprise's cash flow. To build a mature cross-border logistics system, the problems arising in system development and optimization must be solved urgently. With strong national policy support, enterprises can work together to reduce costs, share risks, continuously improve their warehouse-management capabilities, reduce backlogs, and understand the local economy, culture, legal system, etiquette, and customs, so as to meet the needs of the target market, operate locally, and expand their market share.

References

1. Johnson, G., Scholes, K., Whittington, R.: Exploring corporate strategy: text and cases. Pearson Educ. 5(4), 1–11 (2021)
2. Lambert, D.M., Stock, J.R., Ellram, L.M.: Fundamentals of Logistics Management. McGraw-Hill/Irwin (82), pp. 127–130 (2021)
3. Landis, J.R., Koch, G.G.: The measurement of observer agreement for categorical data. Biometrics 9(1), 1–10 (2021)
4. Nweke, H.F., Ying, W.T., Mujtaba, G., et al.: Multi-sensor fusion based on multiple classifier systems for human activity identification. Hum.-Cent. Comput. Inf. Sci. 9(1), 11–12 (2019)
5. Lee, H.L., Billington, C.: Material management in decentralized supply chains. Oper. Res. 41(5), 835–847 (2019)
6. Schleiden, V., Neiberger, C.: Does sustainability matter? A structural equation model for cross-border online purchasing behaviour. Int. Rev. Retail Distrib. Consum. Res. 30(3), 1–22 (2019)
7. Ghoneim, S.: Improvement of trajectory tracking by robot manipulator based on a new cooperative optimization algorithm. Mathematics 9, 19–52 (2021)
8. Lendle, A., Olarreaga, M., Schropp, S., et al.: There goes gravity: how eBay reduces trade costs. SSRN DP9094 1(4), 17–20 (2021)
9. Lomax, R.: Covariance structure analysis: extensions and developments. Advances in Social Science Methodology 11(12), 1–8 (2020)
10. Maria, G., et al.: Cross-border B2C e-commerce to Greater China and the role of logistics: a literature review. Int. J. Phys. Distrib. Logist. Manag. 47(9), 772–795 (2017)
11. Ivan, T., et al.: Non-linear least squares and maximum likelihood estimation of probability density function of cross-border transmission losses. IEEE Trans. Power Syst. 33(2), 2230–2238 (2017)
12. Lummus, R.R., Vokurka, R.J.: Defining supply chain management: a historical perspective and practical guidelines. Ind. Manag. Data Syst. 99(1), 11–17 (2019)

Application of FCM Clustering Algorithm in Digital Library Management System

Yanjun Zhou(B)

Library, Jianghan University, Wuhan 430056, Hubei, China
[email protected]

Abstract. In this era of rapid technological development, power generation enterprises, whose technology, skills, and equipment are updated quickly, need a large number of highly skilled workers to support their business. The enterprise studied here has its own library, but because of limited space it cannot store a large number of professional books or accommodate the staff who flock there after work, which leads to low operating efficiency and makes it difficult to improve the staff's professional knowledge. Enlarging the library and its collection would increase the difficulty of management, and enlarging the reading space would make maintenance harder and require more investment in security. This paper studies the use of the FCM clustering algorithm to build a digital library composed of advanced software and hardware facilities. The software adopts a three-tier architecture based on the MVC framework, is presented through B/S technology, and is developed on Microsoft's Visual Studio platform in C#; the hardware relies on RFID radio-frequency technology and Internet of Things technology. Structured data are managed by the SQL Server database management system, and unstructured data are stored and managed by an FTP server.

Keywords: FCM clustering algorithm · Digital library · Management system · System design

1 Introduction

There are many definitions and descriptions of the digital library. It can be understood as follows: a digital library uses multimedia devices to convert traditional books into digital models, data, and images, and can also add video, audio, model figures, and other digital materials that a traditional library cannot hold; delivered through network technology, it occupies little space, has large storage capacity, and is simple, convenient, and quick to manage [1, 2]. Unlike a traditional library, it also has digital advantages: for example, when many readers want to borrow the same book, a traditional library generally cannot satisfy them, but a digital library can. In addition, the digital library will no longer be a simple borrowing place, but a collection of learning, education, communication, and information integration and classification software [3, 4].


It is impossible to cover the many functions of a digital library with a traditional library; it is more accurate to say that a digital library is a "system" integrating storage, digital conversion, and information transmission [5, 6]. Simple graphic information is recompiled and presented to people again in digital, multimedia form, and information technology, network technology, and communication are adopted so that the library's resources can be used efficiently [7, 8]. A review of a large number of domestic documents about digital libraries shows that differences and information sharing are emphasized when digital libraries are established in China, but these studies focus more on realizing the digital library management system; compared with research on foreign digital libraries, Chinese research pays less attention to user experience, which is exactly what a digital library most needs to strengthen [9, 10]. Future research on digital libraries should therefore focus on users. The digital library is based on the rapid development of information technology, and in a sense its development reflects the level of a country's information technology industry [2]. In recent years, text-conversion tools have solved part of the problem: with such tools, a digital library can convert traditional books, data, images, and text directly into digital format, and a plain-text book that once occupied several hundred MB after scanning can be reduced to a few MB, greatly saving server storage space. Construction and research in the field of digital libraries abroad started much earlier than in China; after years of development, the focus has shifted from data systems and project content to mass users and personalized services [11], and the direction, ideas, and achievements are ahead of the domestic field. Nevertheless, research on the use, management, and users of digital libraries remains an important link in their development, and foreign experts and scholars continue to carry out related research. For the practical application of digital libraries, technologies such as open storage, data download, big-data knowledge bases, and data exchange have been developed on a large scale and are increasingly recognized by users. To enrich digital libraries and protect scientific data, some universities and academic research institutions have begun to store the research achievements of teachers and researchers in the knowledge bases of university digital libraries, which in turn drives the development of digital resource preservation technology and related standards. In recent years, foreign digital library technology has made great progress in interoperation, data exchange, quality control, and multimedia information, which is reflected in changes at the user terminal. In the 21st century, mankind has entered the era of information explosion.
With the rapid development of information technology, networks, computers, the Internet of Things, big data, communication technology, and other high technologies, the ability to collect and sort all kinds of data will become more powerful, functionality will grow, and the user experience will be comprehensively improved. In operation, a digital library needs to classify, exchange, and store huge information resources, which plays a very important role in data and information management.


Therefore, research and development in this area is increasingly important. With the progress of various studies, digital library technology has become an important topic in current information technology research and has attracted wide attention from experts and scholars at home and abroad. Many countries and regions have been using the latest technologies to establish digital library systems and digital library data resources, and a batch of research results have begun to appear on the Internet.

2 Proposed Method

2.1 FCM Clustering Algorithm

The FCM algorithm is a fuzzy clustering method that classifies unlabeled data samples by minimizing an objective function based on some norm and a set of cluster prototypes. For an image, the given sample set X is the set of gray values of the pixels, S is the dimension of the sample space, N is the number of samples (the number of image pixels), and c (c > 1) is the number of clusters into which X is divided. FCM computes a fuzzy membership matrix satisfying the following constraints:

$$\sum_{k=1}^{c} u_{ki} = 1,\quad \sum_{i=1}^{n} u_{ki} > 0,\quad u_{ki} \in [0, 1],\quad 1 \le k \le c,\ 1 \le i \le n \qquad (1)$$

so as to minimize the objective function

$$J_{FCM}(U, V) = \sum_{k=1}^{C} \sum_{i=1}^{N} u_{ki}^{m}\, d_{ki}^{2} \qquad (2)$$

where m > 1 is the fuzziness index; U = [u_{ki}] is a c×N fuzzy membership matrix, in which u_{ki} is the membership of the i-th pixel x_i in the k-th class; V = [v_1, v_2, …, v_c] is an S×c matrix composed of the c cluster center vectors; and d_{ki} = ‖x_i − v_k‖ is the distance from pixel x_i to center v_k. There are many ways to define the distance, and the Euclidean distance is the most commonly used.
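The following minimal Python sketch, written for generic feature vectors rather than this paper's library data, shows how the membership matrix of Eq. (1) and the cluster centers that minimize Eq. (2) are updated alternately; the number of clusters, fuzziness index, and tolerance are illustrative values.

```python
# Minimal FCM sketch: alternate membership and center updates until convergence.
# Data, c, m and the tolerance are assumed values for demonstration.
import numpy as np

def fcm(X, c=3, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """X: (N, S) samples.  Returns (U, V): memberships (c, N) and centers (c, S)."""
    rng = np.random.default_rng(seed)
    N = X.shape[0]
    U = rng.random((c, N))
    U /= U.sum(axis=0)                                   # enforce sum_k u_ki = 1
    for _ in range(max_iter):
        Um = U ** m
        V = (Um @ X) / Um.sum(axis=1, keepdims=True)     # cluster-center update
        d = np.linalg.norm(X[None, :, :] - V[:, None, :], axis=2) + 1e-12
        # u_ki = 1 / sum_j (d_ki / d_ji) ** (2 / (m - 1))
        U_new = 1.0 / np.sum((d[:, None, :] / d[None, :, :]) ** (2 / (m - 1)), axis=1)
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return U, V

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0, 0.3, (50, 2)),
                   rng.normal(3, 0.3, (50, 2)),
                   rng.normal((0, 3), 0.3, (50, 2))])
    U, V = fcm(X, c=3)
    print("cluster centers:\n", np.round(V, 2))
```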

2.2 Demand Analysis of the Digital Library

The digital library management system architecture in Fig. 1 introduces the workflow of the different users in operation and use. Because pre-registered users have different permissions, the view a user sees and the corresponding business logic differ at login.


Fig. 1. Digital library management system structure

Figure 2 shows the flow chart of the main interface module, which includes the library profile and the two pull-down menus "Operation" and "Information browsing". From the "Operation" menu, the user can log in or log out; without logging in, the user can still click the "Information browsing" menu. A library is an organization that collects, organizes, and preserves books and other materials for reading and reference; it is the distribution center and hub of document information resources. The library of State Grid Yanghu Power Generation Company emerged to meet the company's requirements for improving staff skills and knowledge reserves. It uses paper and electronic books and maintains them accordingly, providing books from which employees learn to strengthen their professional knowledge and business skills. Through the library, the company can improve staff skills, make up for professional gaps caused by changes in technology and environment, and support the construction of its corporate culture.


Fig. 2. Flow chart of main interface module

In constructing the electronic library, the existing library system should be combined with the actual situation of the company. The existing system has problems such as rigid business processes, an insufficient degree of humanization, low efficiency, high labor consumption, and insufficient accuracy of book management. Because of the limited library space and the way the library is used, the utilization rate of the space is low and the staff's registration of borrowed and returned books is inefficient. Although some library systems have an online borrowing function, they are inconvenient to operate and not humanized enough for reading; for example, there is no annotation function and no record of the page where reading last stopped, so tedious searching is needed each time to return to the last reading point.


Based on the above situation, a more reasonable and intelligent digital library needs to be built to improve or solve these problems.

3 Design of Digital Library Management System

3.1 Functional Module Design

For the whole digital library, the main function modules include resource uploading management, resource management, resource consulting management, personal space management, learning exchange management, data borrowing and return management, intelligent terminal management, and report management. The functional module structure of the system is shown in Fig. 3.

3 Design of Digital Library Management System 3.1 Functional Module Design For the whole digital library, the main function modules include resource uploading management, resource management, resource consulting management, personal space management, learning exchange management, data borrowing and return management, intelligent terminal management, report management and so on. The functional module structure of the system is shown in Fig. 3.

Fig. 3. Functional module design

3.2 Design of Non-functional Modules

Because the digital library studied in this paper serves a specific company, its user data and part of its other data may only be queried and managed by internal personnel. The main functions of the system are planned and designed above; although that planning and design can meet the normal operating needs of the system, system security is not yet sufficient.


The following security measures are taken within the software to protect the system.

(1) Data security encryption. Employee RFID card numbers are stored in the SQL Server 2017 database as an encrypted binary stream and are automatically decrypted and displayed when the system calls them; without the data decryption tool, the data cannot be displayed properly.

(2) Card security. To prevent document data from being tampered with, the design adopts an RFID chip that supports the ISO 18000-6C standard. The chip has a globally unique 64-bit ID number that cannot be tampered with.

(3) Network transmission security. For signal transmission from the data center room to the front-end equipment points, the trunk links use outdoor armored optical cable; in the equipment room the cable connects to industrial-grade optical transceivers, and Category 6 shielded twisted-pair cable connects the other network devices. Switch ports are bound to MAC and IP addresses so that illegal devices are refused access to the system and data are protected against leakage. The fiber link to the network equipment uses industrial-grade optical transceivers to ensure data transmission, so the system can still run stably in harsh environments for a long time.

3.3 System Development Environment

System operation must be supported by a good environment. The development environment of this digital library is divided into a hardware environment and a software environment, and the environment described here is the standard operating environment of the system. In the design of the digital library, the performance of the hardware must fully meet the requirements of system operation: as the carrier of the system, unqualified hardware will prevent the system from realizing its functions or even paralyze it. The hardware includes RFID electronic tag read/write equipment (read/write antennas), RFID chips, industrial computers, servers, and so on. The employee card chip and the data chip store the data of the target object, and the read/write antenna is an information reading device that feeds the read information back to the terminal for processing.
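As a hedged illustration of measure (1), the sketch below stores a card number as an encrypted binary blob and decrypts it on read; the use of SQLite instead of SQL Server, the key handling, and the table layout are simplifications assumed for demonstration only.

```python
# Illustrative sketch of storing an RFID card number as an encrypted binary blob.
# SQLite, the in-code key and the table layout are assumptions, not the paper's design.
import sqlite3
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # in practice the key would live in a secure store
cipher = Fernet(key)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee_card (emp_id TEXT, card_no BLOB)")

def save_card(emp_id: str, card_no: str) -> None:
    blob = cipher.encrypt(card_no.encode())            # binary-stream encryption
    conn.execute("INSERT INTO employee_card VALUES (?, ?)", (emp_id, blob))

def load_card(emp_id: str) -> str:
    blob, = conn.execute(
        "SELECT card_no FROM employee_card WHERE emp_id = ?", (emp_id,)).fetchone()
    return cipher.decrypt(blob).decode()               # automatic decryption on read

save_card("E1001", "64-bit-card-id-0000000000000001")
print(load_card("E1001"))
```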


4 Discussion

4.1 Analysis of Experimental Results

After the FCM clustering algorithm classifies digital libraries by their level of informatization application, 27.63% of the digital libraries fall into category 1, 41.44% into category 2, and 30.93% into category 3. The informatization application levels of the three categories are shown in Table 1.

Table 1. Informatization application levels of the three categories obtained by FCM clustering

Level indicator                     Indicator label   Category 1   Category 2   Category 3
Basic infrastructure application    A1                0.204        0.187        0.183
                                    A2                0.383        0.384        0.376
                                    A3                0.912        0.871        0.804
Teaching application                B1                0.268        0.091        0.072
                                    B2                0.413        0.189        0.103
                                    B3                0.583        0.254        0.237
                                    B4                0.614        0.382        0.351
                                    B5                0.538        0.419        0.364
                                    B6                0.946        0.835        0.054
                                    B7                0.935        0.916        0.697
Management application              C1                0.093        0.041        0.038
                                    C2                0.793        0.341        0.312
                                    C3                0.956        0.947        0.845
From the above chart shows category I of the digital library in the school network space number commonly used functions, school teaching informatization system commonly used function of quantity, opened real-name network learning space and the proportion of teachers to use information technology began to show the teaching discipline teacher ratio of these conventional teaching indicators used in information development level relative to the other two categories of conventional digital library The application development of xihua teaching has a higher level; In the special application of information-based teaching, Category 1 digital library in the school teachers and students of the most commonly used software system resources quantity, the use of information technology in the frequency of the curricula and teaching and school use of information technology, the application of auxiliary classroom teaching to achieve normalized number of subjects such as the three specific special under the teaching application index of informatization level relative to the other two categories of digital library development with special information technology teaching application development There is a higher level (in Fig. 4).

48

Y. Zhou

Fig. 4. Informatization application levels of the three categories obtained by FCM clustering

5 Conclusions

To sum up, through demand analysis, design, implementation, and testing of a company's digital library platform, this paper designs a digital library management information platform based on the FCM clustering algorithm, in order to help enterprises manage their digital libraries well and meet the actual needs of users.

References

1. Rojas-Sola, J.I., et al.: Agustín de Betancourt's double-acting steam engine: geometric modeling and virtual reconstruction. Symmetry 10(8), 351 (2018)
2. Siddiqi, M.U.R., et al.: Low cost three-dimensional virtual model construction for remanufacturing industry. J. Remanufact. 9(2), 129–139 (2018). https://doi.org/10.1007/s13243-018-0059-5
3. Bulbul, A., Dahyot, R.: Social media based 3D visual popularity. Comput. Graph. 63, 28–36 (2017)
4. Narayanan, S., Polys, N., Bukvic, I.I.: Cinemacraft: exploring fidelity cues in collaborative virtual world interactions. Virtual Real. 24(1), 53–73 (2018)
5. Tadeja, S.K., Seshadri, P., Kristensson, P.O.: AeroVR: an immersive visualisation system for aerospace design and digital twinning in virtual reality. Aeronaut. J. 124(1280), 1–21 (2020)
6. Thomsen, A.: Intraocular surgery - assessment and transfer of skills using a virtual-reality simulator. Acta Ophthalmol. 95(A106), 1–22 (2017)
7. Maicher, K., Danforth, D., Price, A., et al.: Developing a conversational virtual standardized patient to enable students to practice history-taking skills. Simul. Healthc. 12(2), 124–131 (2017)
8. Konstantinov, I.L., Potapov, D.G., Sidelnikov, S.B., Voroshilov, D.S., Gorokhov, Y.V., Katryuk, V.P.: Computer simulation of the technology for producing a stamped billet for the piston of an internal combustion engine of an unmanned aerial vehicle. Russian J. Non-Ferrous Metals 62(1), 32–38 (2021). https://doi.org/10.3103/S1067821221010107
9. Badawy, T., Henein, N.A.: Three-dimensional computational fluid dynamics modeling and validation of ion current sensor in a gen-set diesel engine using chemical kinetic mechanism. J. Eng. Gas Turbines Power 139(10), 102810.1–102810.11 (2017)
10. Darabkh, K.A., Alturk, F.H., Sweidan, S.Z.: VRCDEA-TCS: 3D virtual reality cooperative drawing educational application with textual chatting system. Comput. Appl. Eng. Educ. 26(5), 1677–1698 (2018)
11. Alfieri, V., Pedicini, C., Possieri, C.: Design of a neural virtual sensor for the air and charging system in a diesel engine. IFAC-PapersOnLine 53(2), 14061–14066 (2020)

Logistics Distribution Path Planning and Design Based on Ant Colony Optimization Algorithm

Yan Wang(B)

Chengdu Industry and Trade College, Chengdu Technician College, Chengdu 611731, Sichuan, China
[email protected]

Abstract. Logistics distribution path planning is at the core of the transportation process in the logistics industry, and because distribution accounts for a large proportion of total logistics cost, the optimization of distribution vehicle routes is a hotspot of current research. This paper studies logistics distribution path planning and design based on an ant colony optimization algorithm. The traditional ant colony algorithm (ACA) is optimized and improved by introducing constraints such as vehicle travel distance and load capacity and by taking cost and load as optimization objectives, and the distribution paths are planned and designed with the optimized ACA. The logistics data of a logistics enterprise in Shanghai are used in simulation and compared with a genetic algorithm and the ACA. The experimental results show that the proposed ant colony optimization algorithm saves 5.4% of the total transportation cost compared with the ACA and 2.7% compared with the genetic algorithm.

Keywords: Ant Colony Algorithm · Logistics Distribution · Path Planning · Algorithm Improvement

1 Introduction

In the 21st century, with the rapid development of the social economy and increasingly prosperous trade, changes in the focus of enterprise competition, customer demand, and consumption patterns bring great opportunities and challenges to the logistics industry. Since the first source of profit (reducing labor and raw-material costs) and the second source of profit (improving labor productivity and expanding market share) were proposed, it has become clear that the room left for these two sources is shrinking, so the value of logistics has received more and more attention from scholars and enterprises. To develop the logistics industry vigorously, China has issued a number of policies and plans [1]. So far there is no consensus on the definition of logistics; in China it is generally understood as the process of moving goods from the supplier to the receiver, carrying out a series of activities according to actual needs in order to meet user requirements. Rationalizing vehicle distribution routes is an effective measure for enterprises to reduce transportation costs [2].


Therefore, planning vehicle routes under requirements such as customer service time so as to reduce the total transportation cost, namely vehicle routing optimization, has become one of the important research topics [3]. Using scientific and effective methods to design vehicle routes has become a key part of logistics activities. According to the actual demand of distribution, foreign scholars have added requirements such as simultaneous pickup and delivery, vehicle load, and time windows to the basic vehicle routing problem, deriving various vehicle routing problems that are closer to actual distribution needs [4]. Some scholars introduced linear programming from operations research into the vehicle routing problem and established a mathematical model based on the truck gasoline routing problem, planning several routes for trucks to deliver gasoline; however, because of the complexity of finding the global optimum of the vehicle routing problem, no global optimal solution in the practical sense was obtained at that time [5]. Subsequently, a large number of scholars at home and abroad began in-depth research on modeling the vehicle routing problem from the perspectives of optimization objectives and constraints and on improving the solution methods. One study added pickup and delivery constraints to the vehicle routing problem, forming the vehicle routing problem with simultaneous pickup and delivery: a vehicle departs from a distribution center to deliver books to multiple libraries and pick up books from them, and the number of books delivered to and picked up from each library differs. Because vehicle capacity and the number of vehicles are limited, the spatial locations of the libraries and constraints such as the requirement that the books carried must not exceed the vehicle's capacity must be considered comprehensively to plan a distribution route scheme with the minimum vehicle cost [6]. Scientifically planning distribution paths with optimization algorithms is an effective way to control distribution costs; the strength of route planning technology directly affects production costs and efficiency, so research on vehicle route planning has important practical and theoretical significance.

2 Path Planning Based on Ant Colony Optimization Algorithm

2.1 ACA Overview

The ACA is a bionic algorithm that can be traced back to the 1990s. While observing ants searching for food in nature, researchers found that the behavior of individual ants is simple and disorderly, but the foraging behavior of the colony as a whole shows clear regularity. On this basis the ACA was proposed, mainly simulating the behavior by which ants in nature quickly search for and transport food [7, 8]. A large number of observations show that although the colony faces different situations in practice, it can always find the shortest path to the food in a relatively short time, and subsequent ants repeatedly follow this path to reach the food. Individual ants deposit a chemical called a "pheromone" along the paths they walk, and other individuals perceive this pheromone keenly.


When facing different paths, ants actively choose the path with the relatively strong pheromone and deposit pheromone of their own on it, so the pheromone on that path becomes increasingly rich; under this continuously growing positive feedback, the colony gradually determines the optimal path for finding food quickly. In essence, the ACA is a self-organizing, parallel algorithm based on positive feedback, and it has strong robustness [9, 10]. The process of an ant colony searching for food is shown in Fig. 1.

Fig. 1. Diagram of ant search path

As described above, the following aspects have the greatest influence on the ACA's search for the optimal path: (1) the selection strategy, in which a path with a higher pheromone concentration is more attractive to ants; and (2) the update strategy, in which paths on which ants release more pheromone are preserved while paths with less pheromone are gradually eliminated through evaporation. As a new-generation swarm-intelligence heuristic, this kind of algorithm adapts well to logistics transportation models with a large planning scope and a vast number of customer points; it offers a positive feedback mechanism on iterative results, a parallel style of calculation, and good compatibility and stability, but it also has disadvantages, such as requiring many iterations and being prone to "premature" convergence to local optima. Because early research applied the ACA to the traveling salesman problem (TSP), the corresponding mathematical model is the most classic, so this section uses it to explain the algorithm: a salesman needs to visit a number of destinations, each destination may be visited only once, all destinations must be visited, and the salesman must return to the starting point while minimizing the total distance travelled. The TSP is a classic NP (non-deterministic polynomial) problem in combinatorial optimization.


2.2 Optimization of ACA

(1) Improvement of the city selection probability. The pheromone concentration accumulated on the paths between cities plays a decisive role when ant k (k = 1, 2, …, n) selects the next city to visit. The traditional selection probability is composed of the pheromone concentration and a distance heuristic. In the VRP studied in this paper, vehicles are constrained by their load, and in general raising the vehicle loading rate has a large effect on controlling cost. This paper therefore adds to the selection probability the influence of the ratio between the current vehicle load and the maximum vehicle load, together with the influence of the weight of goods demanded at each point. The improved probability is given in Formula (1):

$$p_{ij}^{k}(t) = \begin{cases} \dfrac{[\tau_{ij}(t)]^{\alpha}\,[\eta_{ij}(t)]^{\beta}\,w_{j}^{\lambda}}{\sum_{s \in allow_k} [\tau_{is}(t)]^{\alpha}\,[\eta_{is}(t)]^{\beta}\,w_{s}^{\lambda}}, & j \in allow_k \\ 0, & j \notin allow_k \end{cases} \qquad (1)$$

Here allow_k is the set of cities that ant k may still visit, and tabu_k (k = 1, 2, …, n) is the set of cities that ant k has already passed through, updated continuously as the ant selects nodes; η_ij(t) = 1/d_ij is the heuristic parameter, the expected value of the ant moving from city i to city j; w is the sum of the current load and the demand of the selected next node; and λ is the vehicle load-degree parameter, whose value is the ratio of the current vehicle load to the maximum vehicle load.

Implementation: when the algorithm selects the next target point, it first calculates the degree of vehicle load, i.e. the ratio of the current load to the maximum load, and then calculates the selection probability of each point in the allow_k table in turn. After the probability of every candidate point has been calculated, the next target city, or a return to the distribution center, is chosen by roulette-wheel selection, as sketched below.
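The following Python sketch illustrates the load-aware selection probability of Formula (1) followed by roulette-wheel selection; the distances, demands, pheromone values, and parameters α and β are illustrative assumptions.

```python
# Illustrative sketch of the load-aware selection probability of Formula (1)
# followed by roulette-wheel selection.  All parameter values are assumptions.
import random

def select_next(current, allowed, tau, dist, demand,
                cur_load, max_load, alpha=1.0, beta=2.0):
    """Return the next node chosen among `allowed`, or None to return to the depot."""
    lam = cur_load / max_load                      # load-degree parameter lambda
    feasible = [j for j in allowed if cur_load + demand[j] <= max_load]
    if not feasible:
        return None                                # vehicle full: back to the depot
    weights = []
    for j in feasible:
        eta = 1.0 / dist[current][j]               # heuristic eta_ij = 1 / d_ij
        w = cur_load + demand[j]                   # current load + candidate demand
        weights.append((tau[current][j] ** alpha) * (eta ** beta) * (w ** lam))
    total = sum(weights)
    r, acc = random.random() * total, 0.0          # roulette-wheel selection
    for j, wgt in zip(feasible, weights):
        acc += wgt
        if acc >= r:
            return j
    return feasible[-1]

if __name__ == "__main__":
    dist = {0: {1: 5.0, 2: 8.0}, 1: {0: 5.0, 2: 4.0}, 2: {0: 8.0, 1: 4.0}}
    tau = {i: {j: 1.0 for j in d} for i, d in dist.items()}
    demand = {1: 300, 2: 600}
    print(select_next(0, [1, 2], tau, dist, demand, cur_load=500, max_load=1500))
```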


(2) Improvement of the volatility factor. The ACA selects cities mainly through pheromone concentration, and the pheromone accumulated on the paths between cities is decisive for the ant's choice of the next city. The volatility factor directly affects pheromone concentration, and its value directly affects the convergence speed of the algorithm: if it is too high the pheromone evaporates too fast, and if it is too low the pheromone accumulates excessively. In this paper, different values of the volatility factor are used in different phases. At the initial stage the algorithm needs strong global search ability, so ρ is set to a large value to increase randomness. After a certain number of iterations the convergence speed should be raised, so ρ is updated to a smaller value to improve local search ability and find the optimal solution more effectively, which also reduces the possibility of falling into a local optimum. In this paper ρ is updated with the number of iterations: the initial value is 0.6, it is updated to 0.3 after a quarter of the maximum number of iterations, and it is set to 0.1 after more than half of the maximum number of iterations, as shown in Formula (2) (a small sketch of this schedule is given at the end of this subsection):

$$\rho = \begin{cases} 0.6, & N_C \in (0,\ N_{C\max}/4) \\ 0.3, & N_C \in (N_{C\max}/4,\ N_{C\max}/2) \\ 0.1, & N_C \in (N_{C\max}/2,\ N_{C\max}) \end{cases} \qquad (2)$$

Implementation: the maximum number of iterations is set at the beginning of the algorithm, together with the initial value of the volatility factor (0.6 in this paper). In each cycle a judgment statement checks whether the iteration count has reached an update condition, and if so the volatility factor is updated.

(3) Setting of the genetic operation. In each iteration, the best four paths found under the ant colony rules are selected for genetic operations. The genetic operation is performed 30 times per iteration with crossover probability Pc = 0.9 and mutation probability Pm = 0.1, and the fitness values of the resulting paths are calculated. The best one is retained and compared with the current optimal solution; if it is better, it is saved as the new optimal solution, otherwise the optimal solution remains unchanged.

The Chinese national logistics terminology defines logistics as transporting goods from the warehouse to customers or distribution centers and organically combining a series of operations such as packaging, loading and unloading, warehousing, transportation scheduling, and distribution processing to meet customer needs. Logistics includes transportation, storage, packaging, handling, distribution, and the related logistics information. This paper addresses logistics path planning, as shown in Fig. 2.

The logistics distribution problem can be expressed in various ways. Roughly, given customer points at specific locations with specific demands, vehicles deliver from the distribution center to the demand points and return to the center after delivery. The aim is to provide the shortest routes subject to the following restrictions: the demand of all stations on each route must not exceed the capacity of the vehicle; the total length of each route must not exceed the maximum distance a vehicle can travel at one time; and each customer point must be served by exactly one vehicle. The goal is to minimize costs such as distance and time. Figure 3 shows an example of a simple logistics distribution network.
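As referenced above, this minimal sketch implements the piecewise volatility-factor schedule of Formula (2); the maximum iteration count is an assumed example value.

```python
# Piecewise volatility factor rho from Formula (2); NC_MAX is an assumed value.
def rho_schedule(nc: int, nc_max: int) -> float:
    """Return the pheromone volatility factor for iteration nc out of nc_max."""
    if nc < nc_max / 4:
        return 0.6      # early phase: strong global search
    if nc < nc_max / 2:
        return 0.3      # middle phase: faster convergence
    return 0.1          # late phase: fine local search

if __name__ == "__main__":
    NC_MAX = 200
    print([rho_schedule(nc, NC_MAX) for nc in (10, 60, 150)])  # [0.6, 0.3, 0.1]
```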

Fig. 2. Logistics Flow Chart


Fig. 3. Example of logistics distribution (a distribution centre serving customer demand points via Routes 1-3)

3 Simulation Experiment Settings
3.1 Data Sources
This paper selects the actual logistics order data of a logistics enterprise in the city for one day. A distribution centre in a certain place has 48 orders to be delivered on that day, and a reasonable transportation plan is to be provided for each vehicle of the distribution centre. The distribution centre is known to have 10 vehicles, all small freight cars with a maximum load of 3 tons (1500 kg). The fixed start-up cost of each vehicle is 100 yuan, the running cost is 4 yuan per kilometre, and the loading and unloading time is 10 min per stop; other time consumption is ignored for the moment. The maximum penalty fee is RMB 50 yuan per vehicle. On the given day, 30 customers need goods shipped from the distribution centre.
3.2 Experimental Verification
The ant colony optimization algorithm is compared with the genetic algorithm and the traditional ACA, in order to show that the designed ant colony optimization algorithm can obtain the optimal solution when solving the path optimization model with time windows. To make the simulation results comparable and persuasive, the same group of actual data is used for experimental verification in both comparisons.


Under the condition that all platforms and hardware devices are consistent, the specific operating environment is as follows: the CPU runs at 3.40 GHz, the memory is 8 GB, the operating system is Windows 10, and the simulation software is Matlab 2019a.
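For illustration, a hedged sketch of the distribution cost implied by the settings in Sect. 3.1 (fixed start-up cost, per-kilometre cost and time-window penalty); the function and argument names are hypothetical, not from the paper.

```python
def route_cost(total_mileage_km, num_vehicles, penalty_cny,
               startup_cny=100, per_km_cny=4):
    """Total distribution cost under the assumed settings of Sect. 3.1.

    Fixed start-up cost per vehicle plus a per-kilometre charge plus the
    time-window penalty; loading/unloading time (10 min per stop) only
    affects the schedule, not the monetary cost, in this sketch.
    """
    return num_vehicles * startup_cny + total_mileage_km * per_km_cny + penalty_cny
```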

4 Simulation Experiment Results and Analysis
Because a single experiment involves a certain amount of randomness and chance, each of the three algorithms (the traditional ACA, the genetic algorithm and the ant colony optimization algorithm) is run 10 times in order to better validate the searching capability of the ant colony optimization algorithm on the model. After each run, the number of iterations to the optimum and the total transportation cost of the model are recorded, and the experimental results of the 10 runs are averaged, as shown in Table 1.

Table 1. Results of three algorithms for path optimization problems

Algorithm | average mileage (km) | total mileage (km) | punishment cost (CNY) | average cost (CNY) | total cost (CNY)
ACA | 423 | 1051 | 243 | 2104 | 5708
GA | 415 | 1034 | 346 | 2057 | 5547
ACOA | 402 | 918 | 134 | 2019 | 5396

Fig. 4. Results of three algorithms for path optimization problems (comparison of average mileage, total mileage, average cost, punishment cost and total cost for ACA, GA and ACOA)

As shown in Fig. 4, in the comparison of the final time penalty costs, one of the vehicles in the solution produced by the ant colony optimization algorithm incurs no time penalty cost at all, while the other vehicles incur only small penalty costs, so the delivery service quality of these vehicles is good.


According to the average driving distance of each vehicle, the ant colony optimization algorithm can reduce the driving distance and driving cost to a certain extent. Compared with the ACA and the genetic algorithm, the fusion algorithm saves 5.4% and 2.7% of the total transportation cost, respectively.
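The savings quoted above can be reproduced from the total costs in Table 1 (the exact rounding differs slightly):

```python
# Reproducing the savings quoted above from the total costs in Table 1.
aca, ga, acoa = 5708, 5547, 5396
print(f"saving vs ACA: {(aca - acoa) / aca:.1%}")  # ~5.5% (reported as 5.4%)
print(f"saving vs GA:  {(ga - acoa) / ga:.1%}")    # ~2.7%
```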

5 Conclusions
Establishing reasonable logistics distribution paths can effectively improve the efficiency of logistics distribution, greatly reduce the distribution cost of enterprises, improve enterprise service quality and competitiveness, and also support the sustainable and healthy development of the national economy. Vehicle routing planning has therefore always been a topic of wide social concern. In this work, the ACA is used to solve the vehicle path design problem, and the traditional ACA is improved. The improved algorithm introduces restrictions such as vehicle load, maximum travel distance and time windows, and takes distribution cost as the main optimization target. Although the improved algorithm makes up for some shortcomings of the traditional ACA, it is still immature and the quality of its solutions needs to be improved. We hope that the performance of the improved ACA can be further improved and that it can be combined with other intelligent algorithms to improve the performance of the ACA.


The Development of Power Grid Digital Infrastructure Based on Fuzzy Comprehensive Evaluation Shengyao Shi(B) , Qian Miao, Shiwei Qi, Zhipeng Zhang, and Yuwei Wang State Grid Jilin Electric Power Co., Ltd., Economic and Technical Research Institute, Changchun 130000, Jilin, China [email protected]

Abstract. In order to determine the development form of the new digital infrastructure of the Jilin power grid, the key factors affecting the development form are analyzed. The paper first summarizes and sorts out all kinds of potential influencing factors, and then uses the Delphi method to screen them, so as to form an evaluation index system. A fuzzy comprehensive evaluation model is then constructed to evaluate and study the performance of the three development forms: the Industrial Internet, the Big Data Center and the Application of New Technology. Finally, the key factors and weak links are determined from the performance of the evaluation indicators. The evaluation results show that the response of the Jilin provincial government has been affirmed but is not strongly targeted, that the economic structure and the development of new digital infrastructure complement each other, and that the demand for social development is strong, although some areas still lack core technology. Keywords: Fuzzy Evaluation · the Digital Infrastructure · Influencing Factors

1 Introduction
Jilin Province has always been an important old industrial base in China, with a prominent strategic position. Although Jilin Province made major adjustments to its industrial structure and the overall economy maintained stable development during the "13th Five-Year Plan" period, its total economic output and development speed still lag behind the national average. The development of new digital infrastructure is therefore an effective way to promote local economic development. As an important part of the new infrastructure, new digital infrastructure is not only the basis of the development of the digital economy but also the long-term strategic direction of new infrastructure construction. New digital infrastructure does not open up brand-new high-tech industries on top of existing industries; rather, it makes full use of digital technology to tap the value of data and information and, through the intelligent transformation of transportation, energy, industry and other traditional industries, improves the development efficiency of these industries.


Specific to the power grid, new digital infrastructure is the concrete practice of 5G networks, artificial intelligence, the industrial internet, big data centers and many other technologies and applications in power grid companies [1, 2]. The development of new digital infrastructure by power grid companies is the key to the transformation and upgrading from a single power company to an energy internet enterprise [3–5]. At present, Jilin power grid is actively promoting the construction and development of new digital infrastructure, which has taken three major development forms: the Industrial Internet, the Big Data Center and the Application of New Technology. In the actual construction process, however, it is necessary to explore the key influencing factors of these three forms, so as to find the weak links and put forward corresponding solutions. For this reason, this paper takes Jilin Province as an example, builds an evaluation index system of influencing factors, and uses fuzzy comprehensive evaluation to analyse the results, in order to provide a theoretical basis for Jilin Power Grid to determine the focus of its digital new infrastructure development.

2 Construction of the Evaluation Index System
2.1 Sorting of Influencing Factors
In this study, with the help of a literature review and field investigation, it is concluded that the influencing factors cover the government response, economic, social and technical levels [6]. The government response level affects the speed and direction of new infrastructure construction and plays an important role in the development of power grid digital infrastructure; the economic level determines whether the new power grid digital infrastructure can operate in a healthy and stable way; the social level determines the scale and direction of the construction; and the technical level determines whether some of the needs of the new power grid digital infrastructure can actually be realized. Together these factors affect the development of the new power grid digital infrastructure.
2.1.1 Government Response Level
The new digital infrastructure of the power grid is derived from the new infrastructure category put forward at the national level, so enterprises take an active part in it at the call of the government. In the process of participation, responses at all levels of government will have an impact on it. The details are shown in Fig. 1.


Fig. 1. Influencing factors at the level of government response

2.1.2 Economic Level At the economic level, the main influencing factors are the economic strength of the power grid company itself in the construction of the project and the overall external economic operation. The investment planning capacity of power grid enterprises determines the development scale of the new digital infrastructure. In addition, the external macroeconomic operation is also a factor that can’t be ignored. The details are shown in Fig. 2.

Fig. 2. Influencing factors at the economic level

2.1.3 Social Level The convenience brought by the construction of new digital infrastructure will increase the demand for new digital infrastructure. At the same time, construction activities will promote the development of industry. Therefore, the development needs of these industries will also have a certain impact on the development of new power grid digital infrastructure. The details are shown in Fig. 3.


Fig. 3. Influencing factors at the social level

2.1.4 Technical Level Market demand is a major factor to promote the development of new power grid digital infrastructure. The investment degree of the power grid to the relevant technical personnel, the investment level to the technology and the matching and coordination ability with the original facilities and equipment all have a relevant impact on the investment and decision-making of enterprises. Technical standards also have an important impact on the development of new digital infrastructure of power grid. The details are shown in Fig. 4.

Fig. 4. Influencing factors at the technical level


2.2 Screening of Evaluation Indicators
In order to build a practical and effective index system that can be used to evaluate and measure the key factors affecting the construction and development of new digital infrastructure in the power grid, on the basis of extensively combing and summarizing all kinds of potential influencing factors, we follow the principles of dynamism, operability and comparability [7] and use the Delphi method [8, 9], inviting five experts to rate the sorted influencing factors according to their influence on the construction and development of the digital new infrastructure. Factors scoring below the average are screened out, and the evaluation index system is formed from the remaining factors, as shown in Fig. 5:

- Government response level: tax incentives; R&D funds input; laws, regulations and policies; electricity price support for industry and commerce.
- Economic level: corporate profitability; financing ability; level of economic development; the industrial structure.
- Social level: convenience to the society; urgency and necessity; development of related industries; relevant talent level; level of high-tech industry.
- Technical level: technological innovation ability; technical barriers; technical maturity; the industry standard; data confidentiality; match with the original facilities; the difficulty of construction and development.

Fig. 5. Comprehensive evaluation index system
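A minimal sketch of the screening rule described above (factors whose mean expert score falls below the overall average are dropped); the factor names and scores below are hypothetical and only illustrate the filtering step.

```python
# Hypothetical Delphi scores: mean rating given by the five experts per factor.
scores = {"Factor A": 4.2, "Factor B": 4.6, "Factor C": 2.1}
threshold = sum(scores.values()) / len(scores)          # overall average score
retained = [f for f, s in scores.items() if s >= threshold]
print(retained)   # factors kept for the evaluation index system
```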

3 Construction of the Fuzzy Comprehensive Evaluation Model and Result Analysis
3.1 Weighting
After obtaining the final comprehensive evaluation index system, the weights of the target layer are assigned according to the importance of each index. For this purpose, this study uses AHP (the Analytic Hierarchy Process) to carry out the weighting [10].


Considering that Yaahp, a mature commercial software package dedicated to AHP weighting, is available, this study uses this software to carry out the calculation. The results are shown in Table 1.

Table 1. Weighting results based on the AHP method

Index | Weight
Tax incentives | 0.0916
R&D funds input | 0.1879
Laws, regulations and policies | 0.0493
Electricity price support | 0.0220
Financing ability | 0.0504
Ability to use funds | 0.0120
Level of economic development | 0.0857
The industrial structure | 0.0411
Convenience to the society | 0.0046
Urgency and necessity | 0.0519
Development of Related Industries | 0.0090
Relevant talent level | 0.0278
Level of high-tech industry | 0.0160
Technological innovation ability | 0.0900
Technical barriers | 0.0147
Technical maturity | 0.1300
The industry standard | 0.0094
Data confidentiality | 0.0416
Match with the original facilities | 0.0416
The difficulty of construction and development | 0.0234
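For readers unfamiliar with AHP, the following sketch shows one common way to derive weights from a pairwise comparison matrix (geometric-mean approximation of the principal eigenvector); the 3x3 matrix is purely illustrative and is not the judgment matrix used with Yaahp in this study.

```python
import numpy as np

# Illustrative 3x3 pairwise comparison matrix (not from the paper).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
gm = A.prod(axis=1) ** (1.0 / A.shape[0])   # row geometric means
weights = gm / gm.sum()                      # normalized priority weights
print(weights.round(4))
```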

3.2 Fuzzy Comprehensive Evaluation
Considering that most of the indicators in the comprehensive evaluation index system of key factors constructed in this project are qualitative indicators with fuzzy attributes, the fuzzy comprehensive evaluation method is selected to construct the evaluation model. Fuzzy comprehensive evaluation is a method for comprehensively evaluating fuzzy objects that are difficult to define precisely; it was gradually developed on the basis of the fuzzy mathematics theory put forward by Zadeh in 1965. The evaluation in the fuzzy comprehensive evaluation model is divided into five levels. An evaluation set V = {v1, v2, …, v5} = {excellent, good, average, fair, poor} is set up, and the corresponding measurement scale vector H = {100, 80, 60, 40, 0} is established. In the form of a questionnaire survey, a total of 30 relevant experts were invited to score the factors affecting the digital new infrastructure of the power grid, and 28 valid questionnaires were collected. Taking the "Industrial Internet" as an example, the detailed process of fuzzy comprehensive evaluation is as follows.


3.2.1 First-Level Fuzzy Comprehensive Evaluation
Taking the government level as an example, the sub-factor evaluation results of the indicators at the government level are shown in Table 2:

Table 2. Evaluation results of government-level factors of the Industrial Internet

Grade level | Excellent | Good | Average | Fair | Poor
Tax incentives | 0 | 7 | 7 | 0 | 14
R&D funds input | 0 | 7 | 21 | 0 | 0
Laws, regulations and policies | 0 | 7 | 21 | 0 | 0
Electricity price support | 0 | 0 | 7 | 14 | 7

Through the method of fuzzy statistics, the fuzzy comprehensive evaluation matrix at the government level is:

R_1 =
\begin{bmatrix}
0 & 0.25 & 0.25 & 0 & 0.5 \\
0 & 0.25 & 0.75 & 0 & 0 \\
0 & 0.25 & 0.75 & 0 & 0 \\
0 & 0 & 0.25 & 0.5 & 0.25
\end{bmatrix}    (1)

In this paper, the weighted average operator is selected for the fuzzy operation, and the comprehensive evaluation vector at the government level is obtained as:

[0.0916, 0.1879, 0.0493, 0.022] \times R_1 = [0, 0.0822, 0.2063, 0.011, 0.0513]    (2)
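A short numerical check of Formula (2), assuming NumPy: it reproduces the government-level evaluation vector from the weights and the matrix R1 above.

```python
import numpy as np

# First-level fuzzy evaluation for the government layer (weighted average operator).
w_gov = np.array([0.0916, 0.1879, 0.0493, 0.022])
R1 = np.array([[0, 0.25, 0.25, 0,   0.5],
               [0, 0.25, 0.75, 0,   0],
               [0, 0.25, 0.75, 0,   0],
               [0, 0,    0.25, 0.5, 0.25]])
B_gov = w_gov @ R1
print(B_gov.round(4))   # [0.     0.0822 0.2063 0.011  0.0513]
```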

By the same token, the comprehensive evaluation vector at the economic level is [0.0103, 0.0438, 0.0746, 0.0606, 0], the comprehensive evaluation vector at the social level is [0.0130, 0.0572, 0.0202, 0.019, 0], and the comprehensive evaluation vector at the technical level is [0.0557, 0.0388, 0.1325, 0.0914, 0.0325].
3.2.2 Two-Level Fuzzy Comprehensive Evaluation
On the basis of the first-level fuzzy comprehensive evaluation, the fuzzy comprehensive evaluation of the first-layer factor set is obtained:

[0.3507, 0.1892, 0.1093, 0.3507] \times
\begin{bmatrix}
0.0000 & 0.0822 & 0.2063 & 0.0110 & 0.0513 \\
0.0103 & 0.0438 & 0.0746 & 0.0606 & 0.0000 \\
0.0130 & 0.0572 & 0.0202 & 0.0190 & 0.0000 \\
0.0557 & 0.0388 & 0.1325 & 0.0914 & 0.0325
\end{bmatrix}
= [0.0229, 0.0570, 0.1351, 0.0494, 0.0294]    (3)

The normalized comprehensive evaluation vector of the first-layer factor set is [0.0779, 0.1939, 0.4599, 0.1683, 0.1].
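Likewise, the second-level aggregation, normalization and score transformation for the Industrial Internet can be sketched as follows; the small difference from the 57.6255 reported in Table 3 comes from rounding of the intermediate vectors.

```python
import numpy as np

# Second-level aggregation and score transformation for "Industrial Internet".
w2 = np.array([0.3507, 0.1892, 0.1093, 0.3507])      # gov, econ, social, tech
R2 = np.array([[0.0000, 0.0822, 0.2063, 0.0110, 0.0513],
               [0.0103, 0.0438, 0.0746, 0.0606, 0.0000],
               [0.0130, 0.0572, 0.0202, 0.0190, 0.0000],
               [0.0557, 0.0388, 0.1325, 0.0914, 0.0325]])
B = w2 @ R2
B_norm = B / B.sum()                                  # ~[0.0779 0.1939 0.4599 0.1683 0.1]
H = np.array([100, 80, 60, 40, 0])                    # measurement scale
print(round(float(B_norm @ H), 4))                    # ~57.63 (Table 3 reports 57.6255)
```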


3.2.3 Score Transformation
According to the scores corresponding to the evaluation set of each factor set, the score of each index can be obtained, as shown in Tables 3, 4 and 5.

Table 3. Fuzzy evaluation values of the first-level indicators of the Industrial Internet

Grade level | Excellent | Good | Average | Fair | Poor | Score
Government | 0.0000 | 0.2656 | 0.4976 | 0.0418 | 0.1950 | 52.7765
Economic | 0.0724 | 0.3087 | 0.5254 | 0.0936 | 0.0000 | 67.1987
Society | 0.1583 | 0.4983 | 0.1610 | 0.1824 | 0.0000 | 72.6502
Technical | 0.2116 | 0.1473 | 0.3333 | 0.1842 | 0.1236 | 60.3118
Overall | 0.0779 | 0.1939 | 0.4599 | 0.1683 | 0.1000 | 57.6255

Table 4. Fuzzy evaluation values of the first-level indicators of the Big Data Center

Grade level | Excellent | Good | Average | Fair | Poor | Score
Government | 0.2656 | 0.0000 | 0.3124 | 0.3349 | 0.0870 | 58.7020
Economic | 0.0000 | 0.0211 | 0.8340 | 0.0724 | 0.0724 | 54.6300
Society | 0.0140 | 0.3608 | 0.1610 | 0.3794 | 0.0848 | 55.1022
Technical | 0.0140 | 0.3564 | 0.3466 | 0.1594 | 0.1236 | 57.0858
Overall | 0.0943 | 0.1908 | 0.4188 | 0.2163 | 0.0797 | 58.4785

Table 5. Fuzzy evaluation values of the first-level indicators of the Application of New Technology

Grade level | Excellent | Good | Average | Fair | Poor | Score
Government | 0.2656 | 0.3124 | 0.0209 | 0.2931 | 0.1079 | 64.5325
Economic | 0.0000 | 0.0000 | 0.6154 | 0.3122 | 0.0724 | 49.4116
Society | 0.0140 | 0.6066 | 0.2458 | 0.0488 | 0.0848 | 66.6301
Technical | 0.1251 | 0.2567 | 0.1242 | 0.1711 | 0.3229 | 47.3434
Overall | 0.1355 | 0.3441 | 0.1572 | 0.2187 | 0.1445 | 59.2598

3.3 Results Analysis
As shown in Tables 3, 4 and 5, for the Industrial Internet the social level performs best among the first-level indicators, with a score of about 73, and 49.83% of the experts rate this level as "good". The government response level performs worst, with a score of about 53, and 49.76% of the experts rate this level as "average". Except for the government response level, the scores of the other three levels all exceed 60 points, but the government response level carries the highest weight, which drags down the overall evaluation value.


Generally speaking, at the government level, although most experts rate its performance as "good" or "average", no expert rates it as "excellent", so the government needs to provide more attention and help. The overall performance at the economic and social levels has been affirmed and needs to be maintained. Although the technical performance is good, experts differ in their grading, indicating that although certain technologies are in place, some key or advanced technologies still need to be strengthened.
For the Big Data Center, among its first-level indicators the government response level performs relatively best, with a score of about 59 points; 31.24% of the experts rate this level as "average", while 33.49% rate it as "fair", which keeps the score of this level below 60 points, and the scores of the other levels are even lower. The score at the economic level is the lowest, about 55 points; although 83.4% of the experts rate this level as "average", the weight of the economic level is relatively small, so it fails to drive the overall performance. Generally speaking, at the level of government response the opinions of experts are not unified, with "excellent", "average" and "fair" each accounting for a certain proportion, indicating that the government's response lacks pertinence. At the economic level, the vast majority of experts rate the performance as "average", which shows that, from an economic point of view, the current overall environment of Jilin Province is relatively unsuited to vigorously promoting the construction of the Big Data Center, and the lack of corresponding industrial support weakens the basis for its development. Expert opinions at the social level are seriously divided, with "good" and "fair" accounting for similar proportions, indicating that the degree of social recognition is relatively low and publicity needs to be strengthened further. The technical performance is worth affirming, which shows that the construction of the Big Data Center has a certain technical basis in Jilin Province and Jilin Power Grid.
In terms of the Application of New Technology, the social level performs best among the first-level indicators, with a score of about 67 points, and 60.66% of the experts rate this level as "good". The technical level is relatively the worst, with a score of about 47 points, and 32.29% of the experts rate this level as "poor". In addition to the technical level, the economic level also scores below 60; at this level about 93% of the experts rate the performance as "average" or "fair", which is the main reason for the low score. Generally speaking, the performance of the government response level is good, and the government's support for the application of new technology has been affirmed. At the economic level, no expert rates the performance as "good" or above, indicating that the economic benefits of the current application of new technology have not yet been fully revealed and attention needs to be focused on the value of technology application. The good performance at the social level shows that it is highly recognized by the community as a whole, which hopes that new technologies can be widely used.
On the other hand, there are different opinions on the evaluation of technology, indicating that there may still be some key technologies that need innovation and breakthrough, and we need to continue to strengthen the research on technological innovation and master the core technology.


4 Conclusion
In general, government support, economic development, social demand and technological progress all have a very important impact on the development of the new digital infrastructure of the power grid. According to the analysis in this paper, the government needs to strengthen its attention to and support for the Industrial Internet; the construction of the Big Data Center needs stronger social publicity; and the Application of New Technology requires stronger technological innovation capability and attention to its economic value. The new digital infrastructure of the power grid connects huge investment and demand on the one hand and a continuously upgrading consumer market on the other, and plays a very important role in stimulating the economic growth of Jilin Province and promoting high-quality economic development. In order to promote the further development of the new digital infrastructure of the power grid, the government should pay more attention to it and improve the relevant laws and regulations, and relevant enterprises should increase their investment in core technologies to master core competitiveness. Promoting the development of these undertakings step by step is very important for the further development of Jilin Power Grid.

References 1. Sahbani, S., Mahmoudi, H., Hasnaoui, A., et al.: Development prospect of smart grid in Morocco. Procedia Comput. Sci. 83, 1313–1320 (2016) 2. Saunavaara, J., Laine, A., Salo, M.: The Nordic societies and the development of the data centre industry: digital transformation meets infrastructural and industrial inheritance. Technol. Soc. 69, 101931 (2022) 3. Schappert, M., Von Hauff, M.: Sustainable consumption in the smart grid: from key points to rco-routine. J. Clean. Prod. 267, 121585 (2020) 4. Rahim, S., Wang, Z., Ju, P.: Overview and applications of Robust optimization in the avantgarde energy grid infrastructure: a systematic review. Appl. Energy 319, 119140 (2022) 5. Argyroudis, S.A., Mitoulis, S.A., Chatzi, E., et al.: Digital technologies can enhance climate resilience of critical infrastructure. Clim. Risk Manag. 35, 100387 (2022) 6. Silva, P.M., Moutinho, V.F., Moreira, A.C.: Do social and economic factors affect the technical efficiency in entrepreneurship activities? Evidence from European countries using a two-stage DEA model. Socio-Econ. Plann. Sci. 82, 101314 (2022) 7. Yvars, P.-A., Zimmer, L.: A model-based synthesis approach to system design correct by construction under environmental impact requirements. Procedia CIRP 103, 85–90 (2021) 8. Chinyundo, K., Casas, J., Bank, R., et al.: Delphi method to develop a palliative care tool for children and families in sub-Saharan Africa. J. Pain Symptom Manag. 63(6), 962–970 (2022) 9. Rahayu, P., Wulandari, I.A.: Defining e-portfolio factor for competency certification using fuzzy Delphi method. Procedia Comput. Sci. 197, 566–575 (2022) 10. Alaqeel, T.A., Suryanarayanan, S.: A fuzzy Analytic Hierarchy Process algorithm to prioritize Smart Grid technologies for the Saudi electricity infrastructure. Sustain. Energy Grids Netw. 13, 122–133 (2018)

Virtual Reality Technology in Indoor Environment Art Design Shuran Zhang(B) College of Landscape Architecture and Arts, Northwest A&F University, Yangling, Xianyang 712100, Shaanxi, China [email protected]

Abstract. The data deconstruction ability and visual imaging ability of virtual reality technology give it great prospects in the field of interior environment art design. This paper studies the application of virtual reality technology in the indoor environmental art design of Internet of Things smart homes by analysing how virtual reality technology is applied in the expression of indoor environmental art design. Through analysis and research on how virtual reality technology deconstructs and reshapes interior environmental art design, this paper discusses the new ideas and methods that virtual reality technology brings to indoor environmental art design in the comparison and selection of design schemes, communication with users, demonstration of construction technology and so on, which makes indoor environmental art design more reasonable and efficient and contributes to its overall scientific control. This article mainly studies the application of virtual reality technology in interior environment art design, introduces the concept and principles of virtual reality technology, and applies professional processing to interior environment art design. The data tests show that the application of virtual reality technology in indoor environmental art design effectively improves the level of indoor environmental art design. Keywords: Virtual Reality Technology · Indoor Environment · Art Design · Technology Application

1 Introduction
Environmental art design allows designers to extract features of the world in better ways, and an interactive environment lets designers add innovative thinking, ideas and creative approaches to their designs, improving the creative elements so that the design can perform at its best from beginning to end. In this period, however, many aspects of environmental art design are not yet sound and still need to be improved and innovated; with the continuous progress of technology, its effects will become more powerful. The application of virtual reality technology in interior environment art design provides a set of technical tools for solving interior environment art design problems and provides strong support for interior environment art design.


Many scholars at home and abroad have studied virtual reality technology. Among studies abroad, Pletz C noted that immersive virtual reality (IVR) can now be used on a large scale, but few organizations in German-speaking countries seem to have used this technology extensively in education and training; as a result, little is known about its acceptance, and the question is how to interpret the degree of technology acceptance and what specific technology influences can be identified in the field of training [1]. Atsz O proposed using virtual reality technology to prevent the transmission of the novel coronavirus; in addition, COVID-19 can be viewed as an opportunity for industries and destinations to market their products and services, so this technology will be very helpful in the post-COVID-19 recovery of tourism [2]. Akdere M explored the effectiveness of virtual reality (VR) technology as a platform for innovative learning in cultivating cross-cultural competence, including cross-cultural knowledge, attitudes and beliefs; the study was based on data (n = 101) from STEM undergraduates in a first-year technical program at a large public university in the Midwest, and paired-sample t-test results assessing each component of cross-cultural competence showed that average scores increased from before (T1) to two weeks after (T2) the VR intervention [3]. VR technology adds many new elements to works of environmental art and design, and also promotes the development of the social economy, productivity and innovative thinking, bringing great significance for reform. Although VR technology is powerful, technology ultimately exists to help people: VR technology highlights its own characteristics and solves users' difficulties from multiple directions and angles, so as to strengthen the relationship between design and people [4, 5].

2 Design Exploration of the Application of Virtual Reality Technology in Interior Environment Art Design
2.1 Virtual Reality Technology
In the narrow sense, virtual reality refers to the use of computer technology, wearable displays, 3D glasses, data rings, digital pods and other tools to realize enhanced virtual scenes. Various devices convert human physical behavior into computer signals, sending action signals to the computer and using sensors to obtain visual, auditory, tactile and other information. The transformation of these signals, from collection to storage in a database, retrieval from the database and return to the user, constitutes virtual technology in the narrow sense [6, 7]. The broad definition of virtual reality is the construction of a set of interactive environments similar to the real world. Besides the narrow concept above, it covers the complete set of equipment and facilities related to simulating reality, together with the relevant technologies and tools, with its core locked on spatial interaction and an expanded scope of application [8, 9]. The virtualization technology is shown in Fig. 1.

Fig. 1. Virtualization technology (virtual machines connected through the network and optical fiber to shared storage)

Virtual reality has the following basic features, as shown in Fig. 2:

Fig. 2. The basic features of virtual reality: immersion, interaction and imagination


1) Real immersion: immersion describes the extent to which the user experiences the virtual world as real, so it can be measured by how immersed the user is in the virtual environment. In the ideal case of virtual immersion, it is difficult for users to distinguish the real from the fake, with visual, auditory and other senses fused together.
2) Real-time interactivity: interactivity refers to the interaction between the user and the machine so that the virtual environment can be fully manipulated. Reasonable means are used to observe and imitate the virtual environment and to reproduce the natural environment. Many years ago, users mostly operated what was shown on a display screen through computers and various input devices; nowadays, with the rapid development of science and technology, many interactions and controls are completed with head-mounted devices, data gloves and other tools, which support practical applications such as motion locking, target tracking, visual presentation and voice interaction.
3) Rich conception: conceivability means enabling people to imagine objects or situations. In the virtual environment, conditions are created and virtual technology is used to present a scene or object and to operate on it; these actions are not real but conceived, taking place in a virtual scene [10–12].
2.2 Application of Virtual Reality Technology in Interior Environment Art Design
1) Virtual reality shows design works intuitively: traditional works usually require intermediate presentation steps and are often shown in hand-drawn, cartoon or architectural styles, but these expressions produce a monotonous and dull design effect, making it difficult to express the art of environmental design [13].
2) Virtual reality improves communication between the two sides of a project: virtual reality often helps projects promote their effects through its own functions. In the first stage, designs are created in the virtual environment; if there are problems or mistakes in the design, the virtual reality model can quickly locate the problem and its cause, and the designer can quickly correct the scheme. It is precisely because the degree of communication between the two sides increases that problems can be found in time and solutions put forward in time [14].
3) Virtual reality displays design elements and provides atmosphere: environmental art design has characteristics such as unity and integrity. Works generally contain a main scene and a background, so the background plays a very prominent role in environmental art design. The background usually includes green plants, people, cars, infrastructure and other parts used to set the atmosphere. Virtual reality decomposes the background and combines scattered, trivial individuals into a whole, so as to enhance the realistic aesthetic feeling and increase the user's sense of immersion [15, 16].
The naturalization design of gardens includes the design of plants, water and the human environment. Integrating plants into the interior fully reflects the integration of human beings and nature, giving the interior an internal and external connection.


Similarly, like a park, the interior can be decorated as a corner of a garden, with plants placed indoors as part of the layout. This kind of beauty is very vivid; at the same time it strengthens the purification of the air, gives full play to the role of the indoor space, and forms a harmonious unity between the limited indoor space environment and the unlimited outdoor space environment. Its application is shown in Fig. 3.

Fig. 3. Example of interior environment art design

3 Explore the Application Effect of Virtual Reality Technology in Interior Environment Art Design
The main equipment used for photo collection in this paper is a tripod and a CANON IXUS 95IS digital camera. The focal length was fixed at roughly 38 mm, and the photos were taken 30 degrees apart, giving a total of 12 photos. This also determines the horizontal viewing angle hfov of the digital camera:

hfov = 360 / n    (1)

where n is the number of photos taken. From this, the pixel focal length f of the camera can be estimated:

f = W / (2 tan(hfov / 2))    (2)

where W is the width of the real image taken by the camera.
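A small numerical sketch of Formulas (1) and (2); the image width W = 4000 pixels is an assumed example value, since the actual width is not given in the text.

```python
import math

n = 12                                   # number of photos per full rotation
hfov_deg = 360 / n                       # horizontal field of view = 30 degrees
W = 4000                                 # image width in pixels (assumed)
f = W / (2 * math.tan(math.radians(hfov_deg) / 2))
print(hfov_deg, round(f, 1))             # 30.0, ~7464.2 pixels
```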


VR uses not just one part of the scene, but the whole of it.
a. Preparation and conception: environmental art design uses scientific means to develop its concept. During the design process, the designer's ideas are often abstract and erratic; immersing the designer in the virtual environment, with information input from multiple angles and perspectives, constantly stimulates thinking and keeps it active. This is conducive to linking innovative thinking with the insertion of art and to realizing the design goals.
b. Elaboration and improvement stage: VR technology is now involved in the design process, allowing designers to examine designs in realistic VR. In VR, browsing can be very fast and the perspective relationships change automatically; objects can be generated, copied, scaled and otherwise manipulated at real size, and can be saved and modified, which gives more flexibility in design. VR technology is an operational 3D sketching tool: designers can not only work from a full-scale perspective but also observe local details closely. VR technology can be used to observe and refine the whole or the part, the macro or the micro, whether it is the layout in urban planning or a single screw in interior design.
c. Verification and demonstration phase: a good design scheme can achieve twice the result with half the effort. Applying VR technology at this stage can more fully express the multi-dimensional spatial image of environmental art; the scheme outcome is no longer a few cold pages, but a comprehensive effect of light and color that is exquisite, vivid, highly sensitive and strongly interactive, producing realistic effects and an emotional atmosphere with unexpected impact.

4 Investigation and Research Analysis of the Application of Virtual Reality Technology in Interior Environment Art Design
This section uses the Matlab experimental platform. The mean value method, the weighted average method and a multi-resolution spline method based on the optimal suture line were used to compare the image fusion effect after using rectangular positioning blocks. Three indexes are calculated between the original image and the fused image. The smaller the root mean square error (RMSE), the smaller the difference between the two images; the higher the SNR, the better the fusion effect and quality; and the larger the PSNR, the more information the fused image extracts from the original image, or the closer the fused image is to the ideal image, and hence the better the fused image. Table 1 reports the test data of the three algorithms, including the virtual reality technology method. After the positioning blocks are extracted, different image fusion algorithms yield different fusion effects, and the evaluation parameters are obtained by comparing the fused image with the image before fusion.
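For reference, the three metrics can be computed as in the following sketch (standard definitions, assuming NumPy arrays of identical shape; this is not the exact Matlab code used in the experiment):

```python
import numpy as np

# Quality metrics between a reference image and a fused image
# (float arrays of the same shape, pixel values in 0-255).
def rmse(ref, fused):
    return float(np.sqrt(np.mean((ref - fused) ** 2)))

def psnr(ref, fused, peak=255.0):
    mse = np.mean((ref - fused) ** 2)
    return float(10 * np.log10(peak ** 2 / mse))

def snr(ref, fused):
    noise = np.sum((ref - fused) ** 2)
    return float(10 * np.log10(np.sum(ref ** 2) / noise))
```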


Table 1. Comparison of three fusion methods for image processing

Image fusion method | PSNR | SNR | RMSE
Mean value method | 31.0012 | 62.3142 | 6.7001
Weighted average method | 31.0231 | 71.0123 | 5.2012
VR method | 32.0512 | 72.3212 | 4.0126

Fig. 4. Image fusion effect comparison diagram (PSNR, SNR and RMSE of the three fusion methods)

As can be seen from Table 1 and Fig. 4, the VR method gives a better fusion effect than the mean value method and the weighted average method for the same extracted rectangular positioning blocks. The rectangular positioning block algorithm used in this paper can extract most of the image information and achieve an ideal image mosaic effect, as shown in Fig. 4. The data show that applying virtual reality technology to indoor environment art design has a clearly better effect on indoor environment art design. In addition, this article compares virtual reality technology with traditional indoor environmental art and design technology: 10 environmental art professionals scored the designs produced under the two technologies, and the average scores were calculated to determine which technology is more suitable for current design. The scoring criteria are cleanliness, aesthetics, artistry and innovation. The statistical results are shown in Fig. 5:



Fig. 5. Comparison of the effects of the two technologies on environmental art and design

It can be seen from the data in the figure that, no matter which criterion is judged, the scores given by the environmental art and design professionals for designs produced with virtual reality technology are higher than those for traditional art and design. Therefore, virtual reality technology greatly promotes indoor environmental art and design, and its application here is fully in line with the design concept of this paper.

5 Conclusions
The cultural industry has gradually become a main component of the national economy, and the importance of innovation has been emphasized in development concepts. All industries and enterprises should take innovation as a new driving force and keep renewing themselves day by day. Design must carry out self-innovation and self-improvement, and needs continuous innovation, continuous progress and continuous development; new technologies, new thinking and new methods need to be applied. The application of virtual reality technology in interior environment art design provides the power of innovation for interior environment art design and a technical foundation for the development of interior environment art.

References 1. Pletz, C.: Which factors promote and inhibit the technology acceptance of immersive virtual reality technology in teaching-learning contexts? Results of an expert survey. Int. J. Emerg. Technol. Learn. (iJET) 16(13), 248 (2021) 2. Atsz, O.: Virtual reality technology and physical distancing: a review on limiting human interaction in tourism. J. Multidiscip. Acad. Tour. 6(1), 27–35 (2021)


3. Akdere, M., Acheson, K., Jiang, Y.: An examination of the effectiveness of virtual reality technology for intercultural competence development. Int. J. Intercultural Relat. 82(1), 109– 120 (2021) 4. Appel, L., Peisachovich, E., Sinclair, D.: CVRRICULUM program: benefits and challenges of embedding virtual reality as an educational medium in undergraduate curricula. Int. J. Innov. Educ. Res. 9(3), 219–236 (2021) 5. Katona, J.: A review of human-computer interaction and virtual reality research fields in cognitive InfoCommunications. Appl. Sci. 11(6), 2646 (2021) 6. Lee, S.Y., Bak, S.H., Bae, J.H.: An effective recognition method of the gripping motion using a data gloves in a virtual reality space. J. Digit. Contents Soc. 22(3), 437–443 (2021) 7. Takami, A., Taguchi, J., Makino, M.: Changes in cerebral blood flow during forward and backward walking with speed misperception generated by virtual reality. J. Phys. Ther. Sci. 33(8), 565–569 (2021) 8. Thomas, S.: Investigating interactive marketing technologies-adoption of augmented/virtual reality in the Indian context. Int. J. Bus. Compet. Growth 7(3), 214–230 (2021) 9. Mancini, M., Cherubino, P., Cartocci, G., et al.: Forefront users’ experience evaluation by employing together virtual reality and electroencephalography: a case study on cognitive effects of scents. Brain Sci. 11(2), 256 (2021) 10. Rauf, F., Hassan, A.A., Adnan, Z.: Virtual reality exergames in rehabilitation program for cerebral palsy children. Int. J. Comput. Appl. 183(19), 46–51 (2021) 11. Abbas, R.L., Cooreman, D., Sultan, H.A., et al.: The effect of adding virtual reality training on traditional exercise program on balance and gait in unilateral, traumatic lower limb amputee. Games Health J. 10(1), 50–56 (2021) 12. Mathis, F., Williamson, J.H., Vaniea, K., et al.: Fast and secure authentication in virtual reality using coordinated 3D manipulation and pointing. ACM Trans. Comput.-Hum. Interact. 28(1), 1–44 (2021) 13. Guaitolini, M., Petros, F.E., Prado, A., et al.: Evaluating the accuracy of virtual reality trackers for computing spatiotemporal gait parameters. Sensors 21(10), 3325 (2021) 14. Ma, J., Wu, D., Huang, M.: The detection system of automobile interior environment based on single chip microcomputer. Open Access Libr. J. 8(12), 9 (2021) 15. Gao, J., Kim, I.S.: A comparative study on healing environment elements and WELL building standard criteria for interior environment design of geriatric hospitals. Korean Inst. Inter. Des. J. 29(1), 72–80 (2020) 16. Jamshidi, S., Pati, D.: A narrative review of theories of wayfinding within the interior environment. HERD 14(2), 290–303 (2020)

Application of Digital Virtual Art in 3D Film Animation Design Effect Yueping Zhuang(B) Xiamen University Tan Kah Kee College, Zhangzhou 363105, Fujian, China [email protected]

Abstract. The appearance of 3D film and television animation (TA) has brought revolutionary changes to the development of the animation industry. Digital virtual art is a key element in the design of 3D film and TA and affects the overall effect of 3D animation. The purpose of this paper is to study the application of digital virtual art in 3D film and TA design effects. This paper analyzes the concept of computer animation, analyzes and summarizes the technical characteristics and applications of 2D and 3D animation according to spatial dimension, and summarizes the performance characteristics and advantages of computer animation. It summarizes the advantages of digital animation technology in 3D film and TA and the development direction of 3D film and TA in digital animation, analyzes the advantages of animation production technology in 3D film and TA, analyzes the existing problems of 3D film and TA from aspects such as animation production technology and special effects technology, puts forward production principles for 3D film and TA, and proposes technical methods for 3D film and TA production and transformation. Experiments show that, in a 100-person questionnaire survey, the vast majority of audiences prefer animations produced with digital virtual art; the speed and sophistication of digital virtual art production are far superior to previous traditional animation production technology. Keywords: Digital Virtual Art · 3D Film and Television Animation · Special Effects Art · Digital

1 Introduction
Three-dimensional animation has unlimited expressive power: it can simulate the most realistic scenes, characters and props, and can show special effects such as wind, rain, lightning and fog, with a degree of realism that can reach the point where the audience cannot distinguish the footage from reality. The span of its picture performance is therefore very large. The advantages of virtual digital features are evident in digital films such as Final Fantasy: The Spirits Within, in which all scenes were simulated digitally; in particular, the heroine's hair, skin and expressions approach those of a real person. Only now can digital technology simulate such an amazing sense of reality.


Digital animation creates a virtual digital world reconstructed from the real world. Three-dimensional animation adopts the virtuality of digital technology to make its pictures rich: although its production process is very close to that of an actual film, it is completed in the virtual environment of the computer and can also realize the picture effects of two-dimensional cartoons, effects that are impossible to achieve with traditional two-dimensional means [1, 2].
Many scholars at home and abroad have studied the application of digital virtual art in 3D film and TA design and achieved good results. For example, Jankoska M studies the development process of digital virtual art from the historical traces of art development, explaining this process through a large number of examples and illustrations [3]. Eremeeva P A uses new digital media technology to present works to the audience in new and innovative forms: people can directly have a dialogue with the characters in a painting, many works have obtained new forms of presentation, and a new interpretation of a work from a certain perspective is also a way to understand the author's creative intention [4].
This paper first reviews the development process of digital virtual art; secondly, it sorts out and summarizes the concept, composition and classification of digital virtual art; it then studies the influence of digital virtual art on film and TA from four aspects: creation tools, work display, work presentation and expressive themes; next it studies the specific application of digital virtual art in film and TA, mainly from the application status quo, application examples and application value; finally, it gives a summary and outlook for digital virtual art, analyzing the important factors that determine its nationalized and internationalized development from three aspects, thus clarifying the direction of its development. At the end of this paper, two groups of comparative experiments are conducted to compare digital virtual art with traditional 3D animation technology, from which we can see that digital virtual art is the best choice for the future development of 3D animation.

2 Research on the Application of Digital Virtual Art in the Design Effect of 3D Film and TA
2.1 Characterization of Digital Virtual Technology
(1) Integration of technology and art. Digital virtual art is a new art form formed by the combination of digital media technology and art. It has developed from flat and static to dynamic and comprehensive, from single media to multimedia, and from two-dimensional to three-dimensional, which has greatly enriched the media language. The changes brought about by digital media have a profound impact on the internal relationships among artists, works and audiences, which are fully reflected in digital virtual reality. With the support of digital multimedia technology, digital virtual art has achieved all-round innovation in forms of artistic expression such as creation, carrying and dissemination, and has undergone profound changes in aesthetic consciousness, experience and thinking. During this period, the audience's psychological state, value orientation, aesthetic awareness and concept of time have all changed.


Digital virtual art creates fantastic and surreal scenes through digital technology and the powerful imaging ability of computers, making people's real feelings resemble a dream full of uncertainty, in which perception seems to float free of gravity. In the world of images, viewers can touch and follow the surface and language changes of creatures created by computers, as well as the height, depth and speed of huge, exciting and numbing spaces experienced individually or in groups; a new virtual aesthetic experience is born.
(2) Construction of virtual reality. Through their own unique ways, artists have discovered virtual reality and applied it to creative activities. The establishment of virtual reality simulates human sensations through computers so as to stimulate the senses and nerves. Artists strive to break through the restrictions of the screen and use computers to create realistic three-dimensional visual, auditory, tactile and other sensations, letting the audience enter an imagined image; they try to attract the audience's eyes and other senses with virtual images and simulation technology so as to give the viewer the most realistic impression. At present there are three types of digital virtual reality systems: the first is interactive virtual reality, which uses a virtual viewing space together with a 3D joystick for interaction; the second is live virtual reality, which uses a variety of output devices to let people experience real life; the third is distributed virtual reality, which uses virtual reality technology to connect different audiences so that they can communicate within a shared virtual world. All three systems stimulate people's senses through illusions, so that viewers feel they are inside the artistic atmosphere created by the artist and obtain an immersive aesthetic experience of vision, hearing, taste, smell, touch and so on. The faster the computer, the more real the feeling. Virtual reality describes an unreal space composed of illusion and feeling, and the viewer experiences an illusion when moving through this space. Virtual reality technology blurs, negates or eliminates the difference between reality and the virtual world by using the latest image technology; its purpose is to create an illusion that is difficult to distinguish from the truth, so that the senses receive the same information as would be generated by reality, thus inducing the brain to believe it is in a real environment. In other words, digital virtual art realizes another kind of "reality" by processing real images.
2.2 Influence of Digital Virtual Art on Film and TA Design
Due to the rise of modern technology, the wave of virtual technology has swept through all aspects of people's daily life, study and work, and the development of modern art is bound to rely increasingly on technology. The application of virtual technology has a profound impact on the overall development of art; virtual art was born in this environment as an art form that introduces elements of virtual technology into artistic creation. At present, the academic community's understanding of virtual art is still inconsistent: the meaning of "virtual" must first be clarified before a clearer understanding can be reached. The application fields of virtual technology are shown in Fig. 1.
With the rapid development of technology, new digital images and media have spread to every aspect of our lives.

Fig. 1. Application of virtual technology (fields: game, teaching, urban construction, aerospace, rail transit, medical care)

With the rapid development of technology, new digital images and media have spread into every aspect of our lives. Owing to advances in computer technology, many kinds of design software have emerged, contributing significantly to the creation of traditional entertainment and communication systems. With the popularization and ease of use of network technology, the network has become an important channel for obtaining information in modern life, with communication platforms including video websites such as Youku, Tudou, iQiyi, Letv and Thunder watch, and the media have likewise become an important means of communication. The independence and diversity of new mixed-media entertainment make up for weaknesses such as the isolation and poor interactivity of film and television, and the public can choose the programs they watch according to their own preferences. With the development of online video platforms and multimedia, both original and in-depth content services have found room to grow. Building on earlier forms, the Internet also offers digital painting, printing, collage, independent picture frames and other aesthetic services that are well received by the public [5, 6].


2.3 The Application of Digital Virtual Art in Film and TA
The application of digital virtual art in film and TA has the following characteristics. 1. Digital virtual art is used more and more widely in film and TA: from purely animated scenes, to scenes mixing live action and animation, to purely realistic scenes, almost every film and TA work now contains elements of digital virtual art. 2. Digital virtual art plays an ever more powerful role in film and TA: from depicting character detail, to rendering grand scenes, to producing accurate and smooth expressions and behavior, everything becomes more vivid and expressive because of digital virtual art. 3. Digital virtual art permeates the whole production process of film and TA. This process is a close combination of technology and art, a process of artistic refinement, and a process by which technology is realized artistically. In modern film and TA production, the thinking of digital virtual art must be integrated from the conception of the plot, considering which scenes can be realized through digital virtual art and which scenes already used by predecessors can be presented entirely through it. During production, character modeling, behavior and performance methods all need to incorporate the thinking and means of digital virtual art; for example, image matting requires actors to perform in front of a blue screen, and 3D films need special shooting angles and camera positions. When the work is screened, equipment and props such as 3D cinemas and 3D glasses express the digital virtual art [7, 8].
2.4 Application Value of Digital Virtual Art in Film and TA
At a deeper technical level, the greatest value of digital images in the film and television industry is that they can meet the needs of film and television development projects to the greatest extent. According to Marxism, the value of a commodity is its ability to satisfy certain human needs; digital images can satisfy some of the needs of film and television development, and this is their value to film and television. The highest value of a medium lies in works that disseminate and express human meaning. From stone inscriptions, books and live performance to film and TA, from description, illustration and simulation to realism, the development and transformation of media is a continuation of that expressive tradition. The key points are richer expression, faster absorption and greater flexibility; this is the value of digital images in film and television [9, 10].
2.5 Application of the Sampling Survey Algorithm
To study the application of digital virtual art in the design effect of 3D film and TA, a questionnaire survey was conducted among 2,000 people in 20 cities. The questionnaire analysis mainly uses the Bayesian algorithm, and the Bayesian classification algorithm is described as follows: (1) Each data sample is represented by an n-dimensional feature vector X = {x1, x2, ..., xn}, whose components are the sample's measurements on the n attributes A1, A2, ..., An.


(2) Assume there are m classes, denoted C1, C2, …, Cm. To assign an unknown sample X to class Ci, the condition in formula (1) must be met:

$$P(C_i \mid X) > P(C_j \mid X), \quad 1 \le j \le m,\ j \ne i \tag{1}$$

where $P(C_i \mid X)$ is the maximum posterior probability.
(3) If the prior probabilities of the classes are unknown, they are taken to be equal, $P(C_1) = P(C_2) = \dots = P(C_m)$; otherwise $P(C_i) = S_i / S$, where $S_i$ is the number of training samples in class Ci and S is the total number of training samples. The posterior is computed with the Bayesian formula [11, 12]:

$$P(C_i \mid X) = \frac{P(X \mid C_i)\,P(C_i)}{P(X)} \tag{2}$$
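To make the decision rule of Eqs. (1)–(2) concrete, the following minimal Python sketch applies the same maximum-posterior rule to a few hypothetical questionnaire records; the attribute names, the toy data and the Laplace smoothing are illustrative assumptions rather than the survey's actual design.

```python
# Minimal naive-Bayes illustration of Eqs. (1)-(2): choose the class Ci that
# maximizes P(Ci|X) ∝ P(X|Ci)P(Ci), assuming attribute independence.
from collections import Counter, defaultdict

def train(samples, labels):
    """Estimate class priors and per-attribute value counts from categorical data."""
    priors = Counter(labels)                      # Si: number of samples per class
    cond = defaultdict(Counter)                   # (class, attribute index) -> value counts
    for x, c in zip(samples, labels):
        for j, v in enumerate(x):
            cond[(c, j)][v] += 1
    return priors, cond, len(labels)

def classify(x, priors, cond, total):
    """Return the class with the largest posterior (Eq. (2); P(X) omitted)."""
    best_class, best_score = None, -1.0
    for c, s_i in priors.items():
        score = s_i / total                       # P(Ci) = Si / S
        for j, v in enumerate(x):
            counts = cond[(c, j)]
            score *= (counts[v] + 1) / (s_i + len(counts))   # Laplace smoothing
        if score > best_score:
            best_class, best_score = c, score
    return best_class

# Hypothetical records: (age group, city tier) -> satisfaction label
X = [("young", "tier1"), ("young", "tier2"), ("middle", "tier1"), ("old", "tier2")]
y = ["satisfied", "satisfied", "unsatisfied", "unsatisfied"]
model = train(X, y)
print(classify(("young", "tier1"), *model))       # -> "satisfied"
```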

3 Application and Design Experiment of Digital Virtual Art in 3D Film and TA Design Effect
3.1 Production Environment
The hardware configuration used for this system requires a CPU frequency of 2.0 GHz, 8 GB of memory, 512 GB of hard disk space, and a screen resolution of 1024 × 768. The operating systems used are Windows XP SP3, Windows Vista, Windows 7 and Windows 8, and the database is SQL Server 2008 Enterprise Edition. Two groups of experimenters use the same equipment but different techniques to produce 3D film animations with the same content.
3.2 Satisfaction Survey
An audience of 100 viewers, evenly distributed in age between 10 and 50, was randomly selected. The animations produced with the two different techniques in the experiment above were shown to the 100 viewers, who were asked to choose the animation they found more satisfactory [13, 14].

4 Experimental Analysis of Digital Virtual Art in the Design Effect of 3D Film and TA
4.1 Comparison Between Traditional 3D Film and Television Production Technology and Digital Virtual Art
This paper uses traditional 3D film animation production technology and the latest digital virtual art to produce 3D film animations of the same content and the same lengths, namely 30 s, 60 s, 120 s, 300 s and 600 s, and records the time consumed by the two methods. The data are shown in Table 1. It can be clearly seen from Fig. 2 that the time consumed to produce 3D animation with digital virtual art is much lower than that of traditional 3D animation. Digital virtual art can therefore produce the same 3D animation more conveniently, quickly and effectively, and as the technology is updated, the convenience and functionality of digital virtual technology will continue to improve. Digital virtual technology is therefore the best choice for producing 3D animation in the future.


Table 1. Comparison of time used for two animation production techniques

Animation length (s)         30    60    120    300    600
Old (traditional)             5    20     90    240    720
New (digital virtual art)     3     5     10     30     52

Fig. 2. Time comparison of different animation production techniques to produce the same animation

4.2 The Sophistication of the Two Animation Production Techniques
The animations produced with the two different techniques in the experiment above were shown to 100 viewers, who were asked to choose the animation they found more satisfactory. The data are shown in Table 2. From Fig. 3 we can clearly see that the vast majority of the audience are more satisfied with the animation produced by digital virtual art. Digital virtual art therefore not only takes less time to produce animation, but also produces more refined animation that is better liked by the audience.


Table 2. Comparison of audience satisfaction with the two animation production techniques (number of viewers preferring each technique)

Animation length (s)         30    60    120    300    600
Old (traditional)            47    21     18     14     17
New (digital virtual art)    53    79     82     86     83

Fig. 3. Viewer satisfaction with animations made by both techniques (per cent)

5 Conclusions
Digital virtual art uses science and technology as an effective tool to express the beauty of art; it retains the charm of original art, and because of its close relationship with science and technology it has the greatest development potential and vitality in the whole field of art and design. Through the study of animation techniques, hand-painted effects and digitally simulated hand-painted effects, this paper concludes that producing 3D film and TA designs with digital virtual art is the inevitable trend of the digital age. Like traditional materials, digital technology is a production tool whose effect remains under human control, but compared with the traditional way of producing 3D film and TA, digital virtual art is more efficient. In animation creation, the advantages of multiple software packages should be combined. The production method of modeling and texturing in 3D software for rendering also needs to combine 3D software with 2D drawing software; it is not enough simply to build the model and apply materials, and both must be studied in depth so that the work is better presented.


With the continuous progress of science and technology and the continuous updating of software, the means of digital virtual art will become more complete, opening up new possibilities for hand-painted animation creation. It is believed that, as science and technology progress, there will be more and better 3D film and TA works created with digital virtual art.
Acknowledgements. Fund Project: 2020 Fujian Provincial Youth Teacher Education and Research Project (Science and Technology) – Research on the Innovative Application of 3D Mapping Technology in the Background of Cultural and Tourism Integration (JAT200929).

References 1. Essangri, H., Bensbih, S., Majbar, M.A., et al.: 232 “SuturesApp”: a digital application to improve cancer patients understanding of surgical procedures. Br. J. Surg. 109(Supplement_1), znac039.147 (2022) 2. Hala, H.H., Glms, V.: Digital storage of cultural heritage data: Openheritage3D example. Turk. Online J. Des. Art Commun. 11(2), 521–540 (2021) 3. Jankoska, M.: Application CAD methods in 3D clothing design. Tekstilna Industrija 68(4), 31–37 (2020) 4. Eremeeva, P.A.: Features of digital technology application in health care. Bus. Strat. 8(8), 223–227 (2020) 5. Alrikabi, H., Al-Malah, A.R., Hamed, S.I.: The interactive role using the mozabook digital education application and its effect on enhancing the performance of eLearning. Int. J. Emerg. Technol. Learn. (iJET) 15(20), 21–41 (2020) 6. Harahap, R.M., Santosa, I., Wahjudi, D., et al.: STUDY of interiority application in deaf space based lecture space case study: the Center of Art, Design & Language in ITB building. J. Accessibility Des. All 10(2), 229–261 (2020) 7. Poux, F., Valembois, Q., Mattes, C., et al.: Initial user-centered design of a virtual reality heritage system: applications for digital tourism. Remote Sens. 12(16), 2583 (2020) 8. Hunsaker, A.J., Rocke, L.: Street art in the library: transforming spray paint into a digital archive and virtual reality experience. J. Digit. Media Manag. 7(3), 279–291 (2019) 9. Ghani, S.: A systematic literature review: user experience (UX) elements in digital application for virtual museum. Int. J. Adv. Trends Comput. Sci. Eng. 9(3), 2801–2807 (2020) 10. Nortvig, A.M., Petersen, A.K., Helsinghof, H., et al.: Digital expansions of physical learning spaces in practice-based subjects - blended learning in Art and Craft & Design in teacher education. Comput. Educ. 159(4), 104020 (2020) 11. Klorman, E., Hatten, R.S.: A theory of virtual agency for western art music. (Indiana University Press, Bloomington 2018). x + 312 p. 70.00 (hb.). ISBN: 9780253037978. Music Anal. 39(1), 135–141 (2020) 12. Danilenko, L.: Digital technologies in graphic design of UK in early 1980s: visual manifestations and application features. Art Res. Ukraine (19), 68–74 (2019) 13. Bao, W.: The application of intelligent algorithms in the animation design of 3D graphics engines. Int. J. Gaming Comput.-Mediated Simul. 13(2), 26–37 (2021) 14. Chen, X., Jiang, H., Xuan, T.: Designing deployable 3D scissor structures with ball-and-socket joints. Comput. Animation Virtual Worlds 30(1), e1848.1–e1848.13 (2019)

Digital Media Art and Visual Communication Design Method Under Computer Technology Jiaxin Chen(B) Guangdong Technology College, Zhaoqing 526100, Guangdong, China [email protected]

Abstract. With the diversified development of science and technology and social culture, digital technology provides a better platform for the high-speed dissemination of information, and the "screen-based" lifestyle is becoming more and more common in daily life. This lifestyle has brought qualitative changes to the way visual form language is viewed and disseminated. Therefore, to meet people's new aesthetic expectations in such an environment, visual form language must keep pace with the times and be combined with the time dimension. The main purpose of this paper is to study digital media (DM) art and visual communication design based on computer technology. The most essential difference from traditional visual form language is the integration of the time dimension, which gives visual design more quality and value and allows the viewer's senses to be extended indefinitely; this satisfies the diverse processing and dissemination needs of image information in the DM environment and pushes visual design into a new era. Keywords: Computer Technology · Digital Media · Visual Communication · DM Arts

1 Introduction
With the advancement of science and technology, information processing technology, Internet technology and image processing technology, the spiritual core of traditional visual art works has also developed rapidly and become integrated with the innovation and change of "media". Visual design can hardly develop independently of the DM context, and it is not only the art category represented by visual design that conforms to this wave of technology. The change of media brings people more experience related to media time. Media time is not a new concept, but the era in which it has the most significant impact on visual design, and indeed on the concept of artistic time as a whole, is the information age we are living in, and this trend is growing [1, 2].

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 Z. Xu et al. (Eds.): CSIA 2023, LNDECT 173, pp. 87–96, 2023. https://doi.org/10.1007/978-3-031-31775-0_10


Based on this situation, artists have made a huge number of innovative creative attempts, but new problems have arisen one after another: many visual design works appear "weak" when connecting the artist's own level of thinking with the diversified information of the era of integrated media and big data; that is, the port of personal thinking cannot carry the "collective thinking" of people in the big-data era. To address this common problem, Johansson B et al. introduced the time dimension into visual design. Visual formal language that integrates the time dimension is endowed with stronger attraction and expressive power, provides more possibilities for artists at the creative level, and at the same time greatly improves the look and feel of the work [3]. Karyotakis MA likewise argues that ubiquitous Internet access expands the dissemination and coverage of visual design works and, from another perspective, also broadens the vision of their viewers [4]. The main purpose of this paper is to study DM art and visual communication design methods based on computer technology. Starting from the study of plane composition language in the DM context, and taking the relationship between the time dimension and visual formal language as the main line, it analyzes the diversified, three-dimensional and dynamic expression of visual formal language in the current DM context and then grasps the transformed visual formal language as a whole. This helps people form a diversified and perceptual cognition of visual art, makes the information conveyed by traditional formal language more comprehensive, three-dimensional, vivid and accurate, and gives formal language more appeal, contemporaneity and scientific character.

2 Research on DM Art and Visual Communication Design Under Computer Technology
2.1 Introduction to Computer Technology
Computer technology is the core technology of the third technological revolution. The computer is a product of scientific and technological innovation that has penetrated people's daily life; its appearance marks mankind's entry into the information age, in which machinery has begun to take over human intellectual labor. Its fields of application are shown in Fig. 1.

Fig. 1. Application of computer technology (fields: government, education, finance, trade, manufacturing, traffic, medical care, retail)

2.2 The Era of DM
DM is a mixture of digital content and digital technology. It includes not only pure digital content but also the support of various theories, technologies and hardware. It has two main features [5, 6]. (1) On the one hand, DM uses advanced digital technology to improve the quality and efficiency of data collection, production and transmission, and digital technology also facilitates the emergence of all kinds of DM. The expressive form of DM differs from that of traditional media: it can combine words, images, music, commentary, narrative and other forms of expression into a perceptual language. (2) At the same time, as people's demand for information keeps changing, DM allows the public to produce more creative and personalized content. In the era of DM the Internet is ubiquitous, and people's work and life can be freed from the limitations of reality such as time and space. In the DM age, people can draw on the strengths of technology and interact during the process of communication, and with the help of the Internet different audiences can obtain the same content [7, 8].
2.3 The Performance Characteristics of DM Art
DM art takes DM technology and digital information technology as its carrier and support, so its dependence on media is relatively obvious. Compared with traditional art, DM art shows brand-new performance characteristics from the perspectives of creators, the dissemination process, and viewers.


(1) Interactivity and Participation. DM art emphasizes participation, communication and interaction. It has changed the one-way communication of traditional art, provides a brand-new and unique interactive experience, and enhances and expands the ways people experience and participate in art, reshaping the relationships between people and between people and art. In today's technologically advanced world, artists can make full use of various DM devices, seek new visual images, and lead DM art into a new era from multiple perspectives such as vision, hearing and touch. Participation and interaction constitute the unique aesthetic value of DM art.
(2) Technicality. DM art is an art form built on specific technologies, so in the creation of DM art works technology, or technical means, is an extremely important link. The technical character of DM art is more obvious than in any previous art form, and computer technology is its most important technical cornerstone. A deep and comprehensive understanding of computer software and hardware systems, including an accurate grasp of technical parameters, performance and technical procedures, directly affects whether creators can maximize the potential of the computer to create more breathtaking DM art, rather than letting their creation follow the trajectory of existing technology [9, 10].
(3) Editability and Replicability. Once a traditional artwork is completed, its visual effect is difficult to change. With the combined force of various digital technologies, DM art is almost omnipotent in creation: the artistic effects generated by virtual creation can be changed through digital production, reproduction, editing and integration, for example by reproducing the real material world or creating new virtual worlds and symbols. This variability and reproducibility is no longer a realistic imitation of the real world, but a digital simulacrum, a re-creation or transformation of the original material according to the creator's intention.
(4) Collage. DM art is rooted in the large platform of digital technology. After sampling and quantization, information of any form is flattened into a code stream of 0s and 1s of the same form, and this digitized information model unifies disparate traditional information. Therefore, in the creative process of DM art, the mutual collage and fusion of various kinds of information has become a distinctive feature. The collage characteristic of DM art is first manifested in the intersection and combination of different art forms: for many DM art works, the sensory channel that appeals to the viewer is not a single one, but acts simultaneously on multiple sensory systems such as vision, hearing and even touch. This form of collage gives the works strong artistic appeal [11, 12].
2.4 Algorithm Research
(1) Quantum image storage model
Image processing is one of the important branches of information processing and is widely used in many important fields such as weapon guidance, satellite remote sensing and biomedicine.


However, with the continuous development of image acquisition technology, image resolution has increased sharply and the acquisition frequency has accelerated, leading to a rapid expansion of the volume of image data to be processed. Because the computing power of classical computing is growing slowly, the high real-time performance and high accuracy of image processing can no longer be guaranteed. Introducing quantum mechanisms into image processing to improve its computational performance is a feasible way to solve the performance problem of classical image processing. At present this research direction is still in its infancy, and most of the research focuses on how to store image information in quantum states. In this section we introduce and analyze the four existing quantum image storage models. Before doing so, we first give the storage model of classical digital images. Generally speaking, in order to process and display images on a classical computer, a classical digital image is represented, after pixel-value quantization and coordinate discretization, as a two-dimensional pixel matrix:

$$I = \{ f(i, j) \}, \quad i \in \{0, 1, \ldots, N-1\},\ j \in \{0, 1, \ldots, M-1\} \tag{1}$$

Classical image processing therefore refers to the processing and calculation of this two-dimensional pixel matrix.
(2) Quantum matrix model
The quantum matrix model was proposed by S. E. Venegas-Andraca. In this model each pixel of the image is represented by a qubit, so for a classical digital image of size N × M the quantum matrix model is

$$I = \{ |\varphi_{ij}\rangle \}, \quad i \in \{0, 1, \ldots, N-1\},\ j \in \{0, 1, \ldots, M-1\} \tag{2}$$

The color information of a pixel is represented by the phase of the qubit's state at the corresponding position; for example, θij in Eq. (3) represents the color value of the pixel at position (i, j):

$$|\varphi_{ij}\rangle = \cos\theta_{ij}\,|0\rangle + \sin\theta_{ij}\,|1\rangle \tag{3}$$
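As a purely classical illustration of Eqs. (2)–(3), the sketch below maps gray values of a toy image to angles θij and stores the corresponding (cos θ, sin θ) amplitudes; the 2 × 2 image and the linear value-to-angle mapping are assumptions for demonstration only, not circuit-level details of the original model.

```python
# Classical simulation of the quantum matrix model (Eqs. (2)-(3)):
# each pixel (i, j) is mapped to |phi_ij> = cos(theta_ij)|0> + sin(theta_ij)|1>,
# with theta_ij proportional to the normalized gray value. Toy data only.
import math

def encode_image(gray, max_value=255):
    """Return a matrix of (cos(theta), sin(theta)) amplitude pairs, one per pixel."""
    qubits = []
    for row in gray:
        qubit_row = []
        for value in row:
            theta = (value / max_value) * (math.pi / 2)   # map [0, max] -> [0, pi/2]
            qubit_row.append((math.cos(theta), math.sin(theta)))
        qubits.append(qubit_row)
    return qubits

def decode_pixel(amplitudes, max_value=255):
    """Recover the gray value from the stored amplitudes (inverse of encode)."""
    cos_t, sin_t = amplitudes
    theta = math.atan2(sin_t, cos_t)
    return round(theta / (math.pi / 2) * max_value)

image = [[0, 128], [200, 255]]                            # hypothetical 2x2 gray image
encoded = encode_image(image)
print(decode_pixel(encoded[1][0]))                        # -> 200
```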

3 Experimental Research on DM Art and Visual Communication Design Under Computer Technology
This section discusses the influence of the time dimension on the basic elements of visual formal language.
3.1 Static Vision
Traditional visual design finds it difficult to achieve a dynamic visual experience because of the limitations of expression techniques and media.


Nowadays, static vision in the DM context shows more diverse possibilities thanks to the rapid development of photography, image processing and related techniques. For example, under the influence of increasingly advanced photography, virtual three-dimensional technology and material simulation technology, static visual design works that simulate reality can reach a level of fidelity that resembles the real, and with real-time rendering the viewing range of a static picture can change while the rendering of detail hardly degrades: in a game scene the picture can be enlarged continuously, and details such as dew drops on green plants on the ground are loaded in an instant. This kind of surprising picture, mixing the fake and the real, can also be combined with virtual image language to create new and diverse visual styles. The collage-like style formed by this new combination of the virtual and the real is not limited to purely two-dimensional display; in line with the question raised above about the influence of the time dimension on space, this influence turns toward three-dimensional development within static vision and enhances the interaction and experience between viewer and image. Affected by media time and related technologies, works can be saved in a cloud-like time and converted to any tense at any moment, whether a past state represented by an earlier step of the production process or a future state computed by program deduction. To sum up, in the DM context the influence of the time dimension on static visual design is extremely diverse, affecting the artist, the work itself and the viewer alike; it breaks the situation in which development was limited by the original dimensions, and static vision is rapidly developing toward super precision and stronger experience and communication.
3.2 Motion Vision
In the DM context, dynamic design that integrates the time dimension presents the timeline on which the work is located and the ability to trace back and deduce the dynamic evolution process, and the pictures or graphics at each time point can be displayed independently as part of the work, presenting a more diverse visual representation. "Advancements in digital technology play a major role in developing more complex and unique dynamic visual designs. The ready availability of advanced computer animation, video editing and music synthesis software", together with mechanical intelligence, moves the artistic creation process from automation toward intelligence and allows artists to create renditions of the "past or future" without the time-consuming use of expensive specialized simulation equipment. DM art endows visual design with dynamic attributes. The dynamic process itself contains a certain temporality, and the time dimension flatly narrates the past and the future, allowing artists and viewers to select and randomly generate results. Controlling the work as a whole through the time dimension is the artists' original intention, and it makes visual design works show a richer visual expression in the time dimension.


3.3 Other Sensory Auxiliary Visual Linkage
The time dimension can bring the virtual visual space of visual design into real space, so other information in real space that can be recognized beyond vision is naturally included in the artist's creative considerations. Just as texture brings the viewer physical attributes and tactile perception, visual design works can, according to their characteristics, be transformed into vision and then raised to the level of conceptual perception, and other sensory stimuli can be generated through the conversion of thought. For example, the sound of ice cubes striking the wall of a glass creates in the mind two transparent visual images, ice cubes and a glass, together with a cold simulated tactile image, and the sense of smell also produces related reactions. The effect of linking vision with the other senses thus brings visual design works into real space, which is another major breakthrough after the integration of the time dimension. Besides being able to simulate and transform one another in thought, the five senses can also carry a certain amount of information and memory, giving artists more channels for conveying the emotions of their works. The more senses people use when recognizing things, the more comprehensive their cognition of those things, so food that mobilizes multiple senses is instinctively favored, and the same is true of visual design works. Visual design works that link vision with other auxiliary senses therefore stimulate the viewer's sensory functions at multiple levels; rather than stopping at a single visual experience, they integrate multiple senses within the same sensory channel so that the viewer can more realistically and effectively recognize the message and spiritual core conveyed by the work.

4 Experimental Analysis of DM Art and Visual Communication Design Under Computer Technology 4.1 Quantum Image Compression In order to more comprehensively compare the compression ratios of the FRQI and NEQR models, we use 6 images for testing. Among these 6 images, the color value distribution of the first 4 images has a certain degree of regularity, while the last 2 images are completely randomly generated. The specific compression ratio is shown in Table 1: Table 1. Comparison table of compression ratios of FRQI and NEQR models for different images

Model    A          B          C          D          E          F          Average
FRQI     93.75%     92.19%     50%        62.5%      0%         1.56%      50%
         (4/64)     (5/64)     (32/64)    (24/64)    (64/64)    (63/64)
NEQR     97.28%     95%        78.13%     85.23%     45.35%     43.65%     74.11%
         (10/368)   (20/400)   (84/384)   (52/352)   (141/258)  (142/252)
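The percentages in Table 1 are consistent with reading each fraction as the number of quantum gates needed after minimization over the number needed without compression, with compression ratio = 1 − needed/total; under that assumption, the short sketch below reproduces the NEQR row and its 74.11% average.

```python
# Reproduce the NEQR compression ratios of Table 1, assuming each fraction means
# (gates needed after Boolean minimization) / (gates without compression).
neqr_gates = {"A": (10, 368), "B": (20, 400), "C": (84, 384),
              "D": (52, 352), "E": (141, 258), "F": (142, 252)}

ratios = {img: 1 - used / total for img, (used, total) in neqr_gates.items()}
for img, r in ratios.items():
    print(f"{img}: {r:.2%}")                                  # A: 97.28% ... F: 43.65%
print(f"average: {sum(ratios.values()) / len(ratios):.2%}")   # ~74.11%
```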

Fig. 2. Comparison of image compression ratios of FRQI and NEQR models

Figure 2 compares the compression ratios of the FRQI and NEQR quantum image models on the six test images. For these images the NEQR model achieves an average compression rate of about 74.11%, while the FRQI model only reaches 50%, so the newer quantum image model improves the compression rate by a factor of about 1.5. Analyzing the data in detail, when the color distribution of an image is very regular both models achieve good compression efficiency, but as this regularity weakens the advantage of the NEQR model in compression rate becomes increasingly evident. In particular, when the color distribution is completely random, so that pixels in the same region rarely share the same color value, the FRQI model can hardly use the minimum Boolean expression method to compress the image construction process, whereas the NEQR model still achieves 40%–50% compression. From these comparative tests we can draw the qualitative conclusion that, using the minimum Boolean expression method, the NEQR quantum image model achieves better image compression efficiency than the FRQI model.
4.2 Algorithm Testing
To test the algorithm, a classical computer is used to simulate the operation of the QSobel algorithm for image edge extraction. All simulations are coded in C and run on the computer.


Four commonly used test images, Rice, Peppers, Lena and Camera, are used as test data. The specific data are shown in Table 2.
Table 2. Threshold value when extracting features from the test images

Test image    Threshold for the QSobel and Sobel algorithms
Rice          0.071
Peppers       0.087
Lena          0.071
Camera        0.143

Fig. 3. Feature extraction thresholds for the test images

As can be seen from Fig. 3, the feature extraction thresholds for the test images are Rice: 0.071, Peppers: 0.087, Lena: 0.071 and Camera: 0.143. Using the thresholds shown in Fig. 3, all the image edge extraction results can be obtained.
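Because the QSobel tests are simulated classically, the classical step they rely on — a Sobel gradient followed by one of the thresholds in Table 2 — can be sketched as follows. This is a Python illustration, not the authors' C implementation; the normalization to [0, 1], the toy test patch and the reuse of the Rice threshold 0.071 are assumptions.

```python
# Classical Sobel edge extraction with a fixed threshold (illustrative only).
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def sobel_edges(gray, threshold=0.071):
    """Return a binary edge map; `gray` is a 2-D array scaled to [0, 1]."""
    h, w = gray.shape
    mag = np.zeros_like(gray)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = gray[i - 1:i + 2, j - 1:j + 2]
            gx = np.sum(patch * SOBEL_X)            # horizontal gradient
            gy = np.sum(patch * SOBEL_Y)            # vertical gradient
            mag[i, j] = np.hypot(gx, gy)
    mag /= mag.max() if mag.max() > 0 else 1.0       # normalize gradient magnitude
    return mag > threshold                            # threshold as in Table 2

img = np.zeros((5, 5)); img[:, 3:] = 1.0              # toy patch with a vertical step
print(sobel_edges(img).astype(int))
```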


5 Conclusions
Visual design in the DM context can be regarded as an art form that relies on new media technology. It is a comprehensive art category that integrates art, technology and the humanities and covers creation, design, application, communication and other related processes, and it is an inevitable trend of the continuous development and evolution of the information society in the Internet era. The DM produced by combining modern communication, computer and other information technologies with commerce, culture, art and other industries is a carrier for recording, processing, disseminating and obtaining information in binary form, as well as a professional and artistic means of artistic creation. As 5G and image processing technology become more and more advanced, iterative DM technology has broken through the boundaries between different media and promoted the integration of various media forms, so that design elements are combined and presented well and bring the audience a good multi-sensory experience. In this DM era, more and more colorful design expressions are being born.

References 1. Bonnevie, E., Rosenberg, S.D., Goldbarg, J., et al.: Building strong futures: the feasibility of using a targeted DM campaign to improve knowledge about pregnancy and low birthweight among black women. Matern. Child Health J. 25(1), 127–135 (2021). https://doi.org/10.1007/ s10995-020-03068-1 2. Penkowska, G.: Visual communication in the age of DM. Problemy Opieku´nczoWychowawcze 592(7), 29–36 (2020) 3. Johansson, B., Odén, T.: Struggling for the upper hand: news sources and crisis communication in a DM environment. J. Stud. 19(10), 1–18 (2017) 4. Karyotakis, M.A., Panagiotou, N., Antonopoulos, N., et al.: DM framing of the Egyptian Arab spring: comparing Al Jazeera, BBC and China daily. Stud. Media Commun. 5(2), 66–75 (2017) 5. Ni Made, R., Komang Sudirga, I., Gede Yudarta, I.: The appreciation of the innovative Wayang Wong performing arts through DM. Psychology (Savannah, Ga.) 58(1), 5241–5252 (2021) 6. Peicheva, D., Milenkova, V.: Knowledge society and DM literacy: foundations for social inclusion and realization in Bulgarian context. Calitatea Vietii 28(1), 50–74 (2017) 7. Dalope, K.A., Woods, L.J.: DM use in families: theories and strategies for intervention. Child Adolesc. Psychiatr. Clin. N. Am. 27(2), 145–158 (2018) 8. Rahma, R.A., Sucipto, Affriyenni, Y., et al.: Cybergogy as a DM to facilitate the learning style of millennial college students. World J. Educ. Technol. Curr. Issues 13(2), 223–235 (2021) 9. Silveira, P.: Remembering and forgetting on the internet: memory, DM and the temporality of forgiving in contemporary public sphere. Varia História 37(73), 287–321 (2021) 10. Romer, D., Moreno, M.: DM and risks for adolescent substance abuse and problematic gambling. Pediatrics 140(Suppl 2), S102–S106 (2017) 11. Savina, E., Mills, J.L., Atwood, K., et al.: DM and youth: a primer for school psychologists. Contemp. Sch. Psychol. 21(1), 1–12 (2017) 12. Reyna, J., Hanham, J., Meier, P.: The Internet explosion, DM principles and implications to communicate effectively in the digital space. E-Learn. DM 15(1), 36–52 (2018)

Instrumentation Automation Control System Based on HSV Space and Genetic Algorithm Jingchao Yuan(B) North University of China, Taiyuan, Shanxi, China [email protected]

Abstract. In China's large-scale manufacturing industry, accurate measurement and batching guarantee output and quality in industrial production; they are an indispensable part of energy saving, consumption reduction and automatic process control, and an important indicator of industrial production. Rotor scales are favored by many industrial manufacturers because of their high measurement accuracy, high stability and simple structure. The main purpose of this paper is to design and study an instrumentation automation control system based on HSV space and a genetic algorithm. The paper designs a rotor scale measurement control algorithm and a rotor scale control instrument, which effectively improve the performance of the domestic instrument and the measurement accuracy and stability of the rotor scale system. Experiments show that, compared with several traditional meter reading recognition algorithms, the relative error of the single-pointer reading recognition algorithm designed in this paper is significantly lower than that of other single-method approaches. Keywords: HSV space · Genetic Algorithm · Instrumentation Automation · Automated Control

1 Introduction
In the rotor scale metering control system, the control instrument is the key to the continuous, stable and controllable feeding and discharging of the rotor scale. At present, however, domestic measurement control instruments have shortcomings such as simple control algorithms, large volume, low precision, few functions, poor stability and a limited application range. In instrument design, whether the designed instrument is usable and whether each performance index meets the needs of industry is an important research question. To make the designed instrument meet these requirements, the performance and industrial requirements of the rotor scale control instruments used in production were investigated in the market, sorted and summarized, so that the performance of current instruments is understood [1, 2]. In research on image extraction and automatic control of instrumentation, Hashempour introduced a new algorithm for rolling bearing fault diagnosis that uses several new methods to suppress nonlinear noise with the bispectral method [3].
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
Z. Xu et al. (Eds.): CSIA 2023, LNDECT 173, pp. 97–106, 2023. https://doi.org/10.1007/978-3-031-31775-0_11

98

J. Yuan

Therefore, to extract features, feature description algorithms developed for images are used. Goddard designed a low-cost, modular and programmable analysis platform [4], including a 3D-printed autosampler and peristaltic pump, which can independently access, at a lower cost, a number of solution bottles comparable to that of commercial equipment. The main purpose of this paper is to design and study an instrumentation automation control system based on HSV space and a genetic algorithm. The composition, mechanical structure and working principle of the metering control system are studied in detail, the actual causes of flow fluctuation are investigated and analyzed, and two algorithms are designed: a flow metering control algorithm for the discharge port of the rotor scale based on the principle of torque balance, and a flow metering control algorithm for the feed inlet of the rotor scale based on fuzzy PID control. The performance of the algorithms is tested through the instrument control system, and analysis of the test data and charts verifies that the metering control algorithm can effectively reduce material fluctuation and improve the stability of the rotor scale's flow rate.

2 Design and Research of Instrumentation Automation Control System Based on HSV Space and Genetic Algorithm
2.1 Features of the Rotor Scale
From the above study of the working principle of the rotor scale and of its material measurement and quantitative feeding system, the following characteristics of the rotor scale can be obtained [5, 6]:
(1) The structure of the rotor scale is simple and compact and occupies a small construction area, which effectively reduces capital construction investment. In addition, used together with flow valve control, it solves the problem of material running through or flushing; when the kiln condition is poor the system responds quickly and the feed can be increased or decreased in time.
(2) The rotor scale measures the material mass in the compartment directly through the weighing sensor, and the airflow and the type of material in the rotor scale have no effect on the weighing accuracy.
(3) The material in the rotor scale compartment is pushed by the blades, so the transport speed of the material is kept consistent with the angular speed of the rotor in real time, which is beneficial for controlling the material flow and improving the measurement accuracy.
(4) The rotor scale is made of materials that can withstand high pressure and have good explosion-proof properties. Its material measurement and quantitative feeding system is a sealed space, so the material generates no external dust during transportation and no additional exhaust or dust collection work is required, which is conducive to environmental protection [7, 8].


2.2 Overall Design of the Instrument Software System
At present two main languages are used in single-chip microcomputer development: assembly language, which differs for different CPUs, and C, which has some of the advantages of high-level languages. Using C has become mainstream: it occupies few resources like assembly language, offers rich library functions for programming, runs faster than other high-level languages, and is highly portable. The software is therefore designed and developed in C in the Keil programming environment [9, 10].
(1) Structural analysis of the instrument software system
From the analysis of the instrument's functional requirements, the control instrument should provide rotor scale data acquisition, data filtering and processing, flow control, rotor scale verification, communication, and parameter access. To save resources as much as possible for each functional module, the whole software system needs a reasonable logical structure, and the program must be readable and portable. The program is divided into an actively executed part and a passively executed part. The actively executed part includes modules such as system initialization, display refresh, keyboard scanning, data processing and interrupt routines; the passively executed part includes the rotor scale verification module, the parameter access module, and so on.
(2) Instrument functions and operation
The instrument requires staff to operate the rotor scale and call its various functions through the display interface and keyboard input. To let users understand each option easily and get started quickly, the invocation of instrument functions should be straightforward, the operation simple and the display clear, and the option settings should be clearly classified and organized [11, 12].
2.3 Algorithm Research
(1) Principle of the genetic algorithm
A genetic algorithm encodes the solution set of a problem into chromosomes and, based on a fitness criterion, selects the chromosomes of the initial population to generate a new population, retaining only the fitter individuals and discarding those with poor adaptability. In this environment of survival of the fittest, the offspring not only retain the genes of the previous generation but also contain chromosomes better than those of the previous generation. Through iteration of the population, the fittest group is found and the evolution is simulated on this basis; in this cycle of survival of the fittest, the best solution corresponds to the most suitable chromosome. The process is shown in Fig. 1.

Fig. 1. Flow of the genetic algorithm (code, initialize population, assess individual fitness in the population, selection/crossover/variation, evolution)
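A minimal, self-contained sketch of the loop in Fig. 1 applied to a toy bit-string problem is given below; the population size, mutation rate and count-the-ones fitness function are illustrative assumptions and are not the rotor-scale objective used later in the paper.

```python
# Toy genetic algorithm following the flow of Fig. 1: encode, initialize,
# evaluate fitness, select, crossover, mutate, repeat. Parameters are illustrative.
import random

GENES, POP, GENERATIONS, MUT_RATE = 16, 20, 50, 0.02

def fitness(chrom):                     # toy objective: number of 1-bits
    return sum(chrom)

def select(pop):                        # tournament selection of size 2
    a, b = random.sample(pop, 2)
    return a if fitness(a) >= fitness(b) else b

def crossover(p1, p2):                  # single-point crossover
    cut = random.randrange(1, GENES)
    return p1[:cut] + p2[cut:]

def mutate(chrom):                      # flip each gene with probability MUT_RATE
    return [1 - g if random.random() < MUT_RATE else g for g in chrom]

population = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
for _ in range(GENERATIONS):
    population = [mutate(crossover(select(population), select(population)))
                  for _ in range(POP)]
best = max(population, key=fitness)
print(fitness(best), best)              # typically converges toward all ones
```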

(2) Model establishment and simulation research
According to the characteristics of the actual rotor scale measurement control system, the opening of the flow valve is controlled by the instrument outputting a 4–20 mA electrical signal to the receiver of the valve positioner. The magnitude of this signal generates a proportional electromagnetic force, and the pneumatic signal is regulated according to that force to adjust the control valve, thereby controlling the process of adjusting the valve from 0 to 90 degrees. The mathematical model of the valve positioner system can be represented by a first-order inertia plus pure lag model:

$$G(s) = \frac{K}{1 + Ts}\,e^{-Ls} \tag{1}$$

In the formula, K is the gain, T is the time constant of the first-order inertial system, and L is the pure lag time.
(3) Median filter
Compared with the neighborhood averaging method, median filtering does not suffer from problems such as edge blurring; its advantage is that it can not only remove sharp interference noise but also preserve image edge details. A standard one-dimensional median filter is defined as

$$y_k = \mathrm{med}\{x_{k-N}, x_{k-N+1}, \ldots, x_{k+N-1}, x_{k+N}\} \tag{2}$$


where med denotes taking the median of the values in the filter window.
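A minimal sketch of the sliding-window median filter of Eq. (2) follows; the window half-width N, the edge handling by clamping, and the noisy sample signal are illustrative choices.

```python
# 1-D median filter per Eq. (2): y_k = med{x_{k-N}, ..., x_{k+N}}.
from statistics import median

def median_filter(signal, half_width=1):
    out = []
    for k in range(len(signal)):
        lo = max(0, k - half_width)                  # clamp the window at the edges
        hi = min(len(signal), k + half_width + 1)
        out.append(median(signal[lo:hi]))
    return out

noisy = [1.0, 1.1, 9.0, 1.2, 1.0, 1.1]               # impulse noise at index 2
print(median_filter(noisy, half_width=1))            # spike replaced by a neighbor value
```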

3 Experimental Research on Instrumentation Automation Control System Based on HSV Space and Genetic Algorithm 3.1 Selection of Hardware Equipment: (1) Camera Cameras are video input devices connected to PCs and are widely used in video calls, road monitoring, and people monitoring. Because of its unique advantages, gigabit network has been widely used in machine vision and other industries. And some factories often need a long transmission distance due to the complex environment. The traditional USB data cable has a transmission distance of about 5 meters, which cannot meet the needs of the site. Therefore, the camera with a network cable interface is preferred. (2) Instrument to be tested Taking into account the meter reading recognition algorithm in this paper, this paper selects a thermometer with a range of 0-100 °C as the instrument to be tested for experimental testing. (3) Computer The computer is a Lenovo computer equipped with the Windows 10 operating system, and is equipped with the LabVIEW 2015 development environment to provide hardware support for the software. (4) Fixed platform Considering the complex and changeable factory environment, this paper adopts a liftable fixed platform to fix the camera. The platform has good stability, and the camera will not shake seriously after rotating. 3.2 Each Functional Module (1) Instrument type module 1) Select the “Add Device” button, and enter the instrument name in the pop-up dialog box. 2) Fill in the relevant parameters in the corresponding positions of the unit, the upper limit of the range, the lower limit of the range, the upper limit of the safety area and the lower limit of the safety area. 3) Click the “Parameter Storage” button. 4) If you want to continue adding instrument equipment, repeat the above three steps, and the instrument parameter storage process ends. The instrument parameter reading process is completed automatically by the system. When the function switch is turned to the automatic inspection position, the system will automatically read the instrument parameter database without manual selection.


(2) Image acquisition and preprocessing module
The function of this module is to acquire the instrument image through the image acquisition device and preprocess it to prepare for instrument detection. The preprocessing operations were introduced in detail in Sect. 2 and are not repeated here. The main steps are: 1) Set the camera parameters and save them (the last saved settings are read automatically each time the camera restarts). 2) Rotate the camera, aim at the first meter, adjust the focus, and zoom in so that the whole dial fills the screen; this removes interference from surrounding objects and improves the system's speed and accuracy. 3) Click the "Preset" button to record the current position information into the camera; the current meter is set as preset point 1. 4) Enter the time parameters for focusing time, image recognition time and inspection interval, and click the "Parameter Storage" button; each parameter is saved in the corresponding time database. 5) Perform steps 2)–4) in turn for the remaining meters until the location and time information of all meters has been entered.
(3) Feature detection module
The feature detection module extracts the features of the instrument image. Once the preprocessed image is obtained, the image must be analyzed to identify the equations of the two straight lines bounding the pointer, and the midline of these two lines is then taken as the pointer's central axis.
(4) Reading recognition module
This is the last step in identifying the reading of the pointer meter. After the above steps the slope k of the straight line on which the pointer axis lies is obtained; with k, the current reading of the meter can be calculated from the mathematical relationship between the reading and the angle, and the reading is displayed on the user interface.
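The conversion from the detected slope k to a reading can be sketched as follows; the zero-scale and full-scale pointer angles and the 0–100 range are assumed dial parameters for illustration, since the exact dial geometry is not specified in the text.

```python
# Convert the detected pointer-axis slope k into a meter reading by linear
# interpolation between assumed zero-scale and full-scale pointer angles.
import math

def reading_from_slope(k, angle_min=-45.0, angle_max=225.0,
                       value_min=0.0, value_max=100.0):
    """angle_min/angle_max: pointer angles (degrees) at the scale ends (assumed)."""
    angle = math.degrees(math.atan(k))       # slope -> angle in (-90, 90) degrees
    # Note: atan alone cannot distinguish opposite pointer directions; a real
    # system would also use the pointer tip position to resolve the quadrant.
    frac = (angle - angle_min) / (angle_max - angle_min)
    return value_min + frac * (value_max - value_min)

print(round(reading_from_slope(1.0), 1))     # 45-degree pointer -> 33.3 on a 0-100 dial
```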

4 Experiment Analysis of Instrumentation Automation Control System Based on HSV Space and Genetic Algorithm
4.1 Measurement Control Algorithm Performance Test
Using the experimental method described above, the performance of two metering control schemes is tested. The first combines a torque-balance-based metering control algorithm for the discharge port flow with a fuzzy PID metering control algorithm for the feed inlet flow of the rotor scale; the second combines the same torque-balance-based control of the discharge port flow with conventional PID control of the feed inlet flow. The test results are shown in Fig. 2.


Table 1. Key performance indicators

Control type                Overshoot σ (%)    Adjustment time (s)    Steady-state error ess
Conventional PID control    0.65%              10                     0.43
Fuzzy PID control           0.12%              15                     0.06

Fig. 2. Analysis and comparison of key performance indicators

Comparing the adjustment curves of the baffle opening in the two experiments and the key performance indicators in Table 1, it can be concluded that in the rotor scale metering control system, fuzzy PID control handles the complex control objects of this system better than conventional PID control: it has a better control effect, stronger adaptability, shorter response time, smaller overshoot and better dynamic and static characteristics, especially in the nonlinear adjustment of the flow valve, which greatly improves the accuracy and stability of the flow.
4.2 Accuracy Comparison of Meter Identification Methods
The comparison between manual readings and the system's automatic readings is shown in Table 2.


Table 2. Comparison of system identification results and manual estimation results

Numbering    Manual estimation (MPa)    Identification result (MPa)    Absolute error (MPa)    Relative error (%)
(a)          0.45                       0.44                           -0.01                   2.2
(b)          0.22                       0.23                           0.01                    4.5
(c)          0.11                       0.11                           0.00                    0
(d)          0.64                       0.63                           -0.01                   1.5
(e)          0.52                       0.52                           0.00                    0
(f)          0.19                       0.19                           0.00                    0
(g)          0.55                       0.55                           0.00                    0
(h)          0.73                       0.72                           -0.01                   1.3

It can be seen from the results that the system's identification readings are accurate; the relative error is larger when the actual value of the instrument is small, but it still meets the numerical accuracy requirements of the instrument. Because the reading calculation in the algorithm is based on pixel statistics, its accuracy is slightly higher than that of human-eye reading. Table 3 below compares the accuracy of several main meter identification methods: (a) is the recognition error of the algorithm in this paper on a single-pointer analog meter; (b) uses only the central projection method without the dilation operation; (c) is the automatic interpretation method under non-uniform illumination; (c) and (d) both use the Hough transform.
Table 3. Algorithm accuracy comparison

Numbering    Algorithm                      Absolute error    Relative error
(a)          This article                   0.005             1.188
(b)          Central projection             0.005             1.300
(c)          Non-uniform lighting Hough     3.470             4.200
(d)          ORB + improved Hough           0.010             4.300

From the data presented in Fig. 3, the relative error of the single-pointer reading recognition algorithm designed in this paper is significantly lower than that of the other traditional single-method meter reading recognition algorithms.

Fig. 3. Algorithm accuracy analysis (absolute and relative errors of methods (a)–(d))

5 Conclusions
Starting from actual demand, this paper identifies the readings of multiple meters in industrial production, monitors their status, and provides real-time information. Instrumentation plays an important role in industrial production and in the real-time monitoring of various production parameters. Through in-depth research on the rotor scale measurement control system, and in view of its shortcomings such as low measurement accuracy, low stability, the single function of the measurement control instrument, weak anti-interference ability and backward measurement control methods, a metering control algorithm based on the torque balance principle and fuzzy PID control theory is designed; this algorithm effectively improves the metering accuracy and stability of the rotor scale metering control system. A metering control instrument is also designed, which enriches the functions of the metering control instrument and improves its measurement accuracy and overall performance.


References 1. Sweeney, M.W., Kabouris, J.C.: Modeling, instrumentation, automation, and optimization of water resource recovery facilities. Water Environ. Res. 89(10), 1299–1314 (2017) 2. Van Doren, V.J.: Exploring the basic concepts of multivariable control. Control Eng. Cover. Control Instrum. Autom. Syst. Worldwide 64(2), 26–28 (2017) 3. Hashempour, Z., Agahi, H., Mahmoodzadeh, A.: A novel method for fault diagnosis in rolling bearings based on bispectrum signals and combined feature extraction algorithms. SIViP 16(4), 1043–1051 (2022) 4. Goddard, N.J., Gupta, R.: 3D printed analytical platform for automation of fluid manipulation applied to leaky waveguide biosensors. IEEE Trans. Instrum. Measur. 2021, 1 (2021) 5. Charles, B.: Fog computing for industrial automation. Control Eng. Cover. Control Instrum. Autom. Syst. Worldwide 65(3), 28–32 (2018) 6. Roberto, F., Francesco, et al.: Seismic retrofitting of existing RC buildings: a rational selection procedure based on genetic algorithms. Structures 22(C), 310–326 (2019) 7. Nalivaiko, V.I., Ponomareva, M.A.: Fabrication of thin-film axicons with maximum focal lengths. Optoelectron. Instrum. Data Process. 56(4), 393–397 (2021) 8. Arias-Osorio, J., Mora-Esquivel, A.: A solution to the university course timetabling problem using a hybrid method based on genetic algorithms. Dyna (Medellin, Colombia) 87(215), 47–56 (2020) 9. Saraolu, R., Kazankaya, A.F.: Developing an adaptation process for real-coded genetic algorithms. Comput. Syst. Sci. Eng. 35(1), 13–19 (2020) 10. Pen, E.F.: Dynamics of diffraction efficiency of superimposed volume reflection holograms at their simultaneous recording in photopolymer material. Optoelectron. Instrum. Data Process. 56(4), 340–349 (2020). https://doi.org/10.3103/S875669902004010X 11. Lestari, D., Daimunte, M.R., Daimunte, M.R.: Rancang Bangun home automation berbasis ethernet shield Arduino. Al-Fiziya J. Mater. Sci. Geophys. Instrum. Theoret. Phys. 3(1), 21–28 (2020) 12. Quesada, J., Calvo, I., Sancho, J., et al.: A design-oriented engineering course involving interactions with stakeholders. IEEE Trans. Educ. 2020, 1–8 (2020)

Biological Tissue Detection System Based on Improved Optimization Algorithm

Haihua Wang(B)

Xi'an Haitang Vocational College, Xi'an 710038, Shaanxi, China
[email protected]

Abstract. Biological tissues are the basic components that constitute the structure of all living organisms; they are of many types and have complex internal compositions. The effective identification of different types of biological tissue is of great significance for the diagnosis of human diseases and the safe intake of animal-derived foods. Because the elements in different types of biological tissue show both similarities and differences, this paper focuses on how to extract elemental characteristics within the same type of tissue for fine-grained discrimination. After the system is built, and in view of the complexity, specificity and high dimensionality of biological tissue spectral data, machine learning methods are used to extract useful information from the complex spectra, establish a more robust analysis model, and improve the algorithm to raise its analysis performance. Because the surface of biological tissue samples is uneven, cracked and soft at room temperature, the experiment needs to precisely control the position of the laser focus relative to the sample surface, so as to precisely control the laser pulse direction and laser focusing. The final results show that the spectral recognition rate and accuracy of PC1 are 93.61% and 95.65%, respectively, and those of PC2 are 87.56% and 98.46%, respectively. Overall, the spectral recognition rate reaches more than 85%, and the accuracy rate reaches more than 90%.

Keywords: Biological Tissue · Spectral Data · Machine Learning · Laser Pulse

1 Introduction

Biological tissue is the basic component that constitutes the structure of a living body. It is a cellular structure between cells and organs, composed of a large number of cells with similar shapes and functions together with intercellular substance. Through the ordering and combination of different types of biological tissue, structures with a definite shape that perform specific physiological functions are formed, that is, the organs of life, such as limbs and internal organs [1]. Therefore, research on a biological tissue detection system based on an improved optimization algorithm has practical significance. In recent years, many researchers have studied biological tissue detection systems based on improved optimization algorithms and achieved good results. For

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 Z. Xu et al. (Eds.): CSIA 2023, LNDECT 173, pp. 107–116, 2023. https://doi.org/10.1007/978-3-031-31775-0_12


example, Hailin He believes that by detecting, identifying and classifying different types of biological tissue, the adulteration of meat products and the mixed sale of real and fake meat products can be effectively avoided [2]. Vishnuvarthanan notes that the biological tissue detection process is complex and suffers from strong subjectivity and a lack of objective data support, so a new fast, accurate, reliable and objective quantitative inspection method needs to be developed [3]. At present, scholars at home and abroad have carried out a great deal of research on biological tissue detection systems, and these previous theoretical and experimental results provide a basis for the research in this paper. Building on the improved optimization algorithm, this paper analyzes a biological tissue detection system and demonstrates its feasibility through a series of experiments. From the PCA contribution rate analysis of the normal biological tissue spectrum and the detection accuracy analysis, it can be seen that the cumulative interpretation rate and contribution rate of PC1 are 85.3% and 44.5%, respectively, and that in the PCA of the biological tissue spectrum the cumulative interpretation rate is generally higher than the contribution rate.

2 Related Theoretical Overview and Research

2.1 Introduction to Biological Tissue Detection Technology

Biological tissues are divided into muscle tissue, connective tissue, epithelial tissue and nerve tissue. There is high similarity in elemental information between the various tissues, so realizing rapid detection and identification of trace elements across different tissues is of great significance in biological research, and the detection accuracy directly affects human life and health. Because muscle tissue and connective tissue account for a high proportion of the living body, compared with epithelial tissue and nerve tissue, the content of each element in muscle tissue (relevant to animal-derived food intake) and connective tissue (internal organs) has a direct impact on human health [4, 5]. Most work in the biological research field revolves around these two kinds of tissue. Researchers have proposed methods drawn from physics, immunology and molecular biology, mainly the following.

(1) Microscopic detection method. The microscopic detection method detects biological tissue by studying the morphological and structural characteristics of meat and bone meal in animal bone tissue, and can distinguish different animal types, including mammals, fish and poultry [6]. This method does not require a large number of samples, and the experimental instruments are relatively simple to operate, but it must be carried out by researchers with professional knowledge, so the detection results are subjective to a certain degree.


(2) Enzyme-linked immunosorbent assay (ELISA). ELISA links biological tissue samples with enzymes through antibodies and the specific reactions they produce, and quantitatively analyzes biological tissue by studying the color reaction between the enzyme and its substrate. It is currently commonly used to detect veterinary drug residues and illegal drugs in animal-derived foods [7]. ELISA has complex operating steps, and many factors affect the reaction, including the concentration of the reference standard and the solid-phase carrier, which increases the complexity of the experiment to a certain extent.

(3) Polymerase chain reaction (PCR). PCR mainly studies the DNA molecules in biological tissue and qualitatively analyzes the tissue by analyzing its DNA sequences [8]. PCR can amplify DNA fragments present in small amounts up to millions of times in a short time, so that tissue types can be identified and analyzed. However, this method also requires professionally trained researchers, and the detection cycle is long.

(4) Molecular spectrum detection. Common molecular spectrum detection methods include near-infrared spectroscopy and Raman spectroscopy. They collect the vibrational and rotational energy level spectra of molecules in biological tissue, allowing qualitative and quantitative analysis at the molecular structure level and thus identification of different tissues [9]. Molecular spectroscopic detection is non-destructive, but because the molecular structures of different biological tissues are similar, its effectiveness is reduced to a certain extent.

The above methods mainly measure biological tissue at the molecular level. For example, ELISA and PCR perform qualitative analysis of tissue types by detecting protein and DNA molecules. These methods can ensure detection accuracy to a certain extent, but sample preprocessing is cumbersome, professionally trained researchers are usually required, the results remain somewhat subjective, and the detection cycle is long. With the improvement of living and medical conditions, longer life expectancy and changes in lifestyle, defects of body tissues and organs and the loss of system function have become major health threats and leading causes of disease and death. Traditional therapies such as autotransplantation and allotransplantation have improved the survival rate and health status of patients to some extent, but many problems and deficiencies remain. Tissue engineering is an interdisciplinary field spanning biomedicine, cell biology, molecular biology, materials science and design engineering, and is a hot spot in current biomedical research. In order to realize the accurate detection of various parameters in the bioreactor, a micro-current detection system based on the tissue engineering bioreactor is used; its basic structure is shown in Fig. 1.

Fig. 1. The basic structure of the micro-current detection system (bio-information reading, information reception, data storage, intelligent monitoring, signal generation and transmission, database, external hard disk)

Based on tissue engineering, human tissue can be designed and manufactured using seed cells, biomaterials, growth factors and other elements. Tissue engineering rests on a multidisciplinary intersection: by combining cell biology with biomaterials, scaffold materials are used in vitro or in vivo to build special tissues with normal structure and function, so as to repair damaged organs. To give tissue cells good physiological function for repairing defects of the human body, many tissue engineering studies focus on culturing cells to improve cell function and mass transfer ability, and on effectively controlling glucose content, dissolved oxygen content and pH value in the culture. At present, because the monitoring system of the bioreactor is not intelligent enough, judging and replacing the culture medium generally depends on expert experience, which can lead to contamination of tissue cells and a reduction of their biological activity, thereby increasing the cost of culture.


2.2 Machine Learning Algorithms and Their Applications

Machine learning is a multi-domain interdisciplinary subject that has attracted much attention. It integrates probability theory, statistics, optimization theory and other branches of mathematics, and its theoretical basis also involves the life sciences, such as neuroscience and brain science. Machine learning is now being introduced into many disciplines, providing powerful tools for problems such as classification and prediction, fitting and estimation, and pattern recognition. Machine learning studies how computers perform tasks without being explicitly programmed [10]. For simple tasks, algorithms can be programmed to tell the machine how to execute all the steps required to solve the problem at hand; for more complex tasks, designing algorithms by hand can be a huge challenge. Machine learning allows computers to learn from the data they are given, or from the process of performing tasks, so that they can accomplish those tasks. Machine learning methods typically fall into three broad categories, based on the nature of the "signal" or "feedback" available to the learning system:

Supervised learning: the computer is given example inputs and their desired outputs (a labeled dataset), with the goal of learning general rules that map inputs to outputs.

Unsupervised learning: without example inputs paired with desired outputs (an unlabeled dataset), a learning algorithm is used to find structure or rules in the input.

Reinforcement learning: a computer program interacts with a dynamic environment in which it must achieve specific goals [11]. As it searches the problem space, the program receives reward-like feedback and tries to maximize it.

Machine learning and pattern recognition algorithms have come to play an important role in areas such as biological tissue imaging and computational neuroscience, because they can mine large amounts of physiological data and process and analyze observational images. Deep learning is one of the rapidly developing branches of machine learning in recent years; it uses multi-layer artificial neural networks to analyze signals or data automatically. This kind of artificial neural network with many hidden layers is composed of multiple layers of artificial neurons stacked on each other and is called a deep neural network (sometimes also a multilayer perceptron) [12]. The deep convolutional neural network is one of its representative algorithms, and its convolutional layers provide powerful feature extraction ability. The convolution kernels in these layers are randomly initialized and then trained, under supervised or unsupervised machine learning, to perform specific tasks. On this basis, environmental parameters such as temperature, humidity and pressure are identified, and, without affecting cell growth, indicators such as glucose, dissolved oxygen and pH are fused using data fusion technology. The bioreactor discrimination mode is shown in Fig. 2.


Fig. 2. The bioreactor discrimination mode (multi-sensor readings of dissolved oxygen, glucose and pH feed an information unit, followed by data fusion and a discrimination mechanism)
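As a minimal illustration of the supervised-learning setting described above applied to the discrimination mode of Fig. 2, the sketch below fuses glucose, dissolved-oxygen and pH readings into feature vectors and trains a simple classifier to discriminate bioreactor states. The sensor values, class labels and model choice are assumptions for illustration only; the paper does not specify a concrete model.

```python
# Minimal sketch: fuse multi-sensor readings and train a supervised classifier
# to discriminate bioreactor states ("normal" vs. "replace medium").
# All values and class labels here are synthetic, for illustration only.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 400

# Hypothetical fused feature vectors: [glucose (g/L), dissolved O2 (%), pH]
normal = np.column_stack([rng.normal(4.0, 0.3, n), rng.normal(60, 5, n), rng.normal(7.2, 0.1, n)])
degraded = np.column_stack([rng.normal(1.5, 0.4, n), rng.normal(35, 6, n), rng.normal(6.7, 0.15, n)])

X = np.vstack([normal, degraded])
y = np.array([0] * n + [1] * n)  # 0 = normal culture, 1 = medium should be replaced

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```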

3 Experiment and Research

3.1 Experimental Method

Biological tissue detection transforms the features into a normal distribution by calculating the standard score, which is related to the overall sample distribution, so every sample point has an influence on the standardization. Standardization requires the mean and standard deviation of the feature, calculated as

S = \sqrt{ \frac{1}{n-1} \sum_{i=1}^{n} (x_i - \bar{x})^2 }   (1)

x^{*} = \frac{x - \min}{\max - \min}   (2)

In the formulas, x* is the new sequence after scaling, x̄ is the mean of the original feature sequence, and S is its standard deviation. Each standardized feature has a mean of 0 and a variance of 1 and is dimensionless. This feature processing method is suitable for data sets with outliers and noise, and the centering step indirectly reduces the influence of outliers and extreme values.
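A minimal sketch of the two scaling steps in Eqs. (1)–(2) is given below using NumPy; the array values are arbitrary and only illustrate the transforms.

```python
# Minimal sketch of Eqs. (1)-(2): z-score standardization (using the sample
# standard deviation with n-1) and min-max normalization. Values are arbitrary.
import numpy as np

x = np.array([5.1, 4.8, 6.3, 5.9, 7.2, 4.4])

s = np.sqrt(np.sum((x - x.mean()) ** 2) / (x.size - 1))   # Eq. (1)
z = (x - x.mean()) / s                                     # standard score
x_star = (x - x.min()) / (x.max() - x.min())               # Eq. (2)

print("std s =", s)
print("z-scores:", np.round(z, 3))         # mean 0, variance 1
print("min-max:", np.round(x_star, 3))     # values in [0, 1]
```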


3.2 Experimental Requirements Based on the improved optimization algorithm, this experiment studies the biological tissue detection system, and models and identifies the extracted biological tissue spectral information [13, 14]. The generalization of the algorithm and the balance of the recognition ability are studied, the recognition model is gradually optimized, and then the recognition performance, prediction accuracy and other parameters of each model are evaluated by calculating different evaluation indicators, and finally the purpose of recognition of different types of biological tissues is achieved [15].
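As a sketch of how such evaluation indicators might be computed, the snippet below derives a per-class recognition rate (recall) and accuracy (precision) from predicted and true labels via a confusion matrix. The label vectors are invented, and the paper does not name the exact metrics it uses, so this is only one plausible reading.

```python
# Hedged sketch: per-class recognition rate (recall) and accuracy (precision)
# computed from a confusion matrix. The labels below are synthetic examples.
import numpy as np
from sklearn.metrics import confusion_matrix

classes = ["PC1", "PC2", "PC3", "PC4"]
y_true = np.random.default_rng(1).integers(0, 4, 200)
y_pred = y_true.copy()
flip = np.random.default_rng(2).random(200) < 0.1   # ~10% simulated errors
y_pred[flip] = (y_pred[flip] + 1) % 4

cm = confusion_matrix(y_true, y_pred)
for i, name in enumerate(classes):
    recall = cm[i, i] / cm[i, :].sum()      # recognition rate for class i
    precision = cm[i, i] / cm[:, i].sum()   # accuracy for class i
    print(f"{name}: recognition rate {recall:.2%}, accuracy {precision:.2%}")
```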

4 Analysis and Discussion

4.1 Analysis of PCA Contribution Rate of Normal Biological Tissue Spectrum

In the experiment, the above 12 bands were selected as analytical spectral lines to reduce the dimensionality of the normal biological tissue spectral data. The PCA contribution rates are shown in Table 1.

Table 1. PCA contribution rate

Principal component   Cumulative interpretation rate (%)   Contribution rate (%)
PC1                   85.3                                 44.5
PC2                   77.6                                 56.6
PC3                   63.5                                 77.4
PC4                   46.5                                 60.5

From Table 1 and Fig. 3 it can be seen that the cumulative interpretation rate and contribution rate of PC1 are 85.3% and 44.5%, respectively, those of PC2 are 77.6% and 56.6%, those of PC3 are 63.5% and 77.4%, and those of PC4 are 46.5% and 60.5%. In the PCA of the biological tissue spectrum, the cumulative interpretation rate is therefore generally higher than the contribution rate.


Fig. 3. Analysis of PCA contribution rate of normal biological tissue spectrum (bar chart of cumulative interpretation rate and contribution rate for PC1–PC4)
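For reference, the contribution rate and cumulative interpretation rate reported above correspond to the per-component and cumulative explained variance ratios produced by a standard PCA. A minimal sketch on synthetic spectra (not the paper's data) is shown below.

```python
# Hedged sketch: per-component contribution rate (explained variance ratio)
# and cumulative interpretation rate from PCA on synthetic "spectra".
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
spectra = rng.normal(size=(200, 12))             # 200 samples x 12 spectral bands
spectra[:, :3] += rng.normal(size=(200, 1)) * 5  # inject correlated structure

pca = PCA(n_components=4).fit(spectra)
contrib = pca.explained_variance_ratio_          # contribution rate per PC
cumulative = np.cumsum(contrib)                  # cumulative interpretation rate
for i, (c, cum) in enumerate(zip(contrib, cumulative), start=1):
    print(f"PC{i}: contribution {c:.1%}, cumulative {cum:.1%}")
```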

4.2 Accuracy Analysis of Biological Tissue Detection

The PCA contribution rate analysis of the normal biological tissue spectrum shows that the cumulative interpretation rate is generally higher than the contribution rate. The experiment then analyzes the detection accuracy of biological tissue; the results are shown in Fig. 4. The recognition accuracy of the improved optimization algorithm for the spectral data of the four biological tissues exceeds 90%: the spectral recognition rate and accuracy of PC1 are 93.61% and 95.65%, those of PC2 are 87.56% and 98.46%, those of PC3 are 85.69% and 97.74%, and those of PC4 are 88.67% and 91.56%, respectively. The spectral recognition rate therefore reaches more than 85%, and the accuracy rate reaches more than 90%.

Fig. 4. Analysis of the accuracy of biological tissue detection (spectral recognition rate and accuracy for PC1–PC4)

5 Conclusions

In this paper, a biological tissue detection system based on an improved optimization algorithm is studied, and a series of experiments show that the improved optimization algorithm is feasible for biological tissue detection. From the detection accuracy analysis, the cumulative interpretation rate and contribution rate of PC1 are 85.3% and 44.5%, respectively, and in the PCA of the biological tissue spectrum the cumulative interpretation rate is generally higher than the contribution rate. The recognition accuracy of the improved optimization algorithm for the spectral data of the four biological tissues exceeds 90%, and the spectral recognition rate exceeds 85%. How to integrate the experience of medical experts more effectively, further improve feature selection and algorithm performance, and present the results of medical data mining to medical experts for application in clinical practice requires further research.

References

1. Vishnuvarthanan, A., Rajasekaran, M.P., Govindaraj, V., et al.: An automated hybrid approach using clustering and nature inspired optimization technique for improved tumor and tissue segmentation in magnetic resonance brain images. Appl. Soft Comput. 57, 399–426 (2017)
2. Hailin, H.E., Zheng, J., Fangli, Y.U., et al.: Exoskeleton robot gait detection based on improved whale optimization algorithm. J. Comput. Appl. 5(82), 45–53 (2019)
3. Vishnuvarthanan, A., Govindara, V., et al.: An automated hybrid approach using clustering and nature inspired optimization technique for improved tumor and tissue segmentation in magnetic resonance brain images. Appl. Soft Comput. 57, 399–426 (2017)
4. Rossi, R., Fang, M., Zhu, L., et al.: Calculating and comparing codon usage values in rare disease genes highlights codon clustering with disease- and tissue-specific hierarchy. PLoS ONE 2(17), 1–22 (2022)
5. Kerkel, L., Seunarine, K., Henriques, R.N., et al.: Improved reproducibility of diffusion kurtosis imaging using regularized non-linear optimization informed by artificial neural networks. 74(1), 15–30 (2022)
6. Kasturi, S.: Current status of intra-vascular imaging during coronary interventions. World J. Cardiovasc. Dis. 11(8), 31–36 (2021)
7. Kumar, R., Kumar, S., Sengupta, A.: Optimization of bio-impedance techniques-based monitoring system for medical & industrial applications. IETE J. Res. 2, 1–12 (2020)
8. Blochet, B., Bourdieu, L., Gigan, S.: Fast wavefront optimization for focusing through biological tissue (Conference Presentation). SPIE BiOS. Soc. Photo-Optic. Instrum. Eng. (SPIE) Conf. Ser. 2(1), 3–18 (2017)
9. Sadat-Hosseini, M., Arab, M.M., Soltani, M., et al.: Predictive modeling of Persian walnut (Juglans regia L.) in vitro proliferation media using machine learning approaches: a comparative study of ANN, KNN and GEP models. Plant Methods 18(1), 9–11 (2022)
10. Pecorini, G., Chiellini, F., Puppi, D.: Mechanical characterization of additive manufactured polymeric scaffolds for tissue engineering. 2022(1), 52–96 (2022)
11. Kalozoumis, P.G., Marino, M., Carniel, E.L., et al.: Towards the development of a digital twin for endoscopic medical device testing. 40(2), 2069–2081 (2022)
12. Wagner, D.L., Klotzsch, E.: Barring the gates to the battleground: DDR1 promotes immune exclusion in solid tumors. 7(2), 11–78 (2022)
13. Yousif, M., Salim, A., Jummar, W.K.: A robotic path planning by using crow swarm optimization algorithm. Int. J. Math. Sci. Comput. 7(1), 20–25 (2021)
14. Wang, C., Li, J., Rao, H.: Multi-objective grasshopper optimization algorithm based on multi-group and co-evolution. Math. Biosci. Eng. MBE 18(3), 2527–2561 (2021)
15. Duan, Y., Liu, C., Li, S.: Battlefield target grouping by a hybridization of an improved whale optimization algorithm and affinity propagation. IEEE Access 99, 1 (2021)

Application of Computer Network Security Technology in Software Development

Min Xian(B), Xiang Zheng, and Xiaoqin Ye

Robotics Engineering Laboratory for Sichuan Equipment Manufacturing Industry, Deyang 618000, Sichuan, China
[email protected]

Abstract. Computers play an important role in human life and work, and the level of computer education is closely related to a country's overall strength. Software development is an important factor in raising the level of computing, and the application of software security technology is an important way to ensure that computers operate normally. This requires attention to the specific application of security technology in computer software development. The purpose of this paper is to investigate the technical problems of secure computer software development. The work provides an overview of dynamic data reliability images based on interactions within the data during software development. The algorithm can be applied to various stages of a software release to protect the copyright of the software. The tests in this paper show that the average correction rate reaches about 40% on open source projects, and the software can run safely during normal use, with a good ability to resist various cyber-attacks.

Keywords: Computer Security · Software Development · Computer Technology · Cyber Security

1 Introduction

The emergence of software technology has enabled people to develop and deliver products of better quality and cost-effectiveness, and the "software crisis" has been partly solved. But with the continued development and practice of software engineering, it has become clear that the traditional software engineering process does not solve another, more important problem: software security. At present, the main copyright protection problems facing software are its illegal replication and illegal use, that is, software piracy. Software as a product often brings great economic and social benefits, and its value has received increasing attention. Because of the digital nature of software, replication is cheap and efficient, so copyright infringement occurs from time to time. The development of Internet technology has also significantly increased the popularity of digital products, including software, and the resulting copyright security problems are

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 Z. Xu et al. (Eds.): CSIA 2023, LNDECT 173, pp. 117–126, 2023. https://doi.org/10.1007/978-3-031-31775-0_13


also becoming more and more common, and the demand for software security protection is increasing [1, 2]. Many scholars have studied computer security software development technology and achieved good results. For example, Zonglu Sha proposed a software watermarking algorithm based on equation coefficients, which hides the watermark information by building a mapping dictionary between the reciprocals of the equation coefficients and the watermark data; its advantage is that no code is inserted into the software, so program execution is not slowed down [3]. Jianpeng Zhu proposed a fragile software watermarking algorithm for software configuration management, in which the fragility of the watermark is used to detect changes in the software and address defects in traditional software configuration management [4]. Computer security software development technology has thus also begun to attract the attention of the Internet industry in China, and a considerable body of research results is worth learning from. In view of the increased resources currently devoted to primary software development and the limitations of the implementation process in small and medium-sized enterprises, this paper focuses on three key software security technologies used in the coding, testing and configuration stages. It introduces the foundation of security-oriented software life cycle development in detail and puts forward an improved software security strategy for small and medium-sized enterprises, so that software security covers the whole software life cycle, thereby saving security costs and improving the level of software security.

2 Research on the Technology of Computer Security Software Development

2.1 Principles of Computer Software Development

Computer software development is supported by advanced technology, and the basic principles of development must be strictly followed in the actual development process. The key is to ensure quality and reliability, provide technical development details, keep the development concept intelligent, and promote standardization through a standard life cycle. Developers are responsible for evaluating the execution of computer software with technical and engineering support, guaranteeing software quality, and ensuring that the software is developed within the expected constraints on its use. At the current stage of social development, the level of computer technology has gradually improved, while software development has lagged behind and provides insufficient support for community development. Software technology is the foundation of computer software development, so technological innovation is one of its cornerstones; it is related to the realization of many computer functions, such as remote control and network support, and thus lays the foundation for the development of computer networks [5, 6].


2.2 Security Risks in Computer Software Development

(1) Vulnerabilities in the software itself. Once design loopholes appear in complex computer software, they easily affect the performance of the application and may even create security risks, bringing considerable inconvenience to users. In a complex network environment there are multiple security risks, and users must update and upgrade their computer software in time to avoid them. Moreover, during the operation of a computer system, the complex environment can easily damage basic applications and network communication, affecting the security and stability of the system and even causing unexpected losses to users [7].

(2) Information management problems. During the operation of a computer system there are certain security risks in the transmission of information, mainly in the form of active and passive attacks. Active attacks mainly involve falsifying information content and user status. Passive attacks do not destroy the transmitted information but capture it through interception or monitoring, so their presence is better hidden and the relative harm is greater.

(3) Hacking or virus attacks.

Much of what threatens the secure operation of computer network systems and software comes from hackers and viruses. Hackers use computer technology to analyze a computer and its vulnerabilities and then attack it with various network techniques, paralyzing the user's network system and affecting the normal development and use of computer software. Hacking is a global problem, and people in many countries may be attacked and have their personal information stolen, causing undue losses [8, 9]. Computer viruses are highly destructive and infectious: once a computer is infected, the virus can quickly damage the whole machine, adversely affect software development, and even cause serious damage to the entire computer system. Computer network information is a kind of property and must be effectively protected. A network virus can analyze network resources to determine where danger exists and exploit their defects to attack them, damaging their security. By analyzing the interrelationships of the various resources in a computer network information system, the scope and severity of the harm they may suffer can be evaluated. The relationship between the basic elements of risk assessment for a computer network information system is shown in Fig. 1.


Fig. 1. The relationship between various basic elements of risk assessment of a computer network information system (security incidents, computer network information, safety requirements, information database, risk assessment, intelligent diagnosis)

Computer technology and database technology have developed rapidly in the new century, and regional computer networks have been widely used in government, enterprise offices, production control, corporate communications and other areas, with obvious results. Especially in the production processes of government and enterprise units, adopting intelligent, networked, distributed manufacturing and office management systems can improve the automation, informatization and intelligence of production and operation management, greatly improve production efficiency and the refinement of office management, and, by sharing and transmitting data over the network, improve the degree of sharing of operation, management and manufacturing information, which is important for informatizing industrial enterprise decision-making. However, in enterprise office and production control networks, most users are non-professionals, network management is not standardized, and awareness of network security is low, so these networks are easily attacked by hackers, trojans and viruses. At present, owing to regional particularities and the penetration of different factions, the continued development and application of computer


network information systems in the region has brought more and more security problems, such as network attacks, diversified forms of threats, a high number of vulnerabilities in network operating systems, and the rapid spread of viruses.

2.3 Model-Based Security Vulnerability Testing

This paper mainly uses three independent but interrelated models: the specification model of the basic elements of the application (describing the expected behavior of the system), the implementation model (describing the behavior of the relevant implementation elements), and the attack model. When a model does not address a particular problem, it neither invokes nor determines content related to that problem. An incomplete model leaves room for potential threats: when an unexpected behavior or event occurs, test cases generated from the incomplete model may still be compatible with it, so an attack-based test may wrongly report a pass, for example when a certificate has in fact expired. Incomplete models are therefore a major problem in model-based security testing. Completing the model by adding behavioral elements is feasible for some simple details, but adding a large number of concerns may limit the usefulness of model-based tests. By keeping the models separate, the implementation model and the attack model can still expose weaknesses even if the specification is incomplete or not specific [10]. The implementation model is central to developing test cases that identify implementation-level vulnerabilities in the software. Combined with the specification model, it can identify domain-specific problems; combined with the attack model, it can expose local content vulnerabilities. Test cases were designed from combinations of the three models. The combined results are called error conditions, covering specification problems and uncertain assumptions, with error conditions defined by predicate logic. Combining the three models, the task is to determine whether an error changes the implementation model (as defined in the attack model). For any of these predicates, finding a solution means that, at the corresponding point, a negative test case (also known as a counterexample) exposes the change; if the counterexample is applied correctly, the legitimate execution process will fail [11].

2.4 Algorithm Application of Computer Security Software Development Technology

(1) Backpropagation. In a neural network, errors are corrected by back-propagation. Each neuron has a disturbance factor, the bias b, which measures the sensitivity to error correction: when b is relatively large the sensitivity is higher, and when it is small the sensitivity of the neural network is relatively low. This can be expressed as

\frac{\partial C}{\partial b} = \frac{\partial C}{\partial u} \cdot \frac{\partial u}{\partial b} = \delta   (1)
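As a minimal numeric illustration of Eq. (1) (not part of the original paper), the snippet below checks that the analytic bias gradient of a single sigmoid neuron with a squared-error cost matches a finite-difference estimate; all values are arbitrary assumptions.

```python
# Hedged sketch: verify d(cost)/d(bias) = delta for one sigmoid neuron,
# where u = w*x + b and cost C = 0.5*(a - y)^2. Values are arbitrary.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w, b, x, y = 0.8, -0.3, 1.5, 1.0

def cost(bias):
    return 0.5 * (sigmoid(w * x + bias) - y) ** 2

u = w * x + b
a = sigmoid(u)
delta = (a - y) * a * (1 - a)          # analytic dC/db (Eq. 1), since du/db = 1

eps = 1e-6
numeric = (cost(b + eps) - cost(b - eps)) / (2 * eps)   # finite-difference check
print("analytic delta:", delta, " numeric:", numeric)
```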


(2) Fuzzy mathematics. In fuzzy mathematics, the closeness of two fuzzy sets is measured using the inner and outer products of the sets: the larger the inner product and the smaller the outer product, the closer the two fuzzy sets are. Taken together, the "lattice closeness" is used to characterize the closeness of two fuzzy sets:

\sigma_i(P_i, V_i) = \frac{1}{2} \left[ P_i \cdot V_i + (1 - P_i \times V_i) \right]   (2)

In general, software security vulnerability mining based on fuzzy metrics abstractly interprets and models the software system and its attributes to construct a fuzzy set of software execution paths, and at the same time combines various vulnerability attributes to generate a fuzzy set of information that may contain unknown vulnerabilities; comparing the lattice closeness of the two fuzzy sets completes the vulnerability mining of the software under test [12].
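A minimal sketch of Eq. (2) follows, under the common assumption that the inner product of two fuzzy membership vectors is the max over element-wise minima and the outer product is the min over element-wise maxima; the membership values are invented for illustration.

```python
# Hedged sketch of Eq. (2): lattice closeness of two fuzzy sets, with the
# inner product taken as max over element-wise min and the outer product as
# min over element-wise max. Membership vectors below are arbitrary examples.
import numpy as np

P = np.array([0.2, 0.7, 0.9, 0.4])   # fuzzy set of observed execution paths
V = np.array([0.3, 0.6, 0.8, 0.5])   # fuzzy set of vulnerability attributes

inner = np.max(np.minimum(P, V))       # P . V
outer = np.min(np.maximum(P, V))       # P x V

sigma = 0.5 * (inner + (1.0 - outer))  # lattice closeness, Eq. (2)
print("inner:", inner, "outer:", outer, "lattice closeness:", round(sigma, 3))
```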

In risk assessment, the qualitative risk assessment of a computer network information system is a common approach. It classifies risks based on experience of risk analysis, the basic principles of risk assessment and expert judgment, and rates them according to assessment criteria and similar cases, considering the level of risk and the degree of impact. Because a computer network information system contains many risk factors, such as security threats, and these factors change dynamically, evaluating them is complex and difficult and requires many factors to be considered together. Some risks can be quantified, while others cannot be quantified or are difficult to quantify. In a complex computer network information system, qualitative analysis is the premise and basis of quantitative research; it can become an important basis for quantitative work and so better reflect the objective laws governing the development of the system. Qualitative analysis is the key to risk assessment and the basis of the assessment results and conclusions.

3 Design Experiment Based on Computer Security Software Development Technology

3.1 Software Test

The test system is required to install a browser and IIS to support Web service applications; Windows Server integrates XML Web services into fast, reliable and secure network solutions. FindBugs tests are then conducted on open source projects already available on the Internet.

3.2 Practical Application

Tests of client performance, server performance and network performance are reasonably combined to analyze and predict the performance bottlenecks of the whole system. Network performance testing focuses on monitoring mature network application performance, network analysis, and network application


performance prediction using advanced automation technologies. Server performance testing monitors the system itself or the commands it issues, with the aim of comprehensively monitoring the performance of the database and servers. Client performance testing detects the performance of the client application; its primary task is to eliminate the bugs found by the system.

4 Experimental Analysis Based on Computer Security Software Development Technology

4.1 FindBugs Testing of Several Open-Source Projects

As the development mode of open source software projects becomes increasingly standardized, many well-known projects continually use static analysis tools to find and repair problems in the code during development, so few defects remain. In this paper, the struts2-core, ibatis, nutz, dwr and jdom packages were scanned with FindBugs, but the results were too few for optimization and ranking to be meaningful. Therefore, prefuse, webgoat, jfreechart and sandmark were finally selected as the subjects of the experimental analysis; the number of defects reported by FindBugs for them ranges from 180 to 950 (Table 1).

Table 1. Accuracy comparison of the project results

Project name   Misdescription   Serious   Accuracy rate
Prefuse        10               5         45.99%
jFreeChart     6                1         28.66%
WebGoat        80               6         38.60%
SandMark       27               15        44.20%

Figure 2 shows the initial results of scanning these projects with FindBugs. Here, false positives are cases where the scanning tool's analysis is wrong or the reported problem does not actually cause an issue in the context of the code. The accuracy rate in the chart is the proportion of software defects worth modifying, that is, the share of moderate and severe defects in the total number.


Fig. 2. The accuracy of the results analyzed by FindBugs for several open source projects (misdescription count, serious count and accuracy rate per project)

4.2 Development and Application of the Actual Security System

A small system was developed during the trial of the platform, and the software security development and management platform managed the whole development process. The experimental data are shown in Table 2, which records the number of security issues discovered and fixed at each stage of development.

Table 2. Comparison of the number of security problems solved at each stage

Stage               Test result 1   Test result 2
Design phase        2               3
Coding stage        51              49
Test phase          27              25
Maintenance phase   2               1


As shown in Fig. 3, when more security problems are found and fixed in the earlier stages of the software life cycle, security is greatly improved across the whole development process, and the security of the final system is ultimately better. At the same time, through the software security development and management platform, project managers maintain a good understanding of the security situation of software projects in each period.

Fig. 3. The number of security issues discovered and fixed by the software security development and management platform at each stage of development (design, coding, test and maintenance phases, for two test runs)

5 Conclusions

This paper studies software security assurance methods within a software development methodology. It applies risk management knowledge from security engineering to security problems in software development, draws on existing theory and practice for typical secure software development life cycles, applies a series of mature ideas from the modeling and security-model communities, and focuses attention on the technical assurance of computer security software development throughout the development process. In addition, for support tooling, existing process support tools and pattern library construction ideas are referenced to form a development tool suited to the proposed process, providing partial automation support. Software is an essential part of how people use computers for work and education, so software security development technology will continue to advance in the future.


Acknowledgment. Supported by Sichuan Science and Technology Program, NO: 22ZDYF2965.

References

1. David, A., Xavier, B., Dolors, C., et al.: Non-functional requirements in model-driven development of service-oriented architectures. Sci. Comput. Program. 168(DEC.15), 18–37 (2018)
2. Zhao, Y., Yang, Z., Ma, D.: A survey on formal specification and verification of separation kernels. Front. Comput. Sci. China 11(4), 585–607 (2017)
3. Gardner, Z., Philippa, S.: Verified trustworthy software systems. Philos. Trans. R. Soc. A Math. Phys. Eng. Sci. 375(2104), 20150408 (2017)
4. Ibrahim, J., Zhu, J., Hanif, S., Shafiq, S., et al.: Emerging trends in software testing tools & methodologies: a review. Int. J. Comput. Sci. Inf. Secur. 17(7), 108–112 (2019)
5. Ateoullar, D., Mishra, A.: Automation testing tools: a comparative view. Int. J. Inf. Comput. Secur. 12(4), 63–76 (2020)
6. Agrawal, A., Zarour, M., Alenezi, M., et al.: Security durability assessment through fuzzy analytic hierarchy process. PeerJ Comput. Sci. 5(5), e215 (2019)
7. Jafari, A.J., Rasoolzadegan, A.: Security patterns: a systematic mapping study. J. Comput. Lang. 56, 100938 (2020)
8. Vidas, T., Larsen, P., Okhravi, H., et al.: Changing the game of software security. IEEE Secur. Priv. 16(2), 10–11 (2018)
9. Belinda, B.I., Emmanuel, A.A., Solomon, N., et al.: Evaluating software quality attributes using analytic hierarchy process (AHP). Int. J. Adv. Comput. Sci. Appl. 12(3), 165–173 (2021)
10. Meldrum, S., Licorish, S.A., Owen, C.A., et al.: Understanding stack overflow code quality: a recommendation of caution. Sci. Comput. Program. 199, 102516 (2020)
11. Jj, A., Jyl, B., Ys, A.: A data type inference method based on long short-term memory by improved feature for weakness analysis in binary code. Fut. Gener. Comput. Syst. 100, 1044–1052 (2019)
12. Devi, T.R., Rama, B.: Software reusability development through NFL approach for identifying security based inner relationships of affecting factors. Int. J. Elect. Comput. Eng. 10(1), 333 (2020)

The Application of Financial Technology in the Intelligent Management of Credit Risk Under the Background of Big Data

Tingting Xie(B)

Aba Teachers University, Aba, Sichuan, China
[email protected]

Abstract. The application of big data (BD) technology in financial technology is becoming more and more extensive, and its development prospects continue to widen. The transformation of the traditional financial industry toward modern information technology is an important challenge currently facing our country. This article analyzes and designs the intelligent management of credit risk from the perspective of financial technology, mainly through case analysis and data mining techniques. Systematic testing finds that my country's credit risk is worth further study: over the past four years the banks' indicators have fluctuated steadily, while the loss rate has been as high as 2%, which puts an intelligent management system on the agenda.

Keywords: Big Data · Background · Financial Technology · Credit Risk · Intelligent Management

1 Introduction

BD has become an important information resource in today's society, and it is an inevitable stage in the development of modern information technology. Since my country's economy entered the "new normal" period, the government has placed higher requirements on enterprises, individuals and other stakeholders: on the one hand, to improve macro-control and the ability to deal with risks, and on the other hand, to improve management and reduce risk costs so as to achieve the best overall benefit. There are many studies on the application of financial technology to the intelligent management of credit risk in the context of BD. For example, some scholars note that computing and technology accompany the continuous innovation and development of the financial investment industry [1]. Others analyze the necessity of BD analysis technology in financial risk control [2]. Still others point out that the rapid development of financial technologies such as BD and artificial intelligence provides strong technical support for commercial banks to strengthen credit risk management [3]. Therefore, this article also works against the background of BD and applies

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 Z. Xu et al. (Eds.): CSIA 2023, LNDECT 173, pp. 127–136, 2023. https://doi.org/10.1007/978-3-031-31775-0_14


data mining technology to research financial technology and its credit risk management system. The article first studies the value and application of data in credit risk management, then analyzes the problems of applying BD credit investigation to financial credit risk management, then examines the application of data mining to risk management, and finally explains the application of financial technology to credit risk management. In the third part, the risk management system is designed and test experiments are conducted to obtain results.

2 The Application of Financial Technology in the Intelligent Management of Credit Risk Under the Background of BD

2.1 The Value and Application of Data in Credit Risk Management

In the past, BD was defined mainly by large volume and many types; today the concept has developed further. Excellent data analysis skills are the key to innovation in today's financial markets, and information related to credit risk management, fund management, transaction execution, security and fraud prevention has gradually become one of the core competitive strengths of commercial banks [4, 5].

(1) The value of data in credit risk management. First, data insight has become a key factor in loan approval decisions. Data can better reflect whether a debtor has the willingness and ability to repay; it forms an accurate portrait of the debtor and effectively alleviates information asymmetry. Secondly, data provides a basis for accounting for the risk management costs of commercial banks. Hedging risk with earnings is the main principle of commercial bank product innovation, and product pricing can only be established once the loss given default (LGD) is calculated, which requires the support of historical data. Third, data is an important parameter for the risk management of commercial banks. The particularity of loans means that data plays a special role here: loan amount, loan term and interest rate are all important elements of credit, and their calculation must be data-driven. The risk management activities of commercial banks are carried out in an orderly manner on the basis of these data [6, 7].

(2) The application of data to credit risk management. The first aspect is risk identification. The core of risk management is resolving information asymmetry and identifying risks; BD leaves false information nowhere to hide, and when customers conceal information, banks can quickly expose it through big data. Data is used to identify risky potential customers and assign negative credit, so the first task is to avoid risk [8, 9]. The second is risk assessment. With the development of BD, banks must apply BD analysis to credit products in order to maximize the return on investment of pricing under controllable risk; in market control, risk is kept within a controllable range while the framework and pricing are set appropriately to achieve the best interest rate [10, 11].

2 The Application of Financial Technology in the Intelligent Management of Credit Risk Under the Background of BD 2.1 The Value and Application of Data in Credit Risk Management In the past, BD was defined as large capacity and multiple types. Today, the concept of BD has been developed. Excellent data analysis skills are the key to innovation in today’s financial markets. Information related to credit risk management, fund management, transaction execution, security and fraud prevention has gradually become one of the core competitiveness of commercial banks [4, 5]. (1) The value of data in credit risk management First, data insight has become a key factor in loan approval decisions. The data can better reflect whether the debtor has the willingness and ability to repay. The data forms an accurate portrait of the debtor, effectively solving the problem of information asymmetry. Secondly, the data provides a basis for the accounting of risk management costs of commercial banks. Earnings hedging risk is the main principle of commercial bank product innovation. Only by calculating the LGD can product pricing be established. The calculation of LGD requires the support of historical data. Third, data is an important parameter for risk management of commercial banks. The particularity of loans means that data information plays a special role in the risk management of commercial banks. Loan amount, loan term and interest rate are also important factors of credit. The calculation of loan terms, interest rates, amounts, etc. must be data-driven. The risk management activities of commercial banks are carried out in an orderly manner on the basis of these data and information [6, 7]. (2) Application of data to credit risk management One of them is to identify risks. The core of risk management is to resolve information asymmetry and identify risks. BD makes false information nowhere to hide. When customers conceal information, banks will quickly expose it through big information. Use data to identify risky potential customers and give negative credit, so the first thing to do is to avoid risks [8, 9]. The second is risk assessment. With the development of BD, the banking industry must apply BD analysis in credit products in order to maximize the return on investment of pricing under controllable risks. In market control, the risk is within a controllable range and the framework and pricing are appropriate to achieve the best interest rate setting [10, 11].


The third is risk quantification. Risk quantification is the central theme of commercial bank risk management and the watershed between traditional and modern risk management [12]. The fourth is risk early warning. In the context of BD, the banking industry has accumulated a large amount of data, and the external Internet also holds a large amount of information; by integrating this information, an automated early warning system can replace a lot of manual work, and BD insight provides opportunities for early risk warning.

2.2 Problems of BD Credit Investigation in Financial Credit Risk Management

(1) The breadth and depth of data collection cannot meet the requirements of risk management. On the one hand, collection is incompatible with the traditional central bank credit investigation system and lacks the basic and credit data of financial institutions. On the other hand, most investigation services collect data on the basis of their own business platforms, and without profitability there is a lack of complementarity and cross-verification between data sources.

(2) BD credit products fail to meet the requirements of risk management, including the comprehensive mapping of personal credit, the universality and independence of credit ratings, and a digital, quantitative description of borrower credit, that is, a score that can be applied in any application scenario.

(3) The impact of using BD for credit investigation needs further evaluation. Most of the basic data required for BD credit investigation is obtained online, with the exception of inactive lenders.

(4) BD credit investigation scenarios urgently need to be expanded. Since data collection and processing standards for the BD credit investigation industry have not yet been established, the credit investigation systems built by different agencies vary in everything from data collection standards to indicator weights and risk model parameters. Different institutions use different cross-validation rules for credit products, which makes them neither comparable nor complete.

(5) Laws and regulations for BD credit investigation are lacking. In the current era of BD and the Internet of Everything, personal data concerns data protection and spreads across platforms. The processing performed by information service companies creates personal virtual portraits, which matter in the real world, so personal data has clear attributes of a resource. The application of virtual portraits can help solve market failures in the economy and society and bring convenience to social services.


2.3 Application of Data Mining in Risk Management

Data mining is a research process supported by databases. It searches through and organizes large amounts of data with the goal of finding valuable hidden events and information, using prediction, artificial intelligence, statistics and other techniques to analyze the data in depth, extract it scientifically, and discover the knowledge it contains. The process of extracting knowledge from data, also called knowledge discovery in databases (KDD), is an important part of data mining. The specific process is shown in Fig. 1.

Fig. 1. KDD detailed process diagram (source data, extraction, data cleaning, pretreatment, modeling against a database, and visualization)

The knowledge discovery in databases (KDD) process is an important process for identifying new, valid, potentially useful and ultimately understandable patterns. Data mining algorithms include the K-Means clustering algorithm for clustering, principal component analysis for dimensionality reduction, and hierarchical clustering.
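Before the formal steps and Eqs. (1)–(3) below, a minimal NumPy sketch of the K-Means procedure is given here for orientation; the data, the number of clusters K and the convergence threshold are arbitrary illustrative choices, not values from the paper.

```python
# Hedged sketch of K-Means following the steps formalized below:
# assign each sample to its nearest center, recompute centers, and stop when
# the error-sum-of-squares change falls below a threshold. Data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(5, 1, (100, 2))])
K, theta = 2, 1e-6

centers = X[rng.choice(len(X), K, replace=False)]   # step (1): random initial centers
prev_sse = np.inf
while True:
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    labels = d.argmin(axis=1)                        # step (2): nearest-center assignment
    centers = np.array([X[labels == l].mean(axis=0) for l in range(K)])  # step (3)
    sse = sum(((X[labels == l] - centers[l]) ** 2).sum() for l in range(K))
    if abs(prev_sse - sse) < theta:                  # judgment: |Lz(Q+1) - Lz(Q)| < theta
        break
    prev_sse = sse

print("final SSE:", round(sse, 2), " cluster sizes:", np.bincount(labels))
```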


The framework of the K-Means clustering algorithm is as follows:

(1) Given m data samples, set Q = 1 and randomly select K initial cluster centers.

(2) Compute the distance D between each data sample and each cluster center. If the condition

D[a_q, C_k(Q)] = \min\{ D[a_q, C_l(Q)],\ l = 1, 2, \cdots, K \}   (1)

is satisfied, then a_q \in R_k.

(3) Let Q = Q + 1 and calculate the new cluster centers

C_l(Q) = \frac{1}{m_l} \sum_{q=1}^{m_l} a_q, \quad l = 1, 2, \cdots, K   (2)

together with the value of the error sum of squares criterion function

L_z = \sum_{l=1}^{K} \sum_{k=1}^{m_l} \| a_k - C_l(Q) \|^2   (3)

Judgment: if |L_z(Q + 1) − L_z(Q)| < θ, the algorithm terminates; otherwise, set Q = Q + 1 and return to step (2). The hierarchical clustering method decomposes the given data hierarchically until a certain condition is met: the data objects are first formed into a clustered tree structure and then decomposed from top to bottom according to the hierarchy. Cluster analysis is applied to specific data mining cases mainly through such clustering methods, classifying a large amount of apparently irregular data into several data sets with different properties.

2.4 Fintech

Fintech refers to a new type of high-tech industry based on the Internet that uses cloud computing, information management and other technologies, with the help of a cloud platform, to achieve financial communication. Compared with the traditional industrial economy it has obvious characteristics. First, in product research and development, the defects of traditional development and management modes have led to opaque transactions between enterprises, high transaction costs and serious information distortion. Second, there is a lack of scientific rigor and authority in analyzing and applying BD. Third, with regard to credit risk, my country has not yet formed a unified, standardized credit rating system. Fintech can not only provide customers with more convenient and safer services but also monitor the quality and efficiency of products or services more deeply. It has two major characteristics: intelligence and integration. Intelligence means realizing credit risk control by analyzing and integrating the original or statistical information generated in a large number of business processes.


In the context of BD, financial technology can segment customers according to their different needs. Under the traditional enterprise management model, relevant information is obtained mainly through manual review and analysis. With the advent of the Internet era, information technology has become increasingly developed and widespread, people's requirements for network platform security have continued to rise, and innovative fintech applications with data mining and cloud computing as their main tools have emerged. One example is the intelligent analysis system developed on the basis of BD technology. The system is built jointly by banks, other financial institutions and Internet companies. After a large amount of customer information has been collected and analyzed in this information system, personalized services can be provided for different needs; products that match both the customers' requirements and the institutions' own business needs can also be designed for these customer groups, achieving a win-win outcome for customers and financial institutions. The system can analyze large volumes of data to obtain information on enterprise risk management, financial status and so on, and use this information to help banks build a reasonable and effective credit evaluation system. Another example is intelligent management designed and implemented on a cloud computing platform. This system is a comprehensive fintech model established for target users by analyzing, evaluating and serving large customer groups and providing them with personalized services; it mainly involves using different types of data resources, such as storing the statistics and analysis results of an enterprise's internal financial indicators.

The traditional financing method is mainly bank loans, but this single financing channel slows enterprise development. With the application and development of BD technology in credit risk management, its advantages have gradually become prominent. Fintech achieves information sharing by integrating, analyzing and mining large amounts of data on Internet platforms and by using advanced information technologies such as cloud computing. At present, some companies in China have begun to combine large databases with social networks to collect and use customer transaction data; for example, Ali's "microfinance" can provide enterprises with additional financing channels. While fintech companies still rely mainly on bank loans for financing, BD technology provides them with a new platform: through it, cloud computing and related methods can be used to integrate and analyze customer transaction information and produce corresponding credit scores, so that risk control can be combined with the development of the enterprise itself. For fintech in the BD context, financing mainly follows the indirect credit model of commercial banks, that is, transactions are conducted through third-party payment platforms. Issuing bonds and convertible preferred stock has become the mainstream form abroad, and some companies in China have begun to try bonds or convertible preferred stock to solve liquidity problems. The credit risk assessment process is shown in Fig. 2.

At present, banks' credit evaluation and analysis of customers is based on the financial data of a company's past development: by examining the development of the enterprise in recent years and selecting corresponding evaluation indicators for analysis, a rating can be assigned. However, this method is not effective in determining a company's debt-servicing capacity.

Fig. 2. The credit risk assessment (elements: personnel, customer visits, quantitative analysis, hub, customer credit, authorization control, firewall, relevant personnel)

While a combination of factors must be taken into account when evaluating credit, the most important is the business's operating condition, especially its cash flow. From this point of view, the company's financial situation in the previous operating stage can be used as a reference, but it is not the only criterion.

3 Realization of the Credit Risk Management System

3.1 Operating Environment

Hardware environment — Database server: IBM P6 620; Document server: IBM X3660; Application server: IBM X3660; Web server: high-end PC (above PI18000, memory above 2 GB) or minicomputer; Other: network card.
Software environment — Microsoft Windows 2013 Server, Microsoft Windows NT Server, Microsoft Internet Explorer 9.0; Operating system: IBM AIX; Database: IBM DB2 v10.01.


Other software: Tomcat 5.6.25, the Java virtual machine. Development software: MyEclipse.

3.2 Technical Architecture

The system is based on an enterprise-level J2EE application architecture. The front end is mainly built on the JSPX framework following the MVC model, and the back end is a business-rules platform centered on rule analysis and parallel computing. JSPX is a pure Java rapid development framework that also provides a large number of web components. JSPX includes Jspx Bean and Jspx Web controls, which correspond to the object-oriented representation of database tables or business entities. The framework supports automatic generation of POJOs; these POJOs, also called data objects, are mainly used for business operations or page display. The web components provided by the framework include trees, menus, tables, Ajax components, repackaged standard HTML, Jspx pages, and Jspx page controllers. For business purposes, a multivariate analysis method is used that considers the internal connections and interactions between variables from the two aspects of business scope and business rules.

3.3 System Test of Risk Measurement Results

For the credit risk measurement model, the system calculates the value of each credit risk indicator (default probability, default rate, risk level, listing rate, non-performing loan migration rate, etc.) on a monthly basis, which makes it convenient for bank managers to monitor and understand the risk situation of the whole bank in a timely manner, track risk trends, and reduce the occurrence of credit risk.
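The paper does not give formulas for these indicators; purely as a hedged illustration, the monthly aggregation of two of them from a hypothetical loan-record table (all column names below are assumptions, not the system's actual schema) could look like this in Python:

```python
import pandas as pd

def monthly_credit_indicators(loans: pd.DataFrame) -> pd.DataFrame:
    """loans: one row per loan with hypothetical columns
    'month', 'balance', 'is_default', 'is_non_performing' (booleans)."""
    g = loans.assign(
        default_bal=loans["balance"] * loans["is_default"],
        npl_bal=loans["balance"] * loans["is_non_performing"],
    ).groupby("month").sum(numeric_only=True)
    return pd.DataFrame({
        "default_rate": g["default_bal"] / g["balance"],        # defaulted share of balance
        "non_performing_rate": g["npl_bal"] / g["balance"],     # non-performing share of balance
    })
```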

4 Analysis of System Statistics

4.1 All Indicator Statistics of the Whole Bank

Using the system designed in this paper, data statistics were compiled for each branch of the bank. The changes in the bank's non-performing (defective) rate, annual loss rate, default rate and expected loss rate are shown in Table 1:

Table 1. Statistics of All Indicators of the Whole Bank

Year   Defective rate   Annual loss rate   Default rate   Expected loss rate
2018   2.56             1.897              2.016          0.854
2019   4.735            2.15               2.41           0.912
2020   5.916            2.207              2.845          1.012
2021   4.154            1.97               2.986          0.901
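Purely as an illustrative aid (not part of the original system), the Table 1 series can be loaded and inspected with a few lines of Python:

```python
import pandas as pd

# Values transcribed from Table 1 (whole-bank indicators, in percent)
indicators = pd.DataFrame(
    {
        "defective_rate": [2.56, 4.735, 5.916, 4.154],
        "annual_loss_rate": [1.897, 2.15, 2.207, 1.97],
        "default_rate": [2.016, 2.41, 2.845, 2.986],
        "expected_loss_rate": [0.854, 0.912, 1.012, 0.901],
    },
    index=[2018, 2019, 2020, 2021],
)

print(indicators)          # the indicator levels plotted in Fig. 3
print(indicators.diff())   # year-over-year changes, useful for trend monitoring
```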

Fig. 3. Statistical Results of Bank Indicators (bar chart of the Table 1 indicators by year)

It can be seen from Fig. 3 that the expected loss rate is consistently lower than the annual loss rate and fell back in 2021, indicating that the bank's expected loss is declining. In addition, the defective rate and the default rate fluctuated considerably, indicating that defaults on the bank's credit assets are increasing.

5 Conclusion

In the context of BD, financial technology differs greatly from traditional management methods. China's credit risk evaluation standards are not yet perfect and lack unified standardization and scientific rigor; moreover, credit risk management still suffers from an incomplete information disclosure system. Fintech innovation in the BD context arises from the interpenetration and integration of the traditional financial industry with Internet technology, information and communication, and other related fields to produce new information technology. The statistical results of the system designed in this paper show that China's bank credit situation is not optimistic. For this reason, it is necessary to strengthen risk management and early warning in financial technology.


Measurement and Prediction of Carbon Sequestration Capacity Based on Random Forest Algorithm

Jiachun Li1, Jiawei Fu2(B), and Justin Wright3

1 Department of Economics and Management, Northeast Electric Power University, Jilin, Jilin, China
2 Department of Electrical Engineering, Northeast Electric Power University, Jilin, Jilin, China
[email protected]
3 Achieve Xiamen International School, Xiamen, Fujian, China

Abstract. Global climate change characterized by climate warming has become an indisputable fact. With an increase in the number of disasters and observations of extreme weather, climate change poses a serious threat to the survival and development of human society. Strategies such as "carbon peak" and "carbon neutrality" have become common globally for coping with the impending changes, and effective methods of carbon sequestration will provide a feasible path for a proper double-carbon strategy. In this study, random forest algorithms were used to analyze climate characteristics, and the data were then preprocessed by tree age and forest survival state to obtain the forest carbon sequestration capacity under different survival modes. Based on forest cover data for Beijing, the carbon sequestration capacity of the city over the next 100 years and the optimal strategies that forest managers should adopt are predicted, providing a reference for effective forest planting and management planning. We strive to develop forest management plans that do not violate natural conditions, balance felling and planting, and maximize social value.

Keywords: Carbon Sequestration · Random Forest Algorithm · Time Prediction Model

1 Introduction

Forests are not only the main body of terrestrial ecosystems but also one of the largest carbon stocks. In the biogeochemical processes of the geosphere and biosphere, forests play the important roles of "buffer" and "regulator". Atmospheric composition monitoring, measurement of CO2 flux and model simulation studies show that the carbon density of forest ecosystems is higher than that of other vegetation ecosystems; forests account for 46% of the world's terrestrial carbon reserves and are the largest carbon sink on earth [1]. Their carbon exchange has an important impact on the global carbon balance. Forest vegetation concentrates the global terrestrial biomass, and its carbon storage is a key factor in studying the absorption and emission of CO2 between forest ecosystems and the atmosphere.


The function of forests in fixing atmospheric CO2 and maintaining the global carbon balance has become a hot research topic in many countries [2]. Carbon sequestration, including physical sequestration and biological sequestration, is a measure to increase the carbon content of carbon pools other than the atmosphere. Using plant photosynthesis to improve an ecosystem's ability to absorb and store carbon by controlling carbon flux is called biosequestration [3]. Plants convert carbon dioxide from the atmosphere into carbohydrates through photosynthesis and fix it in the form of organic carbon in plants or soil, thus reducing the concentration of carbon dioxide in the atmosphere and slowing the trend of global warming [4]. This study identifies transition points applicable to all forest management plans by developing a carbon sequestration model that considers the overall value of the forest, the scope of the decision model's management plan, and the conditions leading to deforestation. The aim is to determine how much carbon dioxide a forest and its products can sequester over time, and thereby to identify the forest management plan that is most effective in absorbing carbon dioxide while enabling forest managers to balance forest management and social interests so as to make the best use of forests.

2 Research Design In order to establish a more perfect forest management plan, we have taken into consideration the natural environmental factors such as climate, soil, tree age, tree species and human factors such as national policies, economic development and social needs. Through analysis of several countries, we have developed models for addressing forest carbon sinks in a variety of ways that are relatively complete and can help forest managers make the right decisions. Forest carbon sequestration is a complex interdisciplinary issue with international significance [5]. Related issues involve politics, economics, culture, ecology, geology and many other disciplines [2]. It is impossible to simulate every possible situation. So, we made a number of assumptions and simplifications, every one of which was reasonable. The data we collect from online databases is accurate, reliable and consistent with each other. Since our data sources consist of reputable websites from international organizations, it is reasonable to assume that their data is correct. In the model verification, we ignored the influence of national index data on weight calculation and results, so we made the following assumptions: (1) there is no significant national policy to change the weight of our indicators; (2) The country is an integrated unit, regardless of regional differences within the country. The above two assumptions are the premise of our further research.


3 An Empirical Study on Carbon Sequestration In order to explore the combination model that can obtain the maximum carbon sequestration, data of leaf litter, live carbon, large tree, small tree, dead wood, dead carbon and liana were selected. Firstly, a large number of selected data are preprocessed. The pretreated sample is shown in Fig. 1.

Fig. 1. Pretreatment sample

Python was used to define functions to process and eliminate outliers to ensure the accuracy and reliability of data. The processed sample is shown in Fig. 2.

Fig. 2. Sample after treatment
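The paper does not state which outlier rule its Python preprocessing functions applied; as a hedged illustration, a simple interquartile-range filter of the kind commonly used at this step (column names are assumptions) might look as follows:

```python
import pandas as pd

def drop_outliers_iqr(df: pd.DataFrame, columns, k: float = 1.5) -> pd.DataFrame:
    """Remove rows whose value in any of `columns` lies outside
    [Q1 - k*IQR, Q3 + k*IQR]; k = 1.5 is the conventional choice."""
    mask = pd.Series(True, index=df.index)
    for col in columns:
        q1, q3 = df[col].quantile([0.25, 0.75])
        iqr = q3 - q1
        mask &= df[col].between(q1 - k * iqr, q3 + k * iqr)
    return df[mask]

# Example usage with the variables named in the paper (column names assumed):
# clean = drop_outliers_iqr(raw, ["leaflitter", "living_carbon", "big_tree",
#                                 "small_tree", "deadwood", "liana"])
```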

Then, the random forest algorithm is used to analyze the characteristics of the processed data, establish the random forest regression, and rank the importance of each variable. The sorting result is shown in Fig. 3.


Fig. 3. Order of importance of factors affecting carbon sequestration
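As an illustrative sketch only (the hyper-parameters and column names are assumptions, not settings reported in the paper), the random forest regression and its importance ranking could be reproduced with scikit-learn roughly as follows:

```python
from sklearn.ensemble import RandomForestRegressor

def rank_importance(X, y, n_trees: int = 500, seed: int = 0):
    """Fit a random forest regression and return features sorted by importance.
    X is a pandas DataFrame of predictors, y the target carbon measure."""
    model = RandomForestRegressor(n_estimators=n_trees, random_state=seed)
    model.fit(X, y)
    ranking = sorted(zip(X.columns, model.feature_importances_),
                     key=lambda pair: pair[1], reverse=True)
    return model, ranking

# e.g. X = clean[["leaflitter", "living_carbon", "big_tree",
#                 "small_tree", "deadwood", "liana"]], y = clean["total_carbon"]
```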

Finally, the sorting results are shown in Table 1. Leaf litter has the highest carbon storage capacity, followed by living carbon and then by large and small trees. Felling trees appropriately and turning them into wood products can therefore be better for sequestration; conversely, simply pursuing more forest is not necessarily the best way to increase carbon sequestration.

Table 1. The sorting results

Factor          Importance
Leaf litter     0.329930
Living carbon   0.243382
Big tree        0.199265
Small tree      0.101846
Deadwood        0.049974
Liana           0.032448

4 Application of the Model As shown in Fig. 4, Beijing, the capital of the People’s Republic of China, had a forest coverage rate of 43.8 percent in 2020. Affected by the warm temperate continental monsoon climate, the zonal vegetation in Beijing is warm temperate deciduous broad-leaved forest. Due to the complex terrain and diverse ecological environment, the vegetation types in Beijing are rich and diverse, with obvious vertical distribution rules [6].


Fig. 4. Forest vegetation coverage in Beijing

Table 2. Forest-related data of Beijing

Time   Per hectare (m/ha)   Forest resource carbon storage (10,000 t)   Tree carbon storage (10,000 t)   Forest carbon storage (10,000 t)   Carbon density (t/ha)
1976   17.18                571.8                                        235.12                           101.88                             5.09
1981   22.41                510.8                                        210.11                           79.28                              5.51
1988   28.81                683.14                                       280.77                           202.44                             9.4
1993   30.67                1185.37                                      486.69                           238.99                             8.95
1998   33.21                1292.05                                      529.74                           325.76                             9.66
2003   35.87                1362.84                                      558.77                           399.33                             10.54
2008   29.20                1495.99                                      613.36                           493.33                             9.49
2013   33.22                2117.83                                      868.32                           677.03                             11.53
2018   39.21                3476.51                                      1425.38                          1157.75                            16.12

Note: The data are from the National Earth System Science Data Center.


Using time series analysis of forest carbon storage and carbon density data from 1976 to 2018 in Beijing, we can predict the carbon sequestration for 100 years after 2022. The data is shown in Table 2. First, to determine how much carbon dioxide a forest is expected to sequester over a period of time, we use a traditional time series model to predict the future carbon sequestration of the forest, as shown in Fig. 5.

Fig. 5. Forest-related data analysis in Beijing
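The paper calls this a "traditional time series model" without naming it; purely as an illustration, one conventional choice would be an ARIMA model fitted to the forest carbon storage series of Table 2 (the model order below is an assumption, and the irregular survey intervals are treated as equally spaced steps, which is a simplification):

```python
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Forest carbon storage (ten thousand tons) from Table 2
years = [1976, 1981, 1988, 1993, 1998, 2003, 2008, 2013, 2018]
storage = [101.88, 79.28, 202.44, 238.99, 325.76, 399.33, 493.33, 677.03, 1157.75]
series = pd.Series(storage, index=pd.Index(years, name="year"))

# Fit a simple ARIMA model and extrapolate a few survey periods ahead
model = ARIMA(series, order=(1, 1, 0)).fit()
print(model.forecast(steps=5))  # rough projection beyond 2018
```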

Using traditional time series prediction, we can project the trend of forest sequestration in Beijing over the next 100 years, in which forest carbon sequestration decreases. By analyzing the data, we speculate that this may be due to forest fire. Fire is not uncommon in forest systems as a mechanism of self-renewal, but in the dry season, with little rain and high winds, fires spread so fast that they become almost impossible to contain. As the area of destroyed forest grows, the trees hold less water and the land becomes drier and drier; the drier the land, the less fire-resistant it is and the greater the risk of another fire, so the forest becomes trapped in a vicious cycle. If these conditions recur frequently, severe forest fires can break out, mainly because of hot, dry weather. Every year in July and August, during the dry season, large numbers of trees are cleared by burning trunks, branches and leaves to make room for pastures or farms. In addition, disorderly mining, road building and housing construction caused by a lack of effective management are another cause of large-scale deforestation. An investigation of a large amount of data also found that in recent years tree-planting efforts have not been effective, survival rates are low, and enthusiasm for tree planting has not been fully mobilized. Because the vast majority of existing forest rights from the forest-rights restructuring are assigned to individuals, unplanned and indiscriminate felling increases year by year, and the amount felled each year is far greater than the amount of newly planted trees. This is the main reason why the forest coverage rate declines rather than increases. To this end, the following measures should be taken: 1. Carefully investigate, verify and truthfully report the existing forest coverage rate in each region; 2. Formulate detailed measures and plans, and assign tasks and specific completion dates for improving forest coverage;


3. According to the level of forest coverage, determine the specific number of trees that may be cut down in each region; 4. Improve the assessment system and strengthen administrative promotion.

5 Scalability and Adaptability Analysis

In this part, we test the scalability and adaptability of the model by substituting data for the Amazon rainforest and the Greater Khingan Mountains forest. Located on the Amazon plain of South America, the Amazon rainforest covers 5.5 million square kilometers [7]. It spans eight countries, accounts for one third of the world's rainforest and 20% of the global forest area, and is the largest and most diverse tropical rainforest in the world [8]. It produces at least 20 percent of the world's oxygen, has an annual average temperature of 27–29 °C, and is known as the "lungs of the earth" and the "green heart". The Greater Khingan Mountains form the western part of the Khingan range, located in the northeast of the Inner Mongolia Autonomous Region and the northwest of Heilongjiang Province [9]. It is the best-preserved and largest virgin forest in China and the watershed between the Inner Mongolia Plateau and the Songliao Plain. First, we substitute the data into the model to obtain the trend of the indicators in recent years and find that it is consistent with the real-world situation. Data for the Amazon rainforest are highly volatile because of frequent forest fires, with large fluctuations from year to year, whereas the Greater Khingan Mountains forest, a well-preserved primeval forest in China, has more stable indicators that remain at a high level with only slight fluctuation between successive years. We chose to make a time prediction for the Greater Khingan Mountains and applied the time prediction model after optimizing it in terms of fairness and sustainability, optimizing the current forest management plan by changing the weight of each indicator. The trend of the fitted curve is also consistent with the current development of each forest. This shows that our model adapts well to the current system and demonstrates its stability, so it can be used in practical forest management plans.

6 Conclusions In this study, based on the principle of dynamic efficiency of natural resource development and utilization, we theoretically designed a set of management schemes for forest harvesting and regeneration to maximize the return per unit area of forest land and ensure that the return does not decrease with time. The model was applied to various forests. We can apply this idea to the analysis of forestry economic policy, and propose to establish the system of national purchase of ecological forest, forest tree compensation system and forestry subsidy system on the basis of the existing system of cutting prohibition and cutting restriction of ecological forest. We should encourage nongovernmental capital to participate in ecological forestry construction by setting reasonable purchase price, compensation and subsidy amount, and expand the comprehensive benefits of forestry economy and ecology through scientific management, so as to achieve more fruitful results in the process of building “green world”.


References 1. Li, Y., Mei, B., Linhares-Juvenal, T.: The economic contribution of the world’s forest sector. Forest Policy Econ. 100, 236–253 (2019) 2. Ganamé, M., Bayen, P., Dimobe, K., et al.: Aboveground biomass allocation, additive biomass and carbon sequestration models for Pterocarpus erinaceus Poir. Burkina Faso. Heliyon. 6(4), e03805 (2020) 3. Nzunda, E.F.: Forest management plan for implementation of a pilot REDD+ project for masito community forest reserve, Kigoma, Tanzania for 2012–2017: management directives. Int. J. Res. Granthaalayah 9(5), 30–40 (2021) 4. Ma, X.,Xiong, K., Zhang, Y., Lai, J., Zhang, S., Ji, C.: Advances and prospects of carbon storage in forest ecosystems. J. Northwest Forest. Univ. 34(05), 62–72 (2019) 5. Nzunda, E.F.: Forest management plan for implementation of a pilot REDD+ project for masito community forest reserve, Kigoma, Tanzania for 2012-2017: general description. Asian J. Environ. Ecol. 2021, 10–20 (2021) 6. Boukili, V., Bebber, D.P., Mortimer, T., et al.: Assessing the performance of urban forest carbon sequestration models using direct measurements of tree growth. Urban Forest. Urban Green. S161886671530145X (2017) 7. da Cunha, C.T., Pereira de, C.J.O., Gustavo, S., et al.: The continuous timber production over cutting cycles in the Brazilian Amazon depends on volumes of species not harvested in previous cuts Forest Ecology and Management. 490 (2021) 8. Zhao, Z., Guo, Y., Zhu, F., et al.: Prediction of the impact of climate change on fast-growing timber trees in China. Forest Ecol. Manage. 501 (2021) 9. Sasaki, N., Asner,G.P., Pan, Y., et al.: Sustainable management of tropical forests can reduce carbon emissions and stabilize timber production. Front. Environ. Sci. 4 (2016)

Microbial Growth Rate Identification and Optimization System Based on Matrix Decomposition Algorithm

Yuanchang Jin1(B) and Yufeng Li2

1 College of Biology and Agriculture (College of Food Science and Technology), Zunyi Normal College, Zunyi 563006, Guizhou, China
[email protected]
2 College of Agriculture and Food Engineering, Baise University, Baise 533000, Guangxi, China

Abstract. Microorganisms are a class of organisms with simple structures and the widest distribution in nature; viruses and bacteria are the two most common. They are major players in the material cycle of nature and can have a huge impact on human activities. Microbial growth is affected not only by the external environment, such as temperature, dissolved oxygen and pH, but also by the growth regulation mechanisms of the microorganism itself, such as the regulation of cell division, the regulation of essential gene expression and programmed death. The purpose of this paper is to study a microbial growth rate identification and optimization system based on a matrix factorization algorithm. A robust system with control functions and parameters is constructed that best describes the established microbial development process and highlights some of its properties. The system identification model takes the least-squares error between the experimental data and the computed values as its performance index. According to the developmental characteristics of microorganisms, a discretization method is used to transform the system identification model into a parameter identification problem. Finally, a matrix factorization algorithm is used to obtain a satisfactory solution to the system identification model. Experiments show that the detection speed of the algorithm constructed in this paper is about 4–5 times that of traditional automatic detection, and the detection accuracy reaches about 99%.

Keywords: Matrix Decomposition · Microorganisms · Growth Rate · Fermentation

Y. Jin and Y. Li — Co-first authors.

1 Introduction

With the development of modern science, especially biochemistry, molecular biology, genetics, cell biology and other sciences, human research on macromolecules such as nucleic acids, proteins, polysaccharides, and thin films has reached new heights and


depths. Microbial growth rate detection methods vary and take different forms. According to the degree of automation, they can be divided into manual detection, semi-automatic detection and automatic detection. Test methods include the dry weight method, hyphal length measurement, turbidity measurement and microbial morphology. These traditional methods are based on manual operating procedures and require staff to master microscopy, dilution of bacterial suspensions, plating of bacterial liquid, centrifugation, acid–base titration, solution preparation and other techniques; the samples obtained are small, dead and live bacteria cannot be distinguished, bacterial contamination is easily introduced, and the medium content is reduced, so the detection results are often inaccurate, lack robustness, and cannot be applied online to the production process. Therefore, a microbial growth rate identification and optimization system based on a matrix factorization algorithm is one of the recent research directions in biological growth rate detection [1, 2]. Many scholars have studied microbial growth rate identification and optimization systems based on matrix decomposition algorithms and achieved good results. In one line of work, an optimal control model is established with the maximum product objective at the terminal moment over the optimization space, the existence of the optimal control solution is proved for most of the models, the finite difference technique is used, and, based on control parameterization and particle swarm optimization, a global optimization algorithm for solving this problem is given together with the corresponding numerical results [3]. Against the background of continuous microbial fermentation, Gerzen N studied the stability and optimal control of the system, balanced the lifespan and stability conditions of the system, and established a constrained optimal control model whose solvability was proved [4]. This article provides an overview of the established microbial growth process and describes some aspects of the system from the viewpoint of control and robust design. The system identification model takes the least-squares error between the experimental data and the computed values as its performance index. According to the growth characteristics of microorganisms, a discretization method is used to transform the system identification model into a parameter identification problem. Finally, a matrix factorization algorithm is used to obtain a satisfactory solution to the system identification model.

2 Research on Microbial Growth Rate Identification and Optimization System Based on Matrix Decomposition Algorithm

2.1 The Regulation of the External Environment on the Growth of Microorganisms

Environmental factors affecting the growth of microbial cells mainly include temperature, pH value and oxygen distribution. Appropriate environmental conditions can promote cell growth and increase yield, while harsh environmental conditions have a corresponding adverse effect on microbial cells: they inhibit cell division and growth, affect protein production, cause degeneration and accumulation, destroy the cytoskeleton, and reduce cell proliferation, ultimately leading to cell


death. For example, during microbial oxygen consumption, when oxygen is reduced to water, certain toxic compounds such as hydrogen peroxide and peroxide anions are released, which can directly affect the expression and production of important genes in metabolic pathways, adversely affecting the organism and leading to cell death. Aerobic microorganisms, however, contain enzymes that reduce these products, such as catalase, peroxidase and superoxide dismutase, so their cells are not destroyed [5, 6].

2.2 Optimization Model and Algorithm

Microbial identification is essentially based on the repeatable response of microbial populations to environmental factors: past observational and experimental data are used to predict microbial behaviour in the food environment through mathematical modelling, and the experimental results are used to confirm that the model error is no greater than the laboratory-measured value. The growth process of microorganisms can be roughly divided into three stages: the adaptation period, the logarithmic growth period and the stationary period. In the adaptation period the growth rate is very slow because the cells are still adjusting to the substrate; in the logarithmic growth period the growth rate gradually increases [7, 8], and after reaching its maximum it gradually decreases; in the stationary period the growth rate is close to zero because of the inhibitory effect of various products. According to this feature, three time points 0 < t_1 < t_2 < t_3 < t_f are inserted into the fermentation time interval [0, t_f], dividing [0, t_f] into four periods, where t_1 is the end of the adaptation period, t_2 is the time at which the growth rate in the logarithmic growth period is largest, and t_3 is the beginning of the stationary period. In each period u(t) is approximated by a linear function, namely

$$u(t) = \begin{cases} a_1 (t - t_1), & 0 \le t \le t_1 \\ a_2 (t - t_1), & t_1 \le t < t_2 \\ a_3 (t - t_3), & t_2 \le t < t_3 \\ a_4 (t_3 - t), & t_3 \le t \le t_f \end{cases} \tag{1}$$

in which

$$a_3 = \frac{a_2 (t_2 - t_1)}{t_2 - t_3} \tag{2}$$
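As a small illustration of Eqs. (1)–(2) (not code from the paper), the piecewise-linear control u(t) can be evaluated as follows; the two expressions agree at the interval boundaries, so the half-open intervals used below do not change the values:

```python
def feed_rate(t, t1, t2, t3, tf, a1, a2, a4):
    """Piecewise-linear control u(t) from Eq. (1); a3 follows from Eq. (2)."""
    a3 = a2 * (t2 - t1) / (t2 - t3)          # Eq. (2)
    if 0 <= t < t1:
        return a1 * (t - t1)                  # adaptation period
    elif t1 <= t < t2:
        return a2 * (t - t1)                  # logarithmic growth, accelerating
    elif t2 <= t < t3:
        return a3 * (t - t3)                  # logarithmic growth, slowing
    elif t3 <= t <= tf:
        return a4 * (t3 - t)                  # stationary period
    raise ValueError("t must lie in [0, tf]")
```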

2.3 Application of Matrix Decomposition in Identification and Optimization of Microbial Growth Rate

With the development of the Internet, data and information are becoming more and more abundant, and big data scenarios have become part of daily life. In this situation, excessively high data dimensionality makes computation more complicated and increases the time and difficulty of image data processing. To solve this problem, data dimensionality reduction methods have gradually emerged in the field of pattern recognition. As a common data dimensionality reduction method, matrix


factorization can decompose the original feature matrix into the product of several different matrices. Some of these matrices retain most of the information of the original feature matrix, and because they are of lower dimension they can be used as new matrices for further processing of the data. This application makes it possible to detect the number of microorganisms quickly and thereby identify the growth rate of microorganisms [9, 10].

2.4 Microbial Growth Model

Microbial growth models can be divided into deterministic models and statistical models. A deterministic model is a functional relationship between factors obtained through mechanistic analysis; a statistical model is a relationship between variables obtained by statistical analysis directly from standard test or measurement data. In this paper, the system identification model is established by taking the least-squares error between the test data and the computed values as the performance index. According to the developmental characteristics of microorganisms, the system identification model is transformed into a parameter identification problem using a discretization method, and an improved matrix factoring algorithm is finally constructed to solve this problem. The statistical results show that the growth of microorganisms is slowed during the adaptation period by the inhibitory effect of glycerol; after the acclimation period the microorganisms enter the logarithmic growth phase and the growth rate accelerates, although at this stage, as the product continues to accumulate, the microorganisms are under the dual inhibitory effect of the substrate (glycerol) and the product; after entering the stationary phase, growth is mainly inhibited by the product and the rate remains essentially constant [11, 12].

2.5 Matrix Decomposition Model

Given a transcription data matrix $F_1 \in R^{N \times G^{(1)}}$ and a microbial growth data matrix $F_2 \in R^{N \times G^{(2)}}$: each row of $F_1$ represents the expression levels of all genes in one cell line and each column the transcription level of one gene in all cell lines [13]; each row of $F_2$ represents the sensitivity levels of all drugs in one cell line and each column the response to one drug in all cell lines. The traditional iPaD-type algorithm decomposes $F_1$ into a pathway activity level matrix $f \in R^{N \times K}$ and a gene–pathway association matrix $B_1$, and decomposes $F_2$ into the same matrix $f$ and a drug–pathway association matrix $B_2$. The model can therefore be described as

$$F_1 = f B_1 + E^{(1)}, \qquad F_2 = f B_2 + E^{(2)} \tag{3}$$

where $E^{(1)}$ and $E^{(2)}$ are error matrices. The model can be estimated by solving the optimization problem

$$\min_{f, B_1, B_2} \; \| F_1 - f B_1 \|^2 + \| F_2 - f B_2 \|^2 \tag{4}$$

A microorganism is usually associated with only a few pathways, so the matrix $B_2$ is sparse. In addition, constraints are imposed on the matrix $f$ to ensure the identifiability of the model.
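The paper does not give the solver it uses for Eq. (4); as a hedged sketch, a plain alternating least-squares scheme (ignoring the sparsity and identifiability constraints mentioned above) could be written as:

```python
import numpy as np

def factorize(F1, F2, K, n_iter=100, seed=0):
    """Alternating least squares for Eq. (4):
    minimize ||F1 - f B1||^2 + ||F2 - f B2||^2 over f, B1, B2."""
    rng = np.random.default_rng(seed)
    f = rng.standard_normal((F1.shape[0], K))
    for _ in range(n_iter):
        B1 = np.linalg.lstsq(f, F1, rcond=None)[0]          # update B1 with f fixed
        B2 = np.linalg.lstsq(f, F2, rcond=None)[0]          # update B2 with f fixed
        F = np.hstack([F1, F2])                             # stack both data blocks
        B = np.hstack([B1, B2])
        f = np.linalg.lstsq(B.T, F.T, rcond=None)[0].T      # update f with B1, B2 fixed
    return f, B1, B2
```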


3 Design Experiment of the Microbial Growth Rate Identification and Optimization System Based on the Matrix Decomposition Algorithm

3.1 Introduction to the Development Environment

The matrix factorization classifier and the microbial growth rate identification system were developed in the open-source Code::Blocks environment using MinGW, a GCC-based compiler [14, 15]. The compiler has high performance and a small footprint. Since Code::Blocks is designed as open source, the code can be used across platforms, and according to its requirements the code in this paper can run on any platform. The development environment for this project runs under Windows, but the recompiled code can run under Linux. The library functions used in this paper include OpenCV, an excellent software library developed by Intel for graphics, image processing and pattern recognition, with particular strengths in pattern recognition. Based on the advantages of these two tools, the basic framework of the system, including preprocessing, feature extraction and optimization, and classifier recognition, can be built under Windows 7 using Code::Blocks and OpenCV [16].

3.2 Experimental Design

This paper mainly adopts comparative experiments and objective experiments. The microbial growth rate identification and optimization system based on the matrix decomposition algorithm constructed in this paper is compared with a traditional automatic identification system on the natural growth of the same microorganism [17, 18]. The objective experiments mainly verify the accuracy of recognition and the speed of the experiments [19].

3.3 Microbial Growth Rate Model

The microbial growth rate detection unit is mainly composed of a CCD sensor, a stepper motor, a ball screw, and an ATmega128 single-chip microcomputer. The measured parameters are mainly the number and growth of microorganisms within a certain period of time. During measurement, the CCD sensor captures the growth information of the microorganisms, the microcomputer processes the information on the number of microorganisms and drives the stepper motor, which in turn drives the ball screw, and finally the growth rate of the microorganisms is determined by calculating the step count of the stepper motor. Because the CCD sensor is strongly affected by lighting, the system uses a dynamic threshold to binarize the image output by the CCD sensor in order to improve its stability and adaptability. In summary, the block diagram of the microbial growth rate detection unit is shown in Fig. 1.


Fig. 1. Microbial growth rate detection model (elements: image acquisition, image information, CCD sensor, threshold acquisition, quantity acquisition, ATmega128, mechanical transmission, microbial growth rate)
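As an illustration of the dynamic-threshold binarization step (the parameter values are assumptions, not those of the implemented system), OpenCV's adaptive threshold can be used roughly as follows:

```python
import cv2

def binarize_frame(gray_frame):
    """Binarize a CCD grey-level frame with a dynamic (adaptive) threshold,
    so the result is less sensitive to changing illumination."""
    return cv2.adaptiveThreshold(gray_frame, 255,
                                 cv2.ADAPTIVE_THRESH_GAUSSIAN_C,  # local-neighbourhood threshold
                                 cv2.THRESH_BINARY,
                                 31,   # blockSize: neighbourhood size (assumed value)
                                 5)    # C: constant subtracted from the local mean (assumed)

def count_objects(binary):
    """Rough count of connected bright regions, used as a proxy for microbe count."""
    n_labels, _ = cv2.connectedComponents(binary)
    return n_labels - 1  # subtract the background label
```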

4 Experimental Analysis of the Microbial Growth Rate Identification and Optimization System Based on the Matrix Decomposition Algorithm

4.1 Growth Rate Identification and Comparison

In this paper, the traditional automatic detection algorithm and the identification algorithm based on the matrix decomposition algorithm constructed in this paper are both used to identify the growth rate of the same microorganisms, and the identification times of the two algorithms are recorded as shown in Table 1.

Table 1. Time for two algorithms to identify microbial growth rate

Growth time               1    2    3    4
Traditional algorithm     14   19   22   26
Algorithm of this paper   2    3    5    6

From Fig. 2 it can be seen that the identification speed of the matrix-decomposition-based algorithm constructed in this paper is much faster than that of traditional automatic detection. For short growth times the difference reaches more than 5 times, and for longer times it is still about 4 times. This speed advantage brings more accurate results and avoids errors in the measured amount of microbial growth caused by measurement delay.

Fig. 2. Time for two algorithms to identify microbial growth rate (recognition time vs. growth time)

Because microorganisms continue to grow all the time, a faster and more accurate identification system is very necessary.

4.2 Identification Accuracy

The most important property of an identification system is its accuracy. In this paper, the detection results of the algorithm constructed here and of the traditional algorithm are compared with the correct values. The detection accuracy is shown in Table 2.

Table 2. Microbial growth rate identification accuracy of the two algorithms (%)

Growth time               1      2      4
Traditional algorithm     95.2   96.2   95.6
Algorithm of this paper   99.7   99.2   98.9

It can be seen from Fig. 3 that both identification methods are relatively accurate, with accuracies above 95%; however, the matrix-decomposition-based method constructed in this paper is more accurate, at about 99%, so the algorithm in this paper has the advantage. From the model comparison, the model usually gives good predictions from the development period through to the stable period.

Fig. 3. Microbial growth rate identification accuracy of the two algorithms (accuracy in percent vs. growth time)

In addition, the model can predict microbial development and inactivation under a wider range of environmental conditions. As more of the factors affecting microbial development are taken into account, the overall picture of the microorganism can be greatly improved, and microbial prediction models may find wider areas of application. They can help researchers analyze the potential of microbial development and predict poor microbial development in complex environments.

5 Conclusions

Identification of microbial growth rate involves many fields of the national economy, such as environmental protection, health, agriculture, food, medicine and national security. As an interdisciplinary field, microbial detection draws on lasers, optical technology, imaging, biochemistry and medicine. The development of substrate feeding technology in the microbial field requires effective communication and collaboration between experts in multiple fields in order to realize the practical application of automatic monitoring of microbial growth through vision systems. Judging from the current state of microbial detection, although good research results have been achieved, there is still much room for improvement in detection quality, detection time and detection cost. New approaches to microbial detection can provide technical and scientific support for scientific research and practical applications.


References 1. Grossmann, I.E., Ye, Y., Pinto, J.M., et al.: Modeling for reliability optimization of system design and maintenance based on Markov chain theory. Comput. Chem. Eng. 124, 381–404 (2019) 2. Taibi, A., Ikhlef, N., Touati, S.: A novel intelligent approach based on WOAGWO-VMD and MPA-LSSVM for diagnosis of bearing faults. Int. J. Adv. Manuf. Technol. 120(5), 3859–3883 (2022) 3. Behtash, M., Alexander-Ramos, M.J.: A decomposition-based optimization algorithm for combined plant and control design of interconnected dynamic systems. J. Mech. Des. 142(6), 1–13 (2020) 4. Gerzen, N., Mertins, T., Pedersen, C.B.W.: Geometric dimensionality control of structural components in topology optimization. Struct. Multidiscip. Optim. 65(5), 1–17 (2022) 5. Kyo, K., Noda, H., Kitagawa, G.: Co-movement of cyclical components approach to construct a coincident index of business cycles. J. Bus. Cycle Res. 18(1), 101–127 (2022) 6. Son, S.W., Lee, D.H., Kim, I., et al.: A new design of the objective function for the optimal allocation of distributed generation with short-circuit currents. J. Electr. Eng. Technol. 17(3), 1487–1497 (2022) 7. Miranda, A.F.D.F., Bauerfeldt, G.F., Baptista, L.: Numerical simulation of the gas-phase thermal decomposition and the detonation of H2O2/H2O mixtures. React. Kinet. Mech. Catal. 135(2), 619–637 (2022) 8. Kozyrev, G.I., Kleimenov, Y.A., Usikov, V.D.: Method for concurrent identification of a linear dynamic measurement system based on preliminary nonlinear transformation of the input signal. Meas. Tech. 64(12), 949–953 (2022) 9. Balavand, A.: A new feature clustering method based on crocodiles hunting strategy optimization algorithm for classification of MRI images. Vis. Comput. 38, 1–30 (2021). https:// doi.org/10.1007/s00371-020-02009-x 10. Nonomura, T., Shibata, H., Takaki, R.: Extended-Kalman-filter-based dynamic mode decomposition for simultaneous system identification and denoising. PLoS ONE 14(2), e0209836 (2019) 11. Melingi, S.B., Mojjada, R.K., Tamizhselvan, C., et al.: A self-adaptive monarch butterfly optimization (MBO) algorithm based improved deep forest neural network model for detecting and classifying brain stroke lesions. Res. Biomed. Eng. 38(2), 647–660 (2022) 12. DeshchereVSkii, A., Sidorin, A.Y.: Iterative algorithm for time series decomposition into trend and seasonality: testing using the example of CO2 concentrations in the atmosphere. Izv. Atmos. Ocean. Phys. 57(8), 813–836 (2022) 13. Clercq, E.D., Hondt, S.D., Baere, C.D., et al.: Effects of various wound dressings on microbial growth in perfused equine musculocutaneous flaps. Am. J. Vet. Res. 82(3), 189–197 (2021) 14. Wang, K.-Y., Shao, J., Shao, L.-G.: A pressure-calibration method of wavelength modulation spectroscopy in sealed microbial growth environment. Chin. Phys. B 30(5), 54203–054203 (2021) 15. Delavar, M.A., Wang, J.: Modeling microbial growth of dynamic membrane in a biohydrogen production bioreactor. Int. J. Hydrogen Energy 47(12), 7666–7681 (2022) 16. Suganya, E.: Analysis of biodegradation and microbial growth in groundwater system using new the homotopy perturbation method. Turkish J. Comput. Math. Educ. (TURCOMAT) 12(1S), 606–614 (2021) 17. Thangamani, G., Raja, P., Rajkumar, P., et al.: Standardization of UV-C treatment, Ozonization and chlorination for reducing microbial growth in carrot under laboratory conditions. J. Pharmacogn. Phytochem. 9(6), 557–563 (2020)


18. Hamad, A., Djalil, A.D., Saputri, E.Y., et al.: Bay leaf essential oils inhibited microbial growth and exerted potential preservation effects on tofu. Adv. Food Sci. Sustain. Agric. Agroind. Eng. 3(2), 46–52 (2020) 19. Blackburn, L., Acree, K., Bartley, J., et al.: Microbial growth on the nails of direct patient care nurses wearing nail polish. Oncol. Nurs. Forum 47(2), 155–164 (2020)

Construction Building Interior Renovation Information Model Based on Computer BIM Technology

Tao Yang and Yihan Hu(B)

Shenyang Jianzhu University, Shenyang, Liaoning, China
[email protected]

Abstract. The reuse of industrial buildings can achieve energy conservation and environmental protection, and maximize social and economic benefits. However, the energy consumption of the construction industry is very high, which makes people pay more and more attention to the transformation and reuse methods to reduce demolition and the transformation of industrial buildings. Computer BIM technology can fully display the architectural design scheme, and the interior space that needs to be transformed can be adjusted in time to meet the requirements of indoor temperature and lighting. Therefore, this paper introduces computer BIM technology into the planning of building interior renovation projects, and builds an interior renovation information model. It is hoped that in the construction process, the use of this technology can make the design closer to the actual living needs of customers. After analyzing the changes of indoor lighting area and temperature area before and after the reconstruction, it is found that the areas with lighting coefficient greater than or equal to 5% account for 96.93% of the total workshop area. After the transformation, the area with indoor temperature less than or equal to 300K accounts for 86.2% of the total plant area. The basic conditions of the plant are improved, and the plant can be used again in other ways. Keywords: Computer BIM Technology · Interior Renovation · Information Model · Indoor Lighting and Temperature

With the acceleration of urbanization, many old industrial buildings in urban areas need to be transformed because of the adjustment of the industrial structure. In some areas these buildings can be given new life through reconstruction and reuse, which not only responds to the national call to achieve carbon neutrality, energy conservation and emission reduction, but also retains the historical context of the city. However, because life-cycle energy consumption and environmental impact are insufficiently considered, reconstructed buildings often show high energy consumption and low efficiency. Many scholars at home and abroad have carried out experimental research on design methods that apply BIM technology to the indoor reconstruction and reuse of old industrial buildings, and the research is progressing smoothly. The architect Jiang Nan, for example, recognized the importance and value of BIM technology and applied it to actual projects, using functional change as the main means


of converting scrapped or idle old industrial buildings into residential buildings, office buildings, cultural centers and so on, gradually extending from single buildings to overall transformation and thereby promoting the renovation of old industrial buildings [1]. In some less developed areas, the modification and reuse of old industrial buildings required by urban development is becoming a trend in urban construction, but in these areas the treatment of reconstruction design elements remains generalized, innovative research is not deep enough, and the study of the internal transformation of old industrial buildings in particular lags behind, so the reconstruction results are not very satisfactory [2]. The research results on BIM technology in the interior renovation and reuse design of old industrial buildings are promising, and the use of BIM technology in future renovation plans for old industrial buildings can improve the planning efficiency of design architects and shorten the project process. This paper explains the concept of old industrial buildings and, following the principles of renovation, applies BIM technology to the design process of industrial building transformation projects and to building renovation itself, comparing the indoor daylighting area and the temperature distribution before and after renovation to verify that BIM technology is helpful for building renovation projects. How to use BIM technology to develop reasonable and effective renovation schemes that meet different renovation requirements and design requirements is therefore the key problem for architects to solve. Urbanization in China has produced a large number of old industrial buildings, and both remodeling and demolishing disused industrial buildings generate large amounts of construction waste that pollutes the environment; demolishing old industrial buildings in order to rebuild may therefore cause even more serious environmental pollution and resource waste. It is necessary to transform the usable space of old industrial buildings and put it to use again, which can greatly reduce the generation of construction waste and also reduce construction cost and energy consumption.

1 Overview of the Application of BIM Technology in Old Industrial Buildings

1.1 The Concept of Interior Renovation and Reuse of Old Industrial Buildings Based on BIM Technology

BIM is the abbreviation of Building Information Modeling. It can integrate the engineering information, processes and resources of the different stages of an engineering project's whole life cycle into one model that is convenient for all project participants to use. As China's mandatory standards for building heating energy consumption continue to tighten, residential buildings in Beijing, Tianjin and other places have begun to implement the 75% energy-saving standard, the so-called four-step energy saving. The envelope thermal performance and HVAC systems of some old buildings can no longer meet the requirements of the current standard, so the best renovation scheme must be determined through analysis and research based on the actual situation of the existing building. BIM technology is becoming more and more mature in project planning, design, construction, and operation and maintenance.


Reconstruction of existing buildings based on BIM technology provides a new idea for existing building reconstruction in China [3, 4]. At the same time, with the continuous progress of urbanization, buildings or structures that were built and used to serve industrial development but have lost their original production and use functions for various reasons can be described as old industrial buildings. They generally show the following characteristics: they are outdated and obsolete and cannot meet the production needs of new technology; industrial production in them has ceased; and they have a long life and special value and importance. Retrofitting is the structural restoration of old buildings so that they can be reused; it is a development method based on the cultural connotation and cultural deposits of the original building, on the premise of preserving the "original" architectural style. Reuse can also be called a retranslation of architectural functionality: the functional transformation records the value of the building in the past and translates it into new vitality for the building [5, 6].

1.2 Clustering Algorithm Reference

Computer BIM technology benefits greatly from its clustering algorithms, which can quickly help computer BIM technology build the building interior renovation model. A clustering algorithm is essentially a statistical method of cluster analysis: the initial data are classified to form cluster centers. Within a cluster, the data have a high degree of similarity, which is the fundamental reason they are assigned to the same cluster center. The clustering algorithm identifies and analyzes each data item and assigns it to the classification center at the smallest, most consistent distance. The clusters formed by the algorithm, however, differ greatly from one another, which makes it easier for the algorithm to distinguish the cluster centers and thus to partition and transmit the clustered data more efficiently. The distance formulas commonly used in cluster analysis mainly include the Euclidean distance, the square Euclidean distance and the interzone distance. The Euclidean distance formula is as follows:

$$d(a, b) = \sqrt{\sum_{c=1}^{m} (x_{ac} - x_{bc})^2} \tag{1}$$

where $x_{ac}$ is the value of the $c$-th parameter of the $a$-th data sample in the data space, and $x_{bc}$ is the value of the $c$-th parameter of the $b$-th data sample. The squared Euclidean distance formula is as follows:

$$d(a,b) = \sum_{c=1}^{m}(x_{ac} - x_{bc})^2 \quad (2)$$

where $x_{ac}$ and $x_{bc}$ are defined as above. The squared Euclidean distance is simply the sum of the squares of the differences of the $m$ variables between the two data samples $x_a$ and $x_b$.


The formula for the interzone distance is as follows:

$$d(a,b) = \sum_{c=1}^{m}\left|x_{ac} - x_{bc}\right| \quad (3)$$
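As a concrete illustration of formulas (1)–(3), which are discussed further below, the following short Python sketch computes the three distances for two sample vectors. The data values and function names are illustrative only and are not taken from the paper.

```python
import math

def euclidean(a, b):
    """Formula (1): square root of the sum of squared component differences."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def squared_euclidean(a, b):
    """Formula (2): sum of squared component differences."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def interzone(a, b):
    """Formula (3): sum of absolute component differences (absolute distance)."""
    return sum(abs(x - y) for x, y in zip(a, b))

# Two illustrative m-dimensional samples x_a and x_b.
x_a = [1.0, 2.0, 3.0]
x_b = [4.0, 0.0, 3.5]
print(euclidean(x_a, x_b), squared_euclidean(x_a, x_b), interzone(x_a, x_b))
```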

The interzone distance, also known as the absolute distance, is the sum of the absolute values of the differences of the $m$ variables of the two data samples $x_a$ and $x_b$; here $x_{ac}$ and $x_{bc}$ are defined as before. Although these formulas are calculated differently, their main variable parameters are the same. When the cluster analysis algorithm is used, the system therefore selects the distance measure according to the computational difficulty, so as to reduce the calculation workload, achieve rapid data clustering, and help computer BIM technology quickly build the building interior renovation model.

1.3 Principles for the Renovation and Reuse of the Internal Space of Old Industrial Buildings

Principle of integration with the historical context of the city: old industrial buildings carry a great deal of the historical memory of the old industrial era. While redesigning and reusing old building spaces, planners must also protect and preserve the cultural features of the past, so architects should pay attention to the protection of historical culture when implementing reconstruction and reuse plans [7, 8]. Principle of sustainable development: the transformation of the interior of old industrial buildings must become part of sustainable development. During the transformation, the vitality of the buildings should be protected and a reconstruction method oriented toward continuous use should be followed, so that old industrial buildings keep their maximum use value over a long period and the culture of the old buildings is continued; destructive renovation ideas should be abandoned so that old building structures survive in the long term [9]. Dynamic conservation principle: every object that exists in space has a life span, and buildings have a time limit; once a building passes the peak of its usefulness, people need to redefine its existence [10]. Reconstruction lets people redefine the significance of the old building space and maximize the use of resources, so that old industrial buildings can continue to support urban development and construction and in turn drive the prosperity of the overall economy [11, 12]. Figure 1 shows the distribution of the spatial reconstruction and reuse principles.

1.4 Indoor Parameters Specified by Building Design Standards

The design standards for the internal thermal environment of rooms in China can be found in Table 1. Indoor thermal parameters, including temperature, humidity and wind speed, should be considered when designing the indoor environment of a building. For civil buildings, the heating temperature can be between 15 °C and 20 °C, the summer temperature in the range of 22 °C–25 °C, the humidity between 30% and 60%, and the wind speed 0.18 m/s–0.3 m/s; the winter temperature can be 20 °C–24 °C, the humidity 30%–60%, and the wind speed 0.15 m/s–0.2 m/s.
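As a small illustration of how such design ranges might be checked in practice, the sketch below tests a set of measured indoor parameters against the civil-building winter ranges quoted above. The measured values and the helper function are hypothetical and are not part of the standards themselves.

```python
# Civil-building design ranges quoted in Sect. 1.4 (winter heating season).
RANGES = {
    "heating_temperature_C": (15.0, 20.0),
    "winter_temperature_C": (20.0, 24.0),
    "humidity_percent": (30.0, 60.0),
    "winter_wind_speed_m_s": (0.15, 0.2),
}

def check_indoor_environment(measured: dict) -> dict:
    """Return True/False per parameter depending on whether it falls in range."""
    return {
        name: lo <= measured[name] <= hi
        for name, (lo, hi) in RANGES.items()
        if name in measured
    }

# Hypothetical measured values for one room.
sample = {"heating_temperature_C": 18.2, "humidity_percent": 42.0,
          "winter_wind_speed_m_s": 0.18}
print(check_indoor_environment(sample))
```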


Fig. 1. Principles of space reconstruction and reuse (integration with the historical context of the city; sustainable development; dynamic protection)

Table 1. Design reference range of the indoor thermal environment in architectural design

| Parameter | Design code for heating, ventilation and air conditioning of civil buildings | Energy-saving design standards for public buildings |
| Heating temperature | 15 °C–20 °C | 16 °C–25 °C |
| Summer temperature | 22 °C–25 °C | 23 °C–25 °C |
| Summer humidity | 30%–60% | 35%–65% |
| Summer wind speed | 0.18 m/s–0.3 m/s | 0.15 m/s–0.3 m/s |
| Winter temperature | 20 °C–24 °C | 19 °C–22 °C |
| Winter humidity | 30%–60% | 30%–60% |
| Winter wind speed | 0.15 m/s–0.2 m/s | 0.1 m/s–0.25 m/s |

For public buildings, the heating temperature is between 16 °C and 25 °C; the corresponding summer and winter temperature, humidity and wind speed ranges are shown in Table 1. Designing the indoor thermal environment according to these standards can create a comfortable indoor environment.

2 Design Strategy of BIM Technology in the Interior Renovation and Reuse of Old Industrial Buildings

This paper takes the transformation of a workshop as an example. By field-measuring the daylit area, the area in which the indoor temperature is less than or equal to 300 K, and the total construction area of the workshop, the changes before and after the transformation are compared and analyzed.
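The comparison just described reduces to simple area ratios. The sketch below shows one possible way of computing them from per-zone measurements; the zone data and field names are invented purely for illustration.

```python
def area_ratio(zones, predicate):
    """Share of total floor area (%) for zones satisfying a predicate."""
    total = sum(z["area_m2"] for z in zones)
    hit = sum(z["area_m2"] for z in zones if predicate(z))
    return 100.0 * hit / total

# Hypothetical zone measurements for the workshop (area, daylight factor %, temperature K).
zones = [
    {"area_m2": 120, "daylight_factor": 12.0, "temp_K": 298.0},
    {"area_m2": 80,  "daylight_factor": 3.5,  "temp_K": 302.0},
    {"area_m2": 200, "daylight_factor": 18.0, "temp_K": 299.5},
]

print("daylit share (%):", area_ratio(zones, lambda z: z["daylight_factor"] >= 5.0))
print("<= 300 K share (%):", area_ratio(zones, lambda z: z["temp_K"] <= 300.0))
```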


2.1 Design of BIM Technology in the Interior Renovation and Reuse of Old Industrial Buildings

Fig. 2. The design process of BIM technology in the interior renovation and reuse of old industrial buildings: field research and information gathering → feasibility assessment and input into the BIM platform → preliminary design plan → confirmation of the plan and drawings for approval → BIM construction simulation → formal construction phase → project acceptance and post-service

Figure 2 shows the project design process when BIM technology is introduced into the interior renovation and reuse of an old industrial building. When the interior of an old industrial building needs to be renovated, the designer should survey the building site, record the various data of the building, and enter them into the BIM platform. A preliminary renovation scheme is then drawn up, evaluated, and its feasibility determined. Once the scheme is confirmed, BIM technology can be used to carry out and simulate the construction plan. After construction is completed, the project still has to be accepted and followed up with post-completion services.


2.2 The Practical Application of BIM Technology in the Indoor Renovation and Reuse of Old Industrial Buildings

This experiment takes an old workshop as an example. The workshop had been abandoned because of insufficient daylight and frequent dampness. It was therefore reconstructed to improve its lighting and temperature and to realize its reuse. (1) Daylighting after renovation

Table 2. Distribution of the indoor daylighting coefficient before and after renovation

| Daylighting factor (%) | Before renovation: percentage (%) | After renovation: percentage (%) |
| 5–10 | 14.68 | 6.65 |
| 10–15 | 21.84 | 32.16 |
| 15–20 | 19.17 | 23.89 |
| 20–25 | 14.63 | 17.48 |
| 25–30 | 10.36 | 9.40 |
| 30–35 | 3.45 | 3.13 |
| 35–40 | 0.26 | 2.68 |
| 40–45 | 0.18 | 1.54 |
| >= 5 | 84.57 | 96.93 |

In the daylighting calculation, the influence of surrounding buildings on the plant's daylighting and of shading components on indoor lighting is not considered. As shown in Table 2 and Fig. 3, the areas with a daylighting coefficient greater than or equal to 5% accounted for 84.57% of the total plant area before reconstruction. In general, the daylit area of an industrial building should reach at least 90% of the building area, so the daylit area before the transformation did not meet the code requirement. After the transformation, the area with a daylighting coefficient greater than or equal to 5% accounts for 96.93% of the total plant area, which meets the requirement. Because an atrium was added in the reconstruction of the interior space, the daylit area of the whole plant improved significantly: after renovation the daylighting coefficient in most areas lies between 10% and 25%, whereas before the reconstruction the largest single area ratio was only 21.84%. It can be seen that after the transformation more areas of the workshop receive natural light. (2) Indoor temperature after renovation. When the indoor temperature of old industrial buildings is improved, a wind-catching tower is usually implanted to realize natural indoor ventilation, so that the ventilation affects the indoor thermal environment.


Fig. 3. Comparison of the daylit area before and after renovation (area ratio plotted against the daylighting factor for the before- and after-renovation cases)

Table 3. Area ratio of indoor temperature less than or equal to 300 K

| | Area ratio (%) |
| Before renovation | 77.8 |
| After renovation | 86.2 |

Therefore, the proportion of the building area with an indoor temperature no higher than 300 K to the total building area of the plant is measured before and after reconstruction, with natural ventilation taken as the influencing factor of indoor temperature. A larger area ratio indicates that the indoor temperature is less than or equal to 300 K in more areas, which indirectly reflects that the ventilation treatment improves the indoor temperature. As shown in Table 3, the area with an indoor temperature of at most 300 K accounted for 77.8% of the total area before renovation and 86.2% after renovation, an increase of 8.4 percentage points. The ventilation measures thus improve the indoor thermal environment and also reduce the energy consumed by air-conditioning heating and cooling in hot and cold seasons, achieving an energy-saving effect. (3) Waste discharge after transformation. Table 4 compares the "three wastes" emissions generated by the plant in industrial manufacturing before and after the transformation. After the transformation, waste gas emissions are reduced by 22%, waste water emissions by 17%, and waste residue emissions by 8%, indicating that the transformation of old industrial buildings can achieve energy conservation and emission reduction.


Table 4. Comparison of "three wastes" emissions before and after renovation

| | Before renovation (%) | After renovation (%) |
| Waste gas | 73 | 51 |
| Waste water | 65 | 48 |
| Waste residue | 70 | 62 |

The following is the satisfaction survey for the building interior renovation information model, as shown in Fig. 4:

Fig. 4. Satisfaction survey for the building interior renovation information model (numbers of satisfied and dissatisfied respondents in research groups 1–5)

Fifty people took part in this survey and were divided into five groups. The survey found that most of them were satisfied with the building interior renovation information model presented in this paper.

3 Conclusion

In this paper, the interior renovation of old industrial buildings is studied. Taking the daylighting, indoor temperature and "three wastes" discharge of a workshop as an example, the workshop is redesigned by combining the relevant theories of BIM technology. It is found that, after renovation, the indoor daylit area of the workshop increases under a reasonable daylighting coefficient.


The ventilation condition was improved and the indoor temperature was improved by implanting a wind-catching tower. After the transformation, the proportion of "three wastes" emissions is much lower, in line with the national emission reduction target, which shows that using BIM technology for transformation to realize the reuse of old industrial buildings has practical significance. Compared with the traditional design scheme, this paper adds BIM technology to the architectural design, so construction personnel can view the complete spatial structure and construction drawings through information technology such as computers or CAD and adjust the existing indoor conditions according to the construction characteristics, making the function of the building better match people's needs. This retrofit scheme can reduce costs and at the same time create greater economic benefits.

References

1. Pawowicz, J.A.: Computer-aided design in the construction industry – BIM technology as a modern design tool. Budownictwo o Zoptymalizowanym Potencjale Energetycznym 10(2/2020), 81–88 (2020)
2. Abed, H.R., Hatem, W.A., Jasim, N.A.: Adopting BIM technology in fall prevention plans. Civil Eng. J. 5(10), 2270–2281 (2019)
3. Solov, D., Kopotilova, V., Katyuk, D., et al.: Comparison of CAD and BIM technology efficiency with the use of a mathematical model. Constr. Mater. Prod. 4(1), 18–26 (2021)
4. O'Brien, W.J., et al.: Benefits of three- and four-dimensional computer-aided design model applications for review of constructability. Transp. Res. Record 2268(1), 18–25 (2018)
5. Hedges, S.: Interior decoration to exterior surface: the beleaguered relief. Interiority 2(1), 79–93 (2019)
6. Kim, S., Chin, S., Kwon, S.: A discrepancy analysis of BIM-based quantity take-off for building interior components. J. Manag. Eng. 35(3), 05019001.1–05019001.12 (2019)
7. Shin, J., Lee, J.K.: Indoor walkability index: BIM-enabled approach to quantifying building circulation. Autom. Constr. 106, 102845.1–102845.14 (2019)
8. Talatahari, S., Azizi, M.: Optimum design of building structures using tribe-interior search algorithm. Structures 28(1), 1616–1633 (2020)
9. Sadeghi, M., Elliott, J.W., Porro, N., et al.: Developing building information models (BIM) for building handover, operation and maintenance. J. Facil. Manag. 17(3), 301–316 (2019)
10. Goodman, J.H.: Building interior tubes and nonimaging reflectors (BITNR) studies. J. Green Build. 15(2), 213–248 (2020)
11. Goodman, J.H.: The building interior evacuated long-span study: atrium hotel. Solar Today 32(1), 28–31 (2018)
12. Shinohara, T., Matsuoka, M., et al.: Statistical analysis and modeling to examine the exterior and interior building damage pertaining to the 2016 Kumamoto earthquake. Earthq. Spectra 38(1), 310–330 (2022)

Data Acquisition Control System Applying RFID Technology and Wireless Communication

Xiaohong Cao(B), Hong Pan, Xiaojuan Dang, and Jiangping Chen

School of Information Engineering, Shaanxi Fashion Engineering University, Xi'an 710000, Shaanxi, China [email protected]

Abstract. With the rapid development of the electronics industry, RFID technology is widely used in many fields, and the RFID card is its most typical representative in production processes. RFID technology uses the principle of electromagnetic induction: sensors installed in different locations obtain information about the measured object, and the corresponding data are collected automatically to realize detection, tracking control and alarm functions for the target object. This paper proposes and designs a general wireless data acquisition control system based on RFID technology. The system uses an STM32 microcontroller as the control core, transmits the collected data to the upper computer through a wireless data acquisition module, and uses software running on the STM32 chip to control the information transmission between the RF card module and the wireless transceiver module. Finally, the collected information is sent to the user, realizing the functions required of the RFID system. In addition, the data acquisition system also acquires and synchronizes images, which not only ensures image clarity but also records the operating condition of the field equipment, so that problems that appear during system operation can be corrected in a timely manner and data are transmitted without loss or disorder, improving the reliability and security of data acquisition. Keywords: RFID technology · Wireless communication · Data acquisition control systems

1 Introduction

In modern industrial production, automation technology is widely used, and the electronic data acquisition control system has become an indispensable part of modern industrial production process control. The control system takes the monitoring and collection of parameters as its main goal and uses various sensors to convert the collected signals into electrical signals, which are transmitted to the production control room through communication cables to realize the collection of various data parameters [1]. In the production process, the data acquisition control system plays a vital role: data acquisition refers to the conversion of analog quantities in the physical space into digital


quantities and the processing of the collected digital quantities, so as to ensure that the production control parameters operate on a stable and reliable basis and that real-time data acquisition is realized [2]. In actual production, uncertain factors (such as equipment failure or environmental changes) or the lack of effective monitoring of control parameters can prevent the system from working properly, so reliable and stable transmission and processing of information is required. Data control means that the computer or processor sends out commands, that is, digital quantities, and the data processing system then calculates the control parameters according to the received digital information, realizing real-time monitoring of the production process and improving the reliability and economy of industrial equipment [3]. This paper focuses on a data acquisition control system based on RFID technology, a new type of industrial automatic data acquisition system that integrates the collection, processing and transmission of information and uses non-contact automatic acquisition. Traditional data collection is done manually and has many drawbacks, such as data loss, omissions and even errors. Compared with traditional data collection, RFID technology has a simple structure and high reliability, can collect information more truly and comprehensively, and can control the system as needed [4]. In the context of information automation, data acquisition and control systems are used in many fields, such as military weaponry, aerospace, radar communications and other defense fields, where they play an important role. Abroad, countries such as the United States and Germany widely use data acquisition and control systems with complete performance and network communication functions that collect large amounts of real-time data and store, manage and apply them. However, such systems are inflexible, have a unified interface, and still need to solve the problem of information exchange and data sharing between data acquisition and control systems in an application. There are also many data acquisition and control system manufacturers in China, but most of their products are industrial field data acquisition and control systems [5], for example moving cameras, digital lenses and automatic gain control. These systems require sensors, computers and other equipment to track and monitor moving targets and to provide positioning, recording and display functions. However, their hardware design requirements are high, they age easily and their failure rate is relatively large, resulting in poor stability and reliability that cannot meet the requirements of modern industrial production management for a reliable, stable and real-time control system. At present, the most widely used data acquisition and control system in China is the industrial-site real-time monitoring system.
Based on the above situation, this paper studies a data acquisition control system based on RFID technology, a new type of industrial automatic data acquisition system that integrates the collection, processing and transmission of information and uses non-contact automatic acquisition. Traditional data collection is done manually and is prone to data loss, omissions and errors. Compared with it, RFID technology has a simple structure


and high reliability; it can collect information more realistically and comprehensively, and it can control the system as needed [6].

2 Design of the RFID-Based Data Acquisition Control System

The design requirement of the wireless-communication data acquisition control system is to monitor and detect network abnormalities in real time, and it should be able to analyze, process and display the monitoring data quickly. The advantages of wireless communication are fast transmission and strong anti-interference, and it can realize the monitoring and automatic handling of various unexpected situations in the network communication system. The system uses an STM32 as the main control chip and, based on the communication protocol, uses a wireless transceiver module and a data collection and transmission network; the microcontroller controls the external terminal that sends and receives signals. In the monitoring system, the STM32 chip has high reliability and can transmit real-time monitoring data. In addition, the system uses RFID technology, which not only collects the RF transceiver data effectively but also displays them on the LCD screen in real time, so the system is easy to use and manage [7]. The wireless data acquisition control technology is based on communication protocols, and a communication link is established between the serial port and the microcontroller to guarantee the real-time performance and accuracy of RF data acquisition, which can effectively improve the efficiency of the monitoring system.

2.1 Device Selection for the System Hardware Circuit

The device selection for the hardware circuit of the system can be divided into two parts: the selection of the wireless communication devices and the selection of the microcontroller.

2.1.1 Selection of the Wireless Communication Devices

In this design, RFID technology is used for data processing. The data collection system needs to process and analyze a large amount of data, so RFID technology is chosen as one of its core technologies. It also has good anti-interference performance and low power consumption, which provides favorable conditions for the subsequent data analysis; using RFID can also effectively reduce cost and later maintenance cost. Compared with traditional methods, the IoT data collection system proposed in this paper is simpler and more convenient, easy to operate, and stable and reliable for remote monitoring. Specifically, it uses RF communication to send data signals from each sensor module and then transmits these data to the server side for storage, so as to achieve real-time monitoring. If the data transmission speed is to be increased further, antennas or other devices have to be added. Here, the communication scheme is finalized by comparing a Bluetooth module and an RF nRF module; the characteristics of the two are compared in Table 1.

Table 1. Comparison of the characteristics of the Bluetooth module and the RF nRF module

| Product | Hardware design | Interface mode | Programming | Communication rate |
| Bluetooth module | Composed of multiple chips, such as transmit/receive processing and baseband processing | Complex, with strict timing | Communication protocols and software stacks are complex and take a long time to get familiar with | 300 to 400 kbps |
| nRF module | High-frequency inductors and filters are built in, requiring few external components | Simple, just connect to the single-chip I/O or SPI | More convenient programming | 1 Mbps |

2.1.2 Selection of the Microcontroller

From the above it is clear that, to manage the whole data acquisition system effectively, a suitable microcontroller is needed. With the continuous progress of science and technology, new microcontrollers have been developed and are widely used in many industries, often together with RFID technology. RFID is a non-contact automatic identification technology with strong anti-interference ability, long communication distance and low cost, so it has received more and more attention. At present, several microcontroller products are available on the market, such as the STM32F103RCT6/T5S2C8A1-DC026, the STC89C51 series and the MSP430 series. To make the data acquisition system meet user requirements, the appropriate microcontroller must be selected to complete the data processing work. Specifically, the following points can be considered. (1) Determine the required functional modules and interface types according to the user requirements. In general, the modules used for data transmission with RFID technology are relatively simple, so it is enough that they can be installed in the host computer, but these products are relatively expensive and have certain limitations. (2) The system structure should be designed reasonably for the actual situation. For example, when a wireless communication mode is used, suitable radio-frequency identification or Bluetooth communication equipment must be chosen. Attention should also be paid to whether the power supply mode of the device is compatible with the battery supply used in this experiment; if not, the device may not operate normally or may even fail.


2.2 System Architecture

The main function of the wireless communication system is to send commands in real time to the industrial control room, the on-site detection module and the remote user terminal through the data collector. The application combines RFID technology with sensors to monitor the environmental parameters of the industrial control room, including temperature and humidity; it compares the monitored values with the set values, meets the requirements of the upper-computer control system, and performs centralized analysis, judgment and alarming for the current and voltage fluctuations of the controlled object. At the same time, it can receive real-time feedback through the relevant software and send it to the wireless data transmission module, realizing a communication platform among remote users. In the data acquisition and control system, the standard analog current signal output by the sensor is collected, converted into a digital quantity by the A/D conversion module and transmitted to the microcontroller, which controls the peripheral devices to sample, calculate and convert the collected analog current signal. For example, let I be the collected current signal. When the input current is at its minimum, I = 4 mA and U+ = 0.48 V; the inverting-input voltage of the operational amplifier is given by formula (1), and after amplification by the proportional operational amplifier circuit, the output voltage of the I/V conversion circuit is obtained as in formula (2). During data transmission, multiple measurement modules are connected as a whole through serial communication. The main functions of the data acquisition control system include multiple analog input channels, multiple digital input and output channels, multiple network communication interfaces, and multiple data communication and test interfaces [8]. These functions allow the system to provide the corresponding capabilities in different environments and thus meet the monitoring requirements, as shown in Fig. 1.

$$U_- = U_+ = 0.48\ \mathrm{V} \quad (1)$$

$$V_{OUT} = \left(\frac{R_1}{R_2} + 1\right) \cdot U_- \quad (2)$$
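To make the 4–20 mA conversion above concrete, the sketch below evaluates formula (2) for an assumed sampling resistor and assumed gain resistors. All component values are assumptions chosen only so that 4 mA maps to U+ = 0.48 V as stated in the text; they are not specified in the paper.

```python
R_SAMPLE = 120.0   # ohms; assumed so that 4 mA gives U+ = 0.48 V as in the text
R1 = 1000.0        # ohms; assumed feedback resistor
R2 = 3300.0        # ohms; assumed ground-leg resistor

def vout_from_current(i_amps: float) -> float:
    """I/V conversion: U- = U+ = I * R_SAMPLE, then formula (2): Vout = (R1/R2 + 1) * U-."""
    u_minus = i_amps * R_SAMPLE
    return (R1 / R2 + 1.0) * u_minus

for i_ma in (4.0, 12.0, 20.0):
    print(f"{i_ma:4.1f} mA -> {vout_from_current(i_ma / 1000.0):.2f} V")
```

With these assumed resistor values the full 4–20 mA span maps to roughly 0.63–3.13 V, which would suit a 3.3 V ADC input; other choices are equally possible.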

2.3 Design of the Data Acquisition Control System

The data acquisition system applies the sensor nodes of the wireless sensing network to collect and process data of various types and quantities, obtains real-time information through the corresponding algorithms, and realizes wireless communication and data acquisition control. The system includes a data acquisition module, a data storage module, an A/D conversion module, an RF transceiver module and an FPGA control module.
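The paper does not specify the frame format exchanged between these modules, so the sketch below is purely illustrative: a minimal packed frame with a node ID, one sampled value and a simple checksum, of the kind such a system might use between the acquisition module and the RF transceiver.

```python
import struct

def build_frame(node_id: int, sample: int) -> bytes:
    """Pack a node ID and a 16-bit ADC sample, then append a one-byte modular checksum."""
    body = struct.pack("<BH", node_id, sample)      # little-endian: uint8 id, uint16 sample
    checksum = sum(body) & 0xFF
    return body + bytes([checksum])

def parse_frame(frame: bytes):
    """Verify the checksum and unpack the fields; raise on corruption."""
    body, checksum = frame[:-1], frame[-1]
    if (sum(body) & 0xFF) != checksum:
        raise ValueError("checksum mismatch")
    return struct.unpack("<BH", body)

frame = build_frame(node_id=3, sample=1023)
print(parse_frame(frame))   # -> (3, 1023)
```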


Fig. 1. General block diagram of system hardware

The data acquisition module is the core of the wireless data acquisition control system. It processes all the information obtained from the sensors and transmits it to the microcontroller, and the wireless communication module then uses the RFID-based data acquisition control technology to process, analyze and integrate this information. The data acquisition module is shown in Fig. 2. The data storage module is the core storage module of the wireless data acquisition system; it stores the information transmitted in real time between the user's command and the receiving end, which must collect the transmitted information in real time. Wireless data communication is widely used today, but it still has limitations, such as limited data storage space, slow storage rates and susceptibility to interference from the external environment; cost and power consumption also restrict its normal operation, so the development of wireless data communication has been limited to a large extent. The A/D conversion module is a widely used part of RF RFID technology and mainly consists of the data acquisition part and the wireless transmitter and receiver heads. After the signal is encoded and decoded, it is processed by the microcontroller and then transmitted to the data receiving end through the serial port. The RF transceiver module transmits data from one sender to several receivers through wireless communication; the module consists of the RF transceiver software and some peripheral chips. This design applies RFID technology and wireless communication principles to realize signal transmission: the data collected by the RFID system are transmitted to the monitoring platform, the data collection itself can be monitored, and the operation of the RFID system can be followed in real time through the display module, so as to regulate the control part.


Fig. 2. Data acquisition module

The FPGA control module is the core module of the wireless data acquisition system. It is controlled by the microcontroller, and its main function is to complete the transmission of the RF signal, the modem and the communication channel, passing the information received from the acquisition module to the data processing unit [9]. The RF signal in the wireless communication system is transmitted directly by the RFID chip to the radio transmitter; the modem converts it into a standard high-frequency square wave, which is sent to the microcontroller control circuit for modulation and, after decoding, to the upper-computer software to complete the display and alarm functions. In the data collection part, the data from multiple sensor nodes are sent through the wireless transceiver to the PC side, where the valid information is collected and stored. In this system, an FPGA is used to control the A/D and D/A converter modules, which clearly improves the speed and has more application value than control by the microcontroller alone. The internal structure of the FPGA control module is shown in Fig. 3.
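The data flow just described (sample, process, transmit, display at the host) can be summarized by the following host-side Python simulation. It is a sketch of the control flow only; the sensor reading and the radio send are stubbed out, and all names are invented rather than taken from the system's firmware.

```python
import random
import time

def read_adc() -> int:
    """Stub for the A/D conversion module: returns a 12-bit sample."""
    return random.randint(0, 4095)

def rf_send(packet: dict) -> None:
    """Stub for the RF transceiver module: here we just print the packet."""
    print("TX:", packet)

def acquisition_loop(node_id: int, cycles: int = 3, period_s: float = 0.5) -> None:
    """Collect, timestamp and forward samples, mimicking the MCU/FPGA control loop."""
    for _ in range(cycles):
        sample = read_adc()
        packet = {"node": node_id, "sample": sample, "t": time.time()}
        rf_send(packet)           # wireless transceiver forwards data to the upper computer
        time.sleep(period_s)      # sampling period set by the controller

acquisition_loop(node_id=1)
```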

3 Test Research on the RFID Data Acquisition Control System

During testing, the wireless-communication data acquisition system needs to be debugged. In the experimental environment the system is first divided into two modules. One is the transmission test, which is mainly used to measure whether the signals of the wireless transceiver and the network access end suffer interference and whether there is a reliable channel connection for the information transmission between the nodes. The other is the network test, which is mainly


Fig. 3. Internal structure of FPGA control module

used to measure whether there is interference in the signal transmission of the wireless transceiver, including the network access side, and whether there is interference in the signal of the network access side, as well as to measure the information transmission status of each node and the delay time generated during data transmission. Secondly, the performance parameters and indicators related to the wired receiver head module are tested at the user end to see if they meet the design requirements, including the stability of the communication protocol, the network delay time and the data rate received at the wireless transceiver [10]. Finally, the performance of the wireless communication data acquisition system is verified by software programming simulation, that is, the stability of data acquisition, the delay and the success rate of signal transmission, and the performance of the wireless communication transceiver in practical applications. Table 2. Equipment configuration Devices

Wireless NIC IP Address

Ethernet IP

Essid

Mode

Network Node A

10.0.0.15

192.168.0.231

Mesh

Ad-hoc

Network Node B

10.0.0.16

192.168.0.232

Mesh

Ad-hoc

Gateway

10.0.0.17

192.168.0.233

None


There are two main testing methods for the network part: single-hop network node testing and multi-hop network node testing. The single-hop test is a method based on network communication that can detect whether a single hop is stable, and it can also be used for wireless data transmission. The single-hop network node test model is shown in Fig. 4. Two network nodes, a client and a gateway are selected to form a simple network model whose coverage areas overlap; the configurations of the two nodes and the gateway are shown in Table 2.
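A minimal host-side way to exercise the single-hop link described here is to measure packet loss and round-trip time with ping between the addresses in Table 2. The script below is an illustrative sketch only (it assumes a Linux-style ping with the -c option) and is not part of the paper's test software.

```python
import subprocess

NODES = {"Node A": "10.0.0.15", "Node B": "10.0.0.16", "Gateway": "10.0.0.17"}

def ping(host: str, count: int = 10) -> str:
    """Send `count` ICMP echo requests and return ping's loss/RTT summary lines."""
    result = subprocess.run(["ping", "-c", str(count), host],
                            capture_output=True, text=True)
    # The last two lines of ping's output hold the packet-loss and RTT statistics.
    return "\n".join(result.stdout.strip().splitlines()[-2:])

for name, addr in NODES.items():
    print(f"--- {name} ({addr}) ---")
    print(ping(addr))
```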


Fig. 4. Single-hop network node test model

The purpose of the multi-hop network node test is to verify the stability of the application's data collection and to enable the multi-hop network to be monitored effectively. The multi-hop network node test model is shown in Fig. 5. Nodes A and B are far apart and their communication ranges do not cover each other, so they cannot communicate directly; nodes C and D are located between nodes A and B, and their transmission range covers both. The four nodes are configured according to Table 3. Table 3. Equipment configuration

Wireless NIC IP Address

Ethernet IP

Essid

Mode

Network Node A

10.0.0.15

192.168.0.231

Mesh

Ad-hoc

Network Node B

10.0.0.16

192.168.0.232

Mesh

Ad-hoc

Network Node C

10.0.0.25

192.168.0.241

Mesh

Ad-hoc

Network Node D

10.0.0.26

192.168.0.242

Mesh

Ad-hoc

174


Fig. 5. Multi-hop network node test model

4 Conclusion

With the increasing demand for networks, wireless communication has become an essential part of everyday life. RFID technology, as a new type of technology, is easy to implement, easily scalable and low in cost, and as a form of wireless communication it has developed rapidly in recent decades. The wireless data acquisition control system is based on RFID technology and wireless sensing technology and is mainly composed of the RFID part, the minimum microcontroller control system and the communication module. This paper introduces a wireless data acquisition control system using RFID technology, focusing on the design of the communication module, the control circuit and the driver unit. The hardware uses an STM32 microcontroller as the main control chip, with the wireless communication module as the receiving part for data transmission and reception; the STM32 chip processes the corresponding instructions, the collected signals are transmitted to the microcontroller, and the microcontroller sends signals after processing them according to the control requirements. The software implements each function of the system in the C language. The results show that wireless data acquisition technology can automatically acquire the various parameters in the control area and transmit them to the communication terminal through the corresponding module, realizing long-distance transmission of communication data and ensuring accurate reception of the signals.

References

1. Zhao, M., Chen, Y., Niu, H.: Design of intelligent warehouse management system based on RFID technology. In: Proceedings of 2019 2nd International Conference on Intelligent Systems Research and Mechatronics Engineering (ISRME 2019), no. 2, pp. 339–345. Francis Academic Press (2019)
2. Juránková, P., Švadlenka, L.: The reading of passive UHF tags by RFID technology using various combinations of antennas motorola AN480. In: Proceedings of 2015 2nd International Conference on Modelling, Identification and Control (MIC 2015), no. 5, pp. 52–56 (2015)
3. Yeole, Y.G., Lachhvani, L., Bajpai, M., et al.: Data acquisition and control system for SMARTEX – C. Fusion Eng. Des. (2), 817–818 (2016)
4. Fujishima, M.: Terahertz wireless communication using 300GHz CMOS transmitter. In: 2016 13th IEEE International Conference on Solid-State and Integrated Circuit Technology (ICSICT) Proceedings, no. 1, pp. 731–734 (2016)
5. Haas, H., Elmirghani, et al.: Optical wireless communication. Philos. Trans. Roy. Soc. A-Math. Phys. Eng. Sci. (3), 16–18 (2020)
6. Boruchinkin, A., Tolstaya, A., Zhgilev, A.: Cryptographic wireless communication device. Procedia Comput. Sci. 02, 110–111 (2018)
7. Saeidi, T., et al.: Ultra-wideband antennas for wireless communication applications. Int. J. Antennas Propag. (2), 18–25 (2019)
8. Serrano, J.R., García-Cuevas, L.M., Samala, V., et al.: Boosting the capabilities of gas stand data acquisition and control systems by using a digital twin based on a holistic turbocharger model. In: ASME 2021 Internal Combustion Engine Division Fall Technical Conference, no. 2, pp. 22–25 (2021)
9. Daniel, J., Ryan, K.: ABE-VIEW: android interface for wireless data acquisition and control. Sensors 18, 2647–2649 (2018)
10. Serrano, J., et al.: Boosting the capabilities of gas stand data acquisition and control systems by using a digital twin based on a holistic turbocharger model. In: ASME 2021 Internal Combustion Engine Division Fall Technical Conference, no. 8, pp. 22–24 (2021)

Modified K-means Algorithm in Computer Science (CS) Accurate Evaluation Platform

Jinri Wei, Yi Mo, and Caiyu Su(B)

Guangxi Vocational and Technical Institute of Industry, Nanning, Guangxi, China [email protected]

Abstract. As times progress, computer science (CS) assessment technology is also developing continuously. At this stage the accuracy of CS assessment is not high, and researchers need to apply an improved K-means algorithm to optimize the performance of CS evaluation platforms and raise their accuracy. In practical use, when the CS evaluation platform fails or the amount of data is too large, the results easily lose accuracy. This paper studies the improved K-means algorithm in a CS accurate assessment platform and explains the relevant knowledge and theory of accurate CS assessment. The data analysis shows that the improved K-means algorithm performs efficiently in the CS accurate evaluation platform. Keywords: Improved K-means algorithm · Computer science · Accurate evaluation · Platform research

1 Introduction

In Internet-based evaluation, the CS evaluation platform evaluates teaching as a whole [1]. When the amount of evaluation data is too large, the computer evaluation platform develops deviations, resulting in inaccurate evaluation results. This paper proposes an improved K-means algorithm to optimize the CS evaluation platform and improve its performance [2]. Research on the improved K-means algorithm in a CS accurate evaluation platform benefits the progress of accurate CS evaluation. Many scholars at home and abroad have studied improved K-means algorithms. Among foreign studies, El-Khatib S. A. studied the application of the ACO-K-means and GrabCut image segmentation algorithms to MRI image segmentation; the proposed medical image segmentation algorithm and its platform were implemented, and experimental findings show that the algorithm has better accuracy than GrabCut segmentation [3]. Kollem S. proposed an efficient total-variation denoising method based on partial differential equations and a possibilistic fuzzy C-means clustering algorithm for segmentation; these methods can provide more detailed information about medical MRI images than traditional methods [4]. Jawad T. M. proposed using the K-means clustering algorithm to organize sensor nodes into clusters; the K-means algorithm was used for clustering, and each cluster was treated


as a PEGASIS chain. The leader of the chain is selected according to the Euclidean distance from the sensor node to the base station and the remaining energy of the sensor node. Simulation results show that the algorithm improves on the original PEGASIS algorithm [5]. With the rapid development of information and Internet technology, the CS evaluation platform has been constantly optimized and has made breakthroughs at the technical level [6, 7]. The improved K-means algorithm upgrades the algorithm module of the evaluation platform from the inside, improves the evaluation accuracy of the platform, and raises the platform's practical value. Research on the improved K-means algorithm in a CS accurate evaluation platform is conducive to the improvement of accurate evaluation research.

2 Design and Exploration of the Improved K-means Algorithm in the CS Accurate Evaluation Platform

2.1 Improving the K-means Algorithm

The K-means clustering algorithm is an iterative cluster analysis algorithm, as shown in Fig. 1. The details are as follows:

Fig. 1. K-means clustering algorithm working process

First, the data are divided into K groups in advance, and K objects are selected as the initial cluster centers [8, 9]. Second, the distance between every object and each seed cluster center is calculated, and each object is assigned to the cluster center closest to it. Third, a cluster center and the objects assigned to it form a cluster; every time a sample is assigned, the cluster center is recomputed from the objects currently in the cluster. This process is repeated until some stopping criterion is satisfied. Fourth, a typical stopping criterion is that no (or only a minimal number of) objects are reassigned to different clusters, no (or only a minimal number of) cluster centers change any further, and the sum of squared errors reaches a local minimum.
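A minimal NumPy sketch of the four steps above is given below. It is illustrative only (random initialization, fixed iteration cap) and is not the platform's actual implementation.

```python
import numpy as np

def kmeans(data: np.ndarray, k: int, max_iter: int = 100, seed: int = 0):
    """Plain K-means: random initial centers, assign-to-nearest, recompute, repeat."""
    rng = np.random.default_rng(seed)
    centers = data[rng.choice(len(data), size=k, replace=False)]
    for _ in range(max_iter):
        # Step 2: assign every sample to the nearest cluster center.
        dists = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Step 3: recompute each center as the mean of its assigned samples.
        new_centers = np.array([data[labels == j].mean(axis=0) if np.any(labels == j)
                                else centers[j] for j in range(k)])
        # Step 4: stop when no center moves any more.
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return centers, labels

data = np.random.default_rng(1).normal(size=(60, 2))
centers, labels = kmeans(data, k=3)
print(centers)
```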


2.1.1 Improvement of the K-means Algorithm

The K-means++ algorithm adds two "+" symbols after the classical K-means algorithm; it is an improved algorithm that partially adjusts the classical algorithm so as to achieve better results [10, 11]. The first step of the improved algorithm is to select the K cluster centers and initialize them. Unlike the classical algorithm, it no longer selects K individuals from the whole data set at random; instead, it selects cluster centers that are as far away from the already chosen centers as possible. The basic idea is that, assuming n cluster centers have already been obtained and initialized, points far from the existing centers are favoured when the (n + 1)-th center is selected, that is, they are given a higher probability of being selected. The procedure is as follows. Step 1: randomly select a sample as the first cluster center C1. Step 2: calculate the shortest distance between each sample and the existing cluster centers (that is, the distance to the nearest cluster center), denoted D(x); the greater this value, the greater the probability of the sample being selected as the next cluster center, and the next center is finally selected by roulette-wheel selection. Step 3: repeat Step 2 until K cluster centers have been selected.
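The selection procedure in Steps 1–3 can be sketched as follows. This is a simplified illustration of the K-means++ seeding idea (distance-proportional roulette-wheel selection, here weighted by D(x) squared as is common), not the authors' code.

```python
import numpy as np

def kmeans_pp_init(data: np.ndarray, k: int, seed: int = 0) -> np.ndarray:
    """Pick k initial cluster centers, favouring points far from those already chosen."""
    rng = np.random.default_rng(seed)
    # Step 1: the first center C1 is chosen uniformly at random.
    centers = [data[rng.integers(len(data))]]
    for _ in range(1, k):
        # Step 2: D(x) = distance from each sample to its nearest chosen center.
        d = np.min(np.linalg.norm(data[:, None, :] - np.array(centers)[None, :, :],
                                  axis=2), axis=1)
        # Roulette-wheel selection with probability proportional to D(x)**2.
        probs = d ** 2 / np.sum(d ** 2)
        centers.append(data[rng.choice(len(data), p=probs)])
    return np.array(centers)

data = np.random.default_rng(1).normal(size=(60, 2))
print(kmeans_pp_init(data, k=3))
```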


2.2 Research on the Improved K-means Algorithm in the CS Accurate Evaluation Platform

CS assessment is a set of activities that make value judgments about the CS teaching process and its outcomes and provide services for CS decision-making; it is also a process that studies the value of CS for teachers' and students' learning [12, 13]. Its goal is to evaluate the effectiveness of CS teaching and ensure its quality, and CS evaluation checks how well CS teaching has been completed. The specific content can be divided into the following points. 1. Diagnostic evaluation. Diagnostic evaluation is carried out before the CS teaching activity to ensure that the teaching plan proceeds smoothly, so it is also called pre-evaluation. It mainly covers the quantity and quality of the knowledge students mastered in the previous stage, the students' physical and family economic conditions, and their learning methods, learning ability, attitude and interest in the subject. In general, teachers' diagnostic evaluation methods include emotional communication, practical investigation, daily observation, random interviews, previous performance records, intelligence tests and diagnostic tests. 2. Process assessment. Process evaluation, also known as formative evaluation, is carried out during CS teaching in order to continuously improve the teaching effect; its goal is to judge whether the earlier teaching work is up to standard. It lets CS teachers understand the effect of each teaching activity and the students' learning pace, so that the teaching process can be adjusted with timely feedback and the expected teaching effect achieved. The evaluation in CS teaching design is mainly process evaluation, and it is carried out frequently, generally in relation to the number of course chapters. 3. Summative assessment. Summative evaluation, compared with diagnostic evaluation, is also known as post-hoc evaluation [14, 15]. This type of assessment is usually scheduled after the CS lectures are complete, and its goal is to check whether the desired teaching objectives have been achieved. Compared with process evaluation it is used less often, usually only two or three times for a whole CS course; its main goal is to show whether students have achieved the learning objectives of CS teaching and how well they have mastered the CS content. These tests cover a wide range of topics, including almost all the basic knowledge of CS.

2.3 The Basic Flow of the K-means Clustering Algorithm

As a relatively classic clustering algorithm, K-means was first proposed by Hartigan, was recognized by the scientific community, and has been used ever since. K-means is an unsupervised clustering algorithm whose main function is to automatically classify all data according to set rules, so that the data within each class are highly similar and each class differs clearly from the others. The core steps of the K-means algorithm are as follows. First, the user imports the initial data to be classified into the system, and the algorithm randomly partitions the imported data and creates the initial cluster centers. After the initial cluster centers are created, the algorithm calculates the distance between each data item and the initial cluster centers and assigns each item to the nearest matching cluster center. When an input data item does not match any initial cluster center, the K-means algorithm automatically generates a new cluster center and matches the data again; if the matching fails, it returns to the distance calculation between the sample data and the cluster centers and waits to be matched again, and if the matching succeeds, the data item is imported into the cluster center to complete the data clustering. The flow chart of the K-means algorithm is shown in Fig. 2. Under normal circumstances, whether a clustering is appropriate is judged mainly by the following principles. First, the data required for clustering should be easy to distinguish and understand: the clusters obtained by the algorithm must be highly similar internally and clearly different from one another, so that the cluster centers can be distinguished. Second, the clustering should be valid in every cluster center: variables within a cluster should share the same characteristics, while clusters should differ clearly from each other, which means the same classification can be reproduced under different classification criteria; within the same cluster the distances between data items are small, while between different clusters they are relatively large.


Fig. 2. Flowchart of the K-means algorithm (user → sampled data → calculation and allocation → initial clustering center → new clustering center → function judgment → algorithm feedback)

Finally, there must be no fluctuation in the result of the data clustering; that is to say, in the end every data item can be assigned accurately to a cluster center, and no further computation has to be carried out in a backflow of the algorithm.
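One simple way to check the two principles just listed (high similarity inside a cluster, clear separation between clusters) is to compare average intra-cluster and inter-cluster distances, as in the sketch below. It assumes the labels come from some clustering run; it is an illustration, not the platform's own check.

```python
import numpy as np

def intra_inter_distances(data: np.ndarray, labels: np.ndarray):
    """Average distance to points of the same cluster vs. points of other clusters."""
    dists = np.linalg.norm(data[:, None, :] - data[None, :, :], axis=2)
    same = labels[:, None] == labels[None, :]
    np.fill_diagonal(same, False)                 # exclude the point itself
    intra = dists[same].mean()
    inter = dists[labels[:, None] != labels[None, :]].mean()
    return intra, inter

rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0, 0.3, (30, 2)), rng.normal(3, 0.3, (30, 2))])
labels = np.array([0] * 30 + [1] * 30)
intra, inter = intra_inter_distances(data, labels)
print(f"intra = {intra:.2f}, inter = {inter:.2f}")  # expect intra much smaller than inter
```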

3 Research Effect of the Improved K-means Algorithm in the CS Accurate Evaluation Platform

Data sets are selected at random from the samples D = {X1, X2, ..., Xn} of the CS assessment platform, and individuals are selected at random from them. Each sample individual can be written as Xi = {Xi1, Xi2, ..., Xim}, 1 ≤ i ≤ n, where m is the dimension of the sample individuals [16, 17]. The weight of each individual in the data sample is

$$w_{id} = \frac{\frac{1}{n} X_{id}}{\frac{1}{n}\sum_{i=1}^{n} X_{id}} \quad (1)$$

In the formula above, $X_{id}$ is the value of component $d$ of data item $i$, and $\frac{1}{n}\sum_{i=1}^{n} X_{id}$ is the mean of component $d$ over the sample. The density of sample point $i$ in the data set is

$$p(i) = \sum_{j=1}^{n} f\left[d_{wij} - \mathrm{MeanDist}(D)\right] \quad (2)$$

$$f(x) = \begin{cases} 1, & x < 0 \\ 0, & x \ge 0 \end{cases} \quad (3)$$

where $j$ is the index of a sample point, $d_{wij}$ is the Euclidean distance between the data individuals $X_i$ and $X_j$ in the $m$-dimensional sample space, and $\mathrm{MeanDist}(D)$ is the average sample distance of the set $D$. The average Silhouette index value is

$$K_{AVG\text{-}sil} = \frac{1}{n}\sum_{i=1}^{n} \frac{b_i - a_i}{\max\{b_i, a_i\}} \quad (4)$$

where $b_i$ is the minimum of the average distances between the $i$-th sample and the samples in each other class, and $a_i$ is the average distance between the $i$-th sample and all the other samples in its own class. The process of transforming data into a specific format and then placing the data in a certain space is called normalization, and its expression is

$$x'_{ip} = \frac{x_{ip} - x_p^{\min}}{x_p^{\max} - x_p^{\min}} \quad (5)$$

In the formula above, $x_{ip}$ is the $p$-th component of the data individual $X_i$, $x'_{ip}$ is the same component after the format conversion, and $x_p^{\min}$ and $x_p^{\max}$ are the minimum and maximum values of component $p$ over the sample.

3.1 Development Environment of the Accurate CS Assessment Platform

Choosing the development environment of the platform is very important to its design, because it determines the scale, performance, maintainability and ease of use of the platform to be developed. Therefore, before the platform is developed, the choice of development environment should be decided on the basis of the platform requirements analysis specification, for example whether it is economically and technically feasible. 1. Hardware environment. The choice of the website platform server should be based on the set of software required for website operation. If the server configuration is too low, the platform can basically run, but the website processes interaction data very slowly, and the platform performance declines significantly. Therefore, the website server should be configured generously so that it can handle concurrent page accesses at any time without crashing and keep the platform website running normally. Since the client mainly browses through a web browser and only occasionally exchanges data with the database server, the hardware configuration of the client can be relatively low, as long as a compatible computer operating platform is available.

2. Software environment. The software environment required for platform development should above all make it easy to manage the code and documents generated during development, especially version control. From the software point of view, this paper chooses the IIS server to provide the Web service for the course evaluation platform. This server, provided by Microsoft, is adaptable, efficient, secure, easy to manage and quick to start, and it can be used to integrate applications as well as real-time Web applications.
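As a concrete companion to formulas (1), (4) and (5) of Sect. 3, the sketch below computes the per-dimension weights, the min-max normalization and the average Silhouette value for a toy data set. It uses scikit-learn's silhouette_score purely as a convenience; the data and function names are illustrative, not the platform's code.

```python
import numpy as np
from sklearn.metrics import silhouette_score   # average Silhouette value of formula (4)

def dimension_weights(data: np.ndarray) -> np.ndarray:
    """Formula (1): w_id = X_id / sum_i X_id (the 1/n factors cancel)."""
    return data / data.sum(axis=0)

def min_max_normalise(data: np.ndarray) -> np.ndarray:
    """Formula (5): map every column to [0, 1] using its minimum and maximum."""
    mins, maxs = data.min(axis=0), data.max(axis=0)
    return (data - mins) / (maxs - mins)

rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0, 1, (25, 3)), rng.normal(5, 1, (25, 3))]) + 10.0
labels = np.array([0] * 25 + [1] * 25)

print(dimension_weights(data)[:2])
print(min_max_normalise(data)[:2])
print("average silhouette:", silhouette_score(data, labels))
```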

4 Investigation and Research Analysis of Improving K-means Equation in CS Accurate Evaluation Platform Datum sets The UCI has several datum sets, named datum sets Lisk, Wond, and Zris. Among this test, the classical K-means equation, NBC equation, LR equation, ID3 equation and the modified K-means equation in this text were utilized to perform datum tests. The number of tests was 50, and all final outcomes were averaged. All equations in the test course should follow the following principles: (1) Weight course is carried out on dimensions to solve Euclidean range, weaken the role of outliers and enhance the mutual differentiation of datum. (2) arbitrarily choose the aggregation core and extract the point with the maximum density. The next operation is based on the range of the previous aggregation and the weights of other points. The rule for all points is to meet the density peak value, but the weights of some noises are too small to be choosed as the aggregation core, thus preventing part optimization. (3) Follow the theoretical time complexity. The test veracity of the CS Precision Assessment platform is revealed in Table 1. Table 1. CS accurately evaluates platform veracity values Name

Table 1. Accuracy values of the CS accurate evaluation platform

Name                          Lisk   Wond   Zris
Improved K-means algorithm     9      13     11
NBC                            7      13      9
LR                             8      12     11
ID3                            7      12     11
K-means                        5       8      6

The table above shows the data of the CS evaluation platform for the five tested algorithms. From the accuracy values it is clear that the improved K-means algorithm has the highest accuracy, so it can be regarded as the most effective of the algorithms. Figure 3 presents the accuracy values of the CS evaluation platform graphically; the three data points of the improved K-means algorithm are the highest, which again indicates that this algorithm is the most effective.



Fig. 3. Accuracy values of the CS accurate evaluation platform

The data analysis shows that the improved K-means algorithm studied for the CS accurate evaluation platform performs very well with respect to the accuracy of CS accurate evaluation.
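For readers who want to experiment with the two test principles listed in Sect. 4 (weighted Euclidean distance and density-based selection of the initial cluster centers), the following Java sketch shows one plausible realization. It is only an illustration: the weight vector, the density radius and all identifiers are assumptions of this sketch, not the authors' implementation.

import java.util.Arrays;

/** Sketch of the test principles: weighted Euclidean distance and
 *  density-based selection of initial cluster centers for K-means. */
public class WeightedKMeansInit {

    /** Weighted Euclidean distance between two samples. */
    static double weightedDistance(double[] a, double[] b, double[] w) {
        double sum = 0.0;
        for (int p = 0; p < a.length; p++) {
            double d = a[p] - b[p];
            sum += w[p] * d * d;              // per-dimension weights weaken outlier dimensions
        }
        return Math.sqrt(sum);
    }

    /** Local density of sample i: number of samples within a cutoff radius. */
    static int density(double[][] x, int i, double[] w, double radius) {
        int count = 0;
        for (int j = 0; j < x.length; j++) {
            if (j != i && weightedDistance(x[i], x[j], w) < radius) count++;
        }
        return count;
    }

    /** Pick k initial centers: the densest point first, then dense points
     *  that are far away from the centers already chosen. */
    static double[][] pickCenters(double[][] x, double[] w, double radius, int k) {
        double[][] centers = new double[k][];
        boolean[] used = new boolean[x.length];
        for (int c = 0; c < k; c++) {
            int best = -1;
            double bestScore = -1.0;
            for (int i = 0; i < x.length; i++) {
                if (used[i]) continue;
                double minDist = Double.MAX_VALUE;     // distance to the nearest chosen center
                for (int m = 0; m < c; m++) {
                    minDist = Math.min(minDist, weightedDistance(x[i], centers[m], w));
                }
                double score = density(x, i, w, radius) * (c == 0 ? 1.0 : minDist);
                if (score > bestScore) { bestScore = score; best = i; }
            }
            centers[c] = Arrays.copyOf(x[best], x[best].length);
            used[best] = true;
        }
        return centers;
    }
}

The centers produced this way can then be handed to an ordinary K-means loop, which is how the text describes avoiding noise points being selected as cluster centers.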

5 Conclusions This paper mainly discusses the evaluation of ordinary computer science courses and proposes corresponding evaluation platforms from two angles. The test results from the school district show that the platforms can already be used in actual CS teaching and achieve the expected goals of the research, with both a Word version and a Web version of the evaluation platform realized. The improved K-means algorithm studied here performs well in the accurate evaluation of CS.


The Application of Decision Tree Algorithm in Psychological Assessment Data

Ping Li(B)

Yan'an Vocational and Technical College, Yan'an 650106, Shaanxi, China
[email protected]

Abstract. Nowadays, the mental health of students has attracted more and more attention from society; cases of students committing crimes or suicide because of psychological abnormalities frequently trigger heated public discussion. The purpose of this paper is to study the application of decision tree algorithms to psychological assessment data. After studying the decision tree algorithm and the construction of a psychological correlation analysis system for college students, data were collected through the student psychological test system, and after analysis the relationship between different latent variables and depression was verified. Depression was classified mainly with the C4.5 decision tree algorithm. Judged by the tree structure, the classification accuracy of the pruned C4.5 algorithm is about 8% higher than that of the unpruned C4.5 algorithm. Through the application of the decision tree algorithm, the analysis of psychological problem data has been significantly improved, which verifies the reliability of data mining applied to the evaluation system. Keywords: C4.5 Algorithm · Psychological Assessment · Assessment Data · Anxiety Mood

1 Introduction Campus life is an important period of rapid psychological development and maturity for students, and an important milestone in shaping students' mental health. During this period, school teachers and parents should focus on guiding and educating students to help them develop a healthy psychology [1, 2]. An unhealthy campus life has many negative effects on students' psychology and on their ability to properly integrate into future social life [3]. Today's college students feel burdened even when they discover that their psychology is abnormal, and they dare not confide in others, thinking that the existence of a mental illness will make others avoid them [4, 5]. Psychological assessment has become part of the history of psychology [6]. Test-based medicine, considered a symptom-based activity, is a special type of psychiatric assessment that is based on patient-reported strategies to monitor treatment progress in a therapeutic setting. Resnick S G presents new research on mindfulness and mindfulness-based measurements and discusses how knowledge in these different but related fields can work together to improve mental health: implementing measurement-based care in the general


population, using a practical approach to measurement-based care and developing a framework for psychological assessment to support measurement-based care activities [7]. Rosa H R reflects on the interface between psychoanalytic clinics and psychological assessment and demonstrates that scientific research data also contribute to the clinical understanding of people with emotional difficulties, by presenting quantitative research using the Human Figure Drawing Test (DFH) together with a brief introduction to case studies using the test; the studies presented discuss how the interface between psychological assessment and the psychoanalytic clinic supports understanding and action in psychotherapeutic work with children and families [8]. Therefore, studying the important factors behind depression in modern students is of great significance for further exploring effective strategies for depression prevention and intervention, and for helping students cope with the damage caused by psychological problems [9]. This paper studies the correlation between students' psychology and the assessment data, taking as the data object to be mined the admission psychological evaluation data of the 2020 cohort of a certain university. Psychological feature vectors are constructed from the information in the mental health assessment system; a decision tree algorithm is then used to find the relationships between the psychological states of college students, the decision tree is built and pruned, and the patterns behind students' problems are analyzed. Some of the most important measures are then provided.

2 Research on the Application of Decision Tree Algorithm in Psychological Assessment Data

2.1 Decision Tree Decision trees are a supervised learning method that can deeply analyze classification problems. A decision tree is built according to the probability of occurrence of the various situations; it is a graphical method that directly uses probabilistic analysis to evaluate the operability of project risk decision-making [10, 11]. The decision tree is a graphical method for intuitively applying probability analysis to determine the likelihood that the expected value of the net present value is greater than or equal to zero, assess the project risk, and determine the project's viability; it is based on the known probabilities of occurrence of the various situations. This type of decision diagram is referred to as a decision tree because it is drawn to resemble the branches of a tree. Decision trees are prediction models used in machine learning that show the mapping between object attributes and object values. Each leaf node in the tree gives the value of the object, each fork represents a possible attribute value, and the path from the root node to a leaf node represents one object. A decision tree has only one output; to obtain several outputs, a separate decision tree can be created for each of them. Data mining typically employs decision trees, which may be used both to examine data and to make predictions, so data from psychological assessments can be evaluated with them. Figure 1 depicts the procedure for applying the decision tree algorithm to assess data from psychological evaluations.



Fig. 1. Process diagram of psychological evaluation data analysis using decision tree algorithm

In Fig. 1, the process of using the decision tree algorithm to analyze psychological evaluation data is described. Multiple decision trees are used to analyze the psychological evaluation data, and by comparing the different decision results, psychological analysis results with evaluation value are formed. Pruning is one strategy to stop decision trees from branching; it falls into two categories, pre-pruning and post-pruning. Pre-pruning sets an index as the tree grows, and growth stops once the index is reached. This easily creates a "horizon limitation": once branching is halted at a node N, turning it into a leaf node, all chances for its succeeding nodes to perform "good" splits are eliminated. In a broad sense, these stopped branches mislead the learning process, causing the largest impurity reductions of the resulting tree to lie too close to the root node. To overcome the "horizon limitation", the tree must first be grown fully until the leaf nodes have the lowest impurity values. Then all neighbouring pairs of leaf nodes are examined for elimination: if removing them sufficiently reduces impurity, they are removed and their common parent node becomes a new leaf node. This "merging" of leaf nodes is the opposite of node branching. After such trimming, leaf nodes are often widely dispersed and the tree loses its balance. Post-pruning has the advantage of overcoming the "horizon limitation" effect and does not require keeping samples aside for cross-validation, so it can make full use of the information in the whole training set. However, the computational cost of post-pruning is much higher than that of pre-pruning, especially on large sample sets; for small samples, post-pruning is better than pre-pruning.


Using the ID3 algorithm and the C4.5 algorithm to generate decision trees for studying mental health problems is a common method. Decision tree algorithms are used to predict and analyze student mental health data. The general idea is to first analyze and calculate which feature is most relevant to the psychological problem, then rank the other features in the same way by iterative backtracking, forming a decision tree and creating a classification tree model for predictive analysis [12, 13]. The C4.5 algorithm usually replaces the information gain of a feature with its gain ratio and selects the feature with the maximum gain ratio as the splitting criterion. This creates decision tree branches, one for each value of the feature, with each branch representing a subdivision of the model [14, 15]. When a decision tree is constructed with the C4.5 algorithm, the splitting of a node is determined by the feature with the maximum information gain ratio; as the computation proceeds, the information gain gradually decreases, and the features with large gains are passed down to the descendants [16].

2.2 Construction of the Psychological Correlation Analysis System for College Students (1) Adaptability Adaptive psychological problems mainly refer to students' lack of adaptation to the new learning and living environment, which leads to psychological imbalance and causes many psychological problems [17]. (2) Learning type Students who are admitted to university generally have relatively good grades; on the one hand it is difficult to stand out from a group with good grades, and on the other hand their grades may drop for personal reasons, which easily causes psychological discomfort [18]. (3) Interpersonal relationship College life is no longer monotonous like middle school; in the process of interpersonal communication, dealing with a large number of classmates may put pressure on students and bring about psychological obstacles. (4) Love and emotional type Falling in love has become a very common thing on college campuses, and a large number of students are eager to have a good relationship. But love is a relatively complex issue, and for relatively naive college students, failing to handle the relationship with a partner well may bring them trouble and pain.


(5) High employment pressure When college students face the move into working positions in society, employment pressure also arrives, and when their expectations stand in sharp contrast with reality, psychological problems can easily arise.

3 Investigation and Research on the Application of Decision Tree Algorithm in Psychological Assessment Data

3.1 Data Sources The data source selected in this article is the data in the school's mental health evaluation system. The evaluation form "Symptom Self-Assessment Scale SCL-90" was used to evaluate the mental health of freshmen enrolled in the 2020 class of a college, covering five departments (English, Pharmacy, Computer Science, Public Affairs and Clinical Medicine) with a total of 2185 students, including 596 boys and 1589 girls. For the 2020 freshmen, the evaluation data of the "Self-rating Symptom Scale SCL-90" were selected as the research data, and the C4.5 algorithm was used to mine the relation between the potential depressive symptoms shown in the SCL-90 and basic personal information. The college's psychological management database system uses SQL Server 2008 for storage and management; the student's personal basic information table is imported synchronously from the educational administration system and mainly includes fields such as student number, ID number, name, gender, ethnicity, date of birth, place of origin, household registration, department, major, academic record and contact number. The student's personal psychological assessment form is produced by the psychological assessment system and reflects the student's tendency toward psychological problems through the relevant psychological dimensions.

3.2 C4.5 Algorithm Information entropy: a parameter used to measure the orderliness of the data in a data system. When a data system contains several types of data and no single type is dominant, the sample set is disordered and impure. The formula for information entropy is:

Ent(D) = −∑_{k=1}^{|y|} p_k log2 p_k    (1)

D is the data set, p_k is the proportion of samples of the k-th class, and |y| is the number of classes; the lower the information entropy, the purer the sample set. The minimum value is 0, which indicates that the sample set contains only one class. Information gain: the reduction in information entropy after the original data set is divided into sample subsets according to a specific attribute. The information gain is obtained as follows:

Gain(D, a) = Ent(D) − ∑_{v=1}^{V} (|D^v| / |D|) Ent(D^v)    (2)


Here a is the selected attribute, which splits D into V subsets D^v; the entropy of each subset is calculated and weighted by |D^v|/|D|, and the difference between the information entropy of the data set D and this weighted average is the information gain.
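To make Eqs. (1) and (2), and the gain-ratio criterion that C4.5 adds on top of them, concrete, the following Java sketch computes entropy, information gain and gain ratio from class counts. The method and variable names are illustrative only and are not taken from the paper.

/** Sketch of Eqs. (1)-(2) plus C4.5's gain ratio for one discrete attribute. */
public class GainRatio {

    /** Ent(D) = -sum_k p_k * log2(p_k), computed from the class counts of D. */
    static double entropy(int[] classCounts) {
        int total = 0;
        for (int c : classCounts) total += c;
        double ent = 0.0;
        for (int c : classCounts) {
            if (c == 0) continue;
            double p = (double) c / total;
            ent -= p * Math.log(p) / Math.log(2);
        }
        return ent;
    }

    /** Gain(D,a) = Ent(D) - sum_v |D^v|/|D| * Ent(D^v);
     *  subsetCounts[v] holds the class counts of the v-th subset D^v. */
    static double informationGain(int[] classCounts, int[][] subsetCounts) {
        int total = 0;
        for (int c : classCounts) total += c;
        double weighted = 0.0;
        for (int[] sub : subsetCounts) {
            int size = 0;
            for (int c : sub) size += c;
            weighted += (double) size / total * entropy(sub);
        }
        return entropy(classCounts) - weighted;
    }

    /** C4.5 divides the information gain by the split information of the attribute. */
    static double gainRatio(int[] classCounts, int[][] subsetCounts) {
        int total = 0;
        for (int c : classCounts) total += c;
        double splitInfo = 0.0;
        for (int[] sub : subsetCounts) {
            int size = 0;
            for (int c : sub) size += c;
            if (size == 0) continue;
            double p = (double) size / total;
            splitInfo -= p * Math.log(p) / Math.log(2);
        }
        return splitInfo == 0.0 ? 0.0 : informationGain(classCounts, subsetCounts) / splitInfo;
    }
}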

4 Analysis and Research on the Application of Decision Tree Algorithm in Psychological Assessment Data

4.1 Decision Tree Training Results The model creates a decision tree for depression based on the mental health data to predict student depression. After training, a tree structure is produced, and upper and lower confidence thresholds are set for the decision tree to perform post-pruning. All child nodes of the same parent node are checked: when the proportion of one leaf value among the samples of a subtree is greater than the confidence threshold, the subtree is deleted and replaced by a leaf of the class with the highest probability (a small sketch of this collapse rule is given just before Fig. 2). The pruned decision tree is shown in Fig. 2. The rules extracted from the depression decision tree training model are: (1) When the learning stress score is less than 10 and the social support score is less than 30, the result is depression. (2) When the learning stress score is > 10, the coping style score is > 40, the learning stress score lies between 8 and 10, and the economic stress score is < 5, the result is not depression. When learning and employment pressures are high and social support and coping styles are poor, most people experience depression and anxiety.

4.2 Algorithm Comparison This paper uses the initially generated decision tree to classify the test set, compares the existing categories with the predicted classification results, and uses Naive Bayes and neural networks for comparison. The accuracy is shown in Table 1: the accuracy of the unpruned decision tree is 80.1%, while comparing the classification of the test data set predicted by the pruned decision tree with the known categories gives an accuracy of 89.2%. The comparison shows that the classification accuracy of the unpruned decision tree model is lower than that of the pruned model. It can therefore be seen that using the C4.5 decision tree algorithm to construct the classification tree and pruning it based on the PEP algorithm yields results from mining the psychological assessment data that have reference value for psychological prevention and intervention (see Fig. 3). The training sample set is used by the classification algorithm to create the classification model (classifier), which is then used to categorize the samples in the data set. In order to determine which category a new sample falls under, the classification model learns the potential relationship between the attribute set and the training samples' categorization.
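The collapse rule described in Sect. 4.1 (replace a subtree by a leaf of the majority class once that class exceeds the confidence threshold) can be sketched as follows; the Node structure and the recursion are assumptions made for illustration and do not reproduce the paper's actual post-pruning code.

import java.util.ArrayList;
import java.util.List;

/** Minimal tree node: children, the class distribution of the samples
 *  reaching the node, and a label once the node has become a leaf. */
class Node {
    List<Node> children = new ArrayList<>();
    int[] classCounts;
    String leafLabel;

    boolean isLeaf() { return children.isEmpty(); }
}

public class ConfidencePruner {
    /** Post-order pruning: collapse a subtree when one class dominates
     *  its samples beyond the given confidence threshold. */
    static void prune(Node node, String[] classNames, double confidence) {
        for (Node child : node.children) prune(child, classNames, confidence);
        if (node.isLeaf()) return;

        int total = 0, best = 0, bestIdx = 0;
        for (int k = 0; k < node.classCounts.length; k++) {
            total += node.classCounts[k];
            if (node.classCounts[k] > best) { best = node.classCounts[k]; bestIdx = k; }
        }
        if (total > 0 && (double) best / total > confidence) {
            node.children.clear();                 // delete the subtree
            node.leafLabel = classNames[bestIdx];  // replace it with the majority-class leaf
        }
    }
}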


[Figure: pruned decision tree with internal nodes for learning stress, coping style, social support, employment pressure and economic pressure, and leaf nodes labelled depressed / non-depressed (some leaves with mixed proportions such as 4/5 non-depressed, 1/5 depressed)]

Fig. 2. Decision tree after C4.5 algorithm pruning

Table 1. Accuracy obtained by different classification methods

Classification                Training dataset   Test dataset
Naive Bayes classification    76.8%              77.7%
Neural networks               75.2%              76.1%
Unpruned decision tree        80.6%              80.1%
Pruned decision tree          88.7%              89.2%

A typical machine learning technique used in data mining is the decision tree algorithm. The decision tree algorithm is also quite intuitive and simple to comprehend. In general, it uses mathematical criteria to separate the data continuously in an iterative process. Because the decision at each step is based on all of the data, decision trees are a non-parametric supervised learning technique. Many more intricate algorithms are built on top of the decision tree algorithm, so other, more sophisticated algorithms are easier to understand once one is familiar with decision trees.


Fig. 3. Algorithm comparison

A decision tree is a tree-like map: at each node a feature dimension is chosen and a query is posed, and each edge represents a response to that node's query. The bottom leaves of the decision tree, that is, nodes with no further child nodes, carry the tree's final judgment or classification outcome. The decision tree's final output is nonlinear: it forms a nonlinear decision boundary made up of numerous piecewise functions. This paper uses the initially generated decision tree to classify the test set, compares the existing categories with the predicted classification results, and uses naive Bayesian and neural network classifiers for comparison. The classification time is shown in Table 2:

Table 2. Classification time of different classification methods

Serial number   Classifier                      Classification time (s)
1               Naive Bayesian classification   2.82
2               Neural network                  1.46
3               Unpruned decision tree          0.88
4               Pruned decision tree            0.92

Table 2 compares the classification times of the different classification methods: the unpruned decision tree is the fastest at 0.88 s, the


pruned decision tree takes 0.92 s, and naive Bayes is the slowest at 2.82 s. The decision tree algorithms therefore classify relatively quickly; the pruned decision tree needs a little extra time because of the pruning operation, but its overall classification time is still lower than that of the naive Bayesian and neural network classifiers.

5 Conclusions At present, most students have an insufficient understanding of mental illness, and some even treat it with neglect and indifference, which means that mentally abnormal students are not detected in time or treated effectively. This paper applies the C4.5 algorithm to the exploration of psychological correlation analysis for college students, which has certain practical significance. However, some problems in the application of the algorithm still need further study. For the psychological correlation analysis of college students, this paper uses the SCL-90 symptom self-rating scale to restrict the relevant factors; this simplification of a complex problem inevitably glosses over part of the psychological picture to a certain extent, and because many other factors are involved, a lot of deeply hidden information may be overlooked.

References 1. Editor. Who’s Running the World? Psychological Assessment of Political Leaders. Int. Bull. Polit. Psychol. 18(2), 2 (2018) 2. Yılmaz, T.: Victimology from clinical psychology perspective: psychological assessment of victims and professionals working with victims. Curr. Psychol. 40(4), 1592–1600 (2021). https://doi.org/10.1007/s12144-021-01433-z 3. Byrd, D.A., Mindt, M., Clark, U.S., et al.: Creating an antiracist psychology by addressing professional complicity in psychological assessment. Psychol. Assess 33(3), 279–285 (2021) 4. Goldenson, J., Josefowitz, N.: Remote forensic psychological assessment in civil cases: considerations for experts assessing harms from early life abuse. Psychol. Injury Law 14(2), 89–103 (2021). https://doi.org/10.1007/s12207-021-09404-2 5. Riley, J., Gleghorn, D., Doudican, B.C., Cha, Y.-H.: Psychological assessment of individuals with Mal de Débarquement Syndrome. J. Neurol. 269(4), 2149–2161 (2021). https://doi.org/ 10.1007/s00415-021-10767-4 6. Delcea, C., Dan, O.R., Matei, H.V., et al.: The evidence-based practice paradigm applied to judicial psychological assessment in the context of forensic medicine. Rom. J. Legal Med. 3(3), 257–262 (2020) 7. Resnick, S.G., Oehlert, M.E., Hoff, R.A., et al.: Measurement-based care and psychological assessment: using measurement to enhance psychological treatment. Psychol. Serv. 17(3), 233–237 (2020) 8. Rosa, H.R., De La Plata Cury Tardivo, L.S., Junior, A., et al.: Interfaces between psychological assessment and the psychoanalytic clinic. MudançasPsicologia da Saúde 28(1), 27–33 (2020) 9. Everhart, J.S., Harris, K., Chafitz, A., et al.: Psychological assessment tools utilized in sports injury treatment outcomes research: a review. J. Sports Sci. Med. 19(2), 408–419 (2020) 10. Vilario, M., Amado, B.G., Martin-Pea, J., et al.: Feigning mobbing in the LIPT-60: implications for forensic psychological assessment. Anuario de Psicología Jurídica 30(1), 83–90 (2020)


11. Ong, C.H., Ragen, E.S., Aishworiya, R.: Ensuring continuity of pediatric psychological assessment services during the COVID-19 pandemic. Eur. J. Psychol. Assess. 36(4), 516–524 (2020) 12. Barbosa, E., Peres, V.: The discursive practice as a methodological resource for psychological assessment. Revista Avaliação Psicológica 19(2), 198–204 (2020) 13. Elzakzouk, A., Elgendy, S., Afifi, W., et al.: Psychological assessment in children with chronic kidney disease on regular hemodialysis. GEGET 15(2), 48–59 (2020) 14. Ma Yers, I., Charland-Verville, V., Roover, A.D., et al.: Pregastroplasty psychological assessment at the CHU of Liege using the BIPASS. Rev. Med. Liege 75(11), 738–741 (2020) 15. Burdeus-Domingo, N., Brisson, A., Leanza, Y.: Interpreter mediated psychological assessment: a three-step practice (before-during-after). Santé mentale au Québec 45(2), 61–78 (2020) 16. Millerc, M.R., Hsbb, B., Hottonc, D.M.: A systematic review of the use of psychological assessment tools in congenital upper limb anomaly management. J. Hand Ther. 33(1), 2–12 (2020) 17. Thilges, S.R., Bolton, C., Mumby, P.B.: Pretransplant psychological assessment for stem cell treatment. J. Health Serv. Psychol. 44(3), 117–124 (2018). https://doi.org/10.1007/BF0354 4671 18. Riddle, M.P.: Psychological assessment of gestational carrier candidates: current approaches, challenges, and future considerations. Fertil. Steril. 113(5), 897–902 (2020)

DT Algorithm in Mechanical Equipment Fault Diagnosis System

Zijian Zhang1(B), Jianmin Shen2, Zhongjie Lv1, Junhui Chai1, Bo Xu2, Xiaolong Zhang1, and Xiaodong Dong1

1 Ningbo Labor Safety and Technology Services Co., Ltd., Ningbo 315048, Zhejiang, China
[email protected]
2 Ningbo Special Equipment Inspection and Research Institute, Ningbo 315048, Zhejiang, China

Abstract. With the rapid development of China's economy, all kinds of machinery and equipment in the industrial field are developing in the direction of high concentration and refinement. The precise cooperation between a variety of mechanical equipment makes the entire mechanical system run safely and smoothly, so the importance of the safe operation of each piece of equipment is self-evident. The purpose of this paper is to study the application of the decision tree algorithm (DTA) in mechanical equipment fault diagnosis (FD) systems. The analysis principle and construction process of the DTA are introduced, and on this basis an optimization of the DTA model is proposed. Tested on the Weka machine learning platform and compared with the traditional ID3 decision tree (DT) construction algorithm, the DT structure constructed by the algorithm in this paper is simpler, which improves the generalization ability of the DT and also gives it a certain ability to suppress noise. When β = 0.58, the classification accuracy of the algorithm in this paper is above 90%. Using the improved DTA proposed in this paper, a mechanical equipment FD system is constructed, and the historical data of a motor are analyzed with the DTA. Keywords: DT Algorithm · Mechanical Equipment · Fault Diagnosis · Diagnostic System

1 Introduction With the rapid development of science and technology in the world today and the continuous advance of industrial modernization, the level of productivity has been greatly improved, and the proportion of high-efficiency machinery in enterprise production keeps increasing. The traditional employee structure with manual labor at its core has gradually transformed into a technology- and skill-based employee structure [1, 2]. Especially in recent years, rapidly upgraded machinery and equipment have increasingly tended toward large-scale automation, and their complexity has also increased. This requires researchers to carry out in-depth research on the various situations arising in the use of equipment, including equipment maintenance and FD, while innovating the technology, that is, effectively improving the relevant parameters and quality of mechanical equipment [3, 4].


Therefore, researchers at home and abroad attach great importance to research on FD technology for mechanical equipment and widely apply various FD technologies in daily production [5]. Olakunle O R calculates the reliability and risk levels of mechanical equipment to determine the criticality of the equipment; the risk level is calculated as the product of the probability of equipment failure and its failure consequences for the entire water treatment line, and a comprehensive maintenance plan for the mechanical equipment is then obtained from the system analysis. Studies show that mechanical failures in water treatment plants range from 5% to 49%, and the highest criticality index, for intermediate service pumps, is 81 [6]. To sum up, in today's big-data and information age, highly reliable intelligent FD methods have extremely high practical value [7], and an FD scheme based on a machine learning algorithm can serve as a feasible way to improve the intelligence level of mechanical equipment FD [8, 9]. This paper focuses on the DTA: by building a DT and comparing and analyzing the attribute selection of the commonly used ID3 algorithm, it provides a reference for the optimization of the subsequent algorithm; three aspects of the algorithm are improved, and after verification the optimized DTA performs better than the traditional ID3. Finally, data mining (DM) technology is used in the mechanical equipment diagnosis system; the knowledge of the system is expressed with if-then rules, and the combination of knowledge expression and acquisition is realized through the optimized DTA. On this basis, simpler and clearer reasoning is realized, which can effectively improve the FD performance for mechanical equipment. Since the rules of the algorithm can be generated automatically, it can greatly reduce the working time and improve the FD efficiency of mechanical equipment.

2 Research on Application of DTA in Mechanical Equipment FD System

2.1 FD of Mechanical Equipment There are three basic states of mechanical equipment, namely the normal state, the abnormal state and the fault state. The normal state means that the mechanical equipment has no defects, or that any defects are within the allowable range when performing the specified actions. The abnormal state means that defects have begun to appear or have expanded to a certain extent, the equipment status signals (such as vibration, speed and temperature) change accordingly, and the equipment's performance gradually deteriorates, although it can still continue to work. The fault state means that the performance indices of the equipment have fallen below the minimum limits of normal requirements and the equipment cannot operate normally [10]. According to the nature of the failure: (1) Temporary failure With this type of failure the system loses some functions for a short period of time under certain conditions, but it can continue to operate normally simply by adjusting or debugging the operating parameters, without replacing equipment components. This type of failure is intermittent [11].


(2) Permanent failure This kind of failure is generally due to damage of some parts of the equipment, which need to be repaired or replaced to eliminate the failure; such failures can be divided into those in which all functions are completely lost and those in which functions are only partially lost.

2.2 DM The process of DM is usually complicated. In engineering, its core is to process and analyze data. DM for a given problem is usually carried out according to a classic process model, which not only achieves macroscopic control of the overall analysis but also integrates various engineering methods [12, 13]. A reasonable DM process model can integrate the relatively independent steps of knowledge mining and can also anticipate the various issues arising in practical problems [14].

2.3 DT Method (1) DT ID3 algorithm The basic idea of ID3 is to use a greedy strategy to train on the sample set from top to bottom, test each attribute at each node, select the conditional attribute with the highest information gain for the node, and continue splitting on its values until a stopping state is reached, thereby creating a DT. The idea of growing the tree is to find an evaluation function f(S, C) for the training sample set S and classification C, use f(S, C) to select the attribute that contributes most to the classification as the root node and, for each of its values, create a branch under the root node, thus dividing S into different subsets; the tree is then created recursively until a stopping condition is satisfied. To select the best branching features overall, ID3 uses information gain as the measure for selecting branch attributes [15, 16]. (2) DT pruning DTs created in the tree-building phase rely on the training samples, so there may be an overfitting problem, i.e., the DT may fit the training samples well but have low prediction accuracy on new data. To avoid this, the DT must be pruned to remove unnecessary branches. There are two ways to prune DTs: pre-pruning, which prunes during DT construction, and post-pruning, which prunes after the DT has been built [17, 18].

2.4 Intelligent Diagnosis Method of Mechanical Fault (1) FD method based on artificial neural network The human brain is composed of a vast number of interconnected neurons, and because of this interconnection humans can carry out advanced and fast pattern-recognition computations. The artificial neural network developed from research on the human brain; it is a highly parallel distributed system that can use its nonlinear processing ability to identify the environment and the target, and in terms of pattern recognition it is efficient for FD. It has been widely used in the FD of mechanical equipment, such as the FD of control system components, actuators and sensors.


The process of FD based on neural network FD system includes two stages: learning and matching, and each stage includes data preprocessing and feature extraction. The learning sample is trained by the network, and then the diagnosis sample is made. Learning ability is one of the most important functions of the neural network. In the field of FD, feedforward multilayer neural network, also known as BP neural network, is used most. Although FD based on artificial neural network FD system has certain advantages, it also has certain disadvantages, including ignoring the experience and knowledge of domain experts, difficult to obtain training samples, and difficult to understand the expression of network weights. (2) FD method based on fuzzy theory Modern mechanical equipment is becoming more and more complex. According to fuzzy theory, the higher the complexity of mechanical equipment, the stronger the fuzziness of its system. This forces us to deal with a large amount of fuzzy information inevitably when carrying out the condition monitoring and FD of mechanical equipment. Therefore, it is very valuable to apply the fuzzy theory to the FD research of machines and equipment. The fuzzy diagnosis method is based on fuzzy mathematics, and uses the membership function of symptom vector and fuzzy relation matrix to obtain the membership degree of fault cause. The membership degree of fault causes reflects the multiplicity of fault causes and the primary and secondary relationship between them, thus reducing the trouble brought by those uncertain factors to diagnosis. In addition, it can deal with the uncertainty and fuzziness of the system by imitating human thinking, and use the degree of uncertainty to describe variables. Its biggest feature is that its fuzzy rule base can be constructed directly by using expert knowledge, and can make full use of and effectively deal with expert language knowledge and experience. There are many diagnostic methods based on fuzzy theory, among which the methods based on fuzzy clustering, fuzzy logic, fuzzy model and fuzzy comprehensive evaluation are the most widely used. Fuzzy diagnosis technology can also be applied in combination with other methods, such as fuzzy fault tree, fuzzy neural network, fuzzy expert system, fuzzy wavelet analysis, etc. However, it is not that the more various methods are combined, the more perfect the system will be. It is necessary to make reasonable choices according to the actual needs, and strive to make the system simple and applicable on the premise of meeting the needs. Although intelligent FD methods based on fault tree, expert system, artificial neural network and fuzzy theory have their own characteristics, they also have their own limitations. Therefore, the mechanical equipment FD system is constantly introducing new machine learning and artificial intelligence concepts, such as introducing new theories or combining multiple artificial intelligence methods. At present, the methods of combining different intelligent technologies mainly include rule-based expert system and neural network, information fusion and neural network, artificial immune, neural network and expert system. With the arrival of the new era, it is believed that the future intelligent diagnosis methods will develop more and more perfect.


3 Investigation and Research on the Application of DTA in Mechanical Equipment FD System

3.1 Optimization of DTA Model In order to determine the best number of iterations, a trial-and-error method can be used during operation: multiple independent tests and model verification determine the best number. To measure the distance to the nearest neighbour, the Mahalanobis distance or the Euclidean metric can be used. Assume that the sample categories are denoted Cj (j = 1, 2); the posterior probability of a sample s (s ∈ Cj) is written P(Cj|s), the loss incurred when a sample that belongs to Cj is judged to belong to Ci is L(Ci|Cj) (i ≠ j; i, j = 1, 2), and the misjudgment probability of a sample s is written P(e|s). Given the two posterior probabilities P(Cj|s), the probability that s is assigned to Cj (j = 1, 2) is Q(Cj|s). The decision risk associated with the loss L(Ci|Cj) is:

R(Ci|s) = E[L(Ci|Cj)] = ∑_{j=1}^{2} L(Ci|Cj) Q(Cj|s)   (i ≠ j; i, j = 1, 2)    (1)

The probability of misjudgment is:

R(e|s) = P(C1|s)Q(C2|s) + P(C2|s)Q(C1|s)    (2)

It can be seen from the above analysis that when h* is greater than a small number, Q(Cj|s) will be relatively small; when h* > √(h − 1) > 2, the threshold nearest-neighbour method achieves a higher accuracy rate than the nearest-neighbour iteration method after the threshold is added.

3.2 Experimental Setup First, the algorithm is implemented in the Java language and runs on the Weka platform. Weka (the Waikato Environment for Knowledge Analysis) is Java-based software that integrates machine learning and DM. The ARFF file dedicated to Weka, which contains the data sets to be tested, is imported; these data sets have been processed according to the ARFF specification.
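Since the experiments are run from Java on the Weka platform, a minimal test harness along the lines described might look as follows. The file name, the 0.25 confidence factor and the use of Weka's J48 (its C4.5-style learner) as the baseline are assumptions; the improved algorithm of this paper is not shown here.

import java.util.Random;

import weka.classifiers.Evaluation;
import weka.classifiers.trees.J48;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

/** Minimal Weka harness: load an ARFF data set and evaluate a pruned tree. */
public class WekaExperiment {
    public static void main(String[] args) throws Exception {
        Instances data = DataSource.read("car.arff");   // ARFF file prepared as described
        data.setClassIndex(data.numAttributes() - 1);   // last attribute is the class

        J48 tree = new J48();                           // Weka's C4.5 implementation
        tree.setConfidenceFactor(0.25f);                // pruning confidence (assumed value)

        Evaluation eval = new Evaluation(data);
        eval.crossValidateModel(tree, data, 10, new Random(1));
        System.out.printf("Accuracy: %.2f%%%n", eval.pctCorrect());
    }
}

With data sets such as Car or Nursery converted to ARFF, this harness yields the kind of accuracy figures compared in Sect. 4.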

4 Analysis and Research on the Application of DTA in Mechanical Equipment FD System

4.1 Optimization Results The experimental statistical results are shown in Table 1 and Fig. 1. It can be seen from the experimental results that as β decreases (0.5 ≤ β < 1), the scale of the constructed DT gradually decreases, while at the same time the classification accuracy of the DT improves to a certain extent.


Table 1. Analysis of experimental results

Data set      ID3 algorithm   Algorithm (β = 0.78)   Algorithm (β = 0.58)
Tic-tac-Toe   68%             88%                    95%
Nursery       80%             90%                    90%
Car           72%             86%                    91%
Kr-vs-sp      77%             91%                    93%


Fig. 1. Comparison results between the algorithm in this paper and the ID3

4.2 Training Results of Diagnostic Model The number of weak classifiers can be determined comprehensively according to the calculation time and the diagnostic accuracy. The results are given in Table 2.

Table 2. Comparison of the accuracy of the different models

Model training data type         Accuracy of weak classifier algorithm   Accuracy of DTA
Ratio coefficient method         87.2                                    94.2
Correlation coefficient method   88.9                                    96.1
Unit vector method               89.6                                    98.6



Fig. 2. Diagnostic accuracy of each diagnostic model

It can be seen from Fig. 2 that after the improvement of the DTA, the accuracies obtained with the ratio coefficient method and the unit vector method are higher than those of the other models.

4.3 Application of DT in Mechanical Equipment FD System Based on the demand analysis of the major modules of the system, this paper designs the system's functional modes and interfaces so that the system has the corresponding functions. Users and experts are the main participants in the system. Users interact with the different modules, including interaction with the system's experts: they input questions and, after reasoning, receive conclusions. Experts interact with the system modules to input and organize knowledge and to set parameters. This study applies the previously optimized DTA to the FD of the power system; with the help of the superior performance of the algorithm, rapid diagnosis of faults can be realized and faults can be found and solved in time, thereby reducing the losses the power system suffers due to faults. The design of the system classes mainly includes forward reasoning, backward reasoning, backward comprehensive reasoning, a reasoning class, a DT class, an interpreter class, system setting and knowledge editing classes, and the human-computer interaction interface. The DT class mainly includes the data set, the number of rules, the generated rules, the number of (intermediate) nodes, an execution function and a rule-generation function. Inference keywords include a name (string) and a number (int),


and the inference process is implemented through keyword description, keyword logic, and suggested results. This paper simulates some fault data according to the actual measured original data, and uses the proposed DTA to classify and construct the DT together with other original characteristic data, and finally obtains the system diagnosis rules. Taking the motor as an example, the historical data and simulation data are analyzed through the DTA, and the DT is obtained as shown in Fig. 3.
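Before turning to the diagnosis tree of Fig. 3, the class design described above could be captured by skeletons such as the following; the field and method names are inferred from the description and are illustrative rather than the actual system code.

import java.util.ArrayList;
import java.util.List;

/** Skeleton of the DT class described above: data set, rules, node count,
 *  an execution entry point and a rule-generation entry point. */
class DiagnosisTree {
    List<double[]> dataSet = new ArrayList<>();   // historical and simulated samples
    List<String> rules = new ArrayList<>();       // generated if-then diagnosis rules
    int nodeCount;                                // number of intermediate nodes

    String execute(double[] sample) {
        // walk the tree / rules and return a diagnosis label for the sample
        return "no trouble";                      // placeholder result
    }

    void generateRules() {
        // build the tree from dataSet and translate each root-to-leaf path into a rule
    }
}

/** Inference keyword: a name plus a numeric code, as stated in the text. */
class InferenceKeyword {
    String name;
    int number;

    InferenceKeyword(String name, int number) {
        this.name = name;
        this.number = number;
    }
}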

[Fig. 3 (partial): decision tree for motor fault diagnosis, with nodes on vibration features such as the RMS value and the 1X and 3X components, and leaves such as "no trouble" and "other faults"]

(2) Equality constraint transformation In order to make the economic dispatching model of the power system more consistent with the actual situation, some restrictions must be imposed on it. Therefore, this paper combines the traditional PSO algorithm with an improved particle swarm optimization algorithm to propose a new economic dispatching method for the power system. The method improves the convergence speed by introducing an inertia weight and effectively avoids falling into local optima. Specifically, when the economic dispatching problem of the power system is a nonlinear function, the value range of the inertia weight should be kept within (0, 1); when the problem is a multi-variable or multi-objective function, the value of the inertia weight should be determined according to the actual situation. The equality constraint is transformed as shown in Formula (7):

f = F + σ · |∑_{i=1}^{Ng} P_Gi − P_GD − P_GL|    (7)
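Read literally, Formula (7) can be evaluated as in the sketch below, where the fuel-cost term F is assumed to be computed elsewhere (for example from the units' cost curves) and sigma is the penalty factor.

/** Sketch of Formula (7): penalised fitness for the power-balance constraint. */
public class PenalisedFitness {
    /**
     * @param fuelCost F, the total generation cost of the current schedule
     * @param pg       outputs P_Gi of the Ng units
     * @param demand   total load P_GD
     * @param losses   network losses P_GL
     * @param sigma    penalty factor
     */
    static double evaluate(double fuelCost, double[] pg,
                           double demand, double losses, double sigma) {
        double sum = 0.0;
        for (double p : pg) sum += p;
        double imbalance = Math.abs(sum - demand - losses);   // |sum P_Gi - P_GD - P_GL|
        return fuelCost + sigma * imbalance;                  // f = F + sigma * imbalance
    }
}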

3.3 Improved Particle Swarm Algorithm (1) Construction of inertia weights Because of the complexity of the internal structure of the system, its dynamic update is a process of constant change and development. In actual operation, the system state fluctuates or changes abruptly to some extent under the influence of various external factors; if such perturbations are not analyzed in depth and the system adjusted accordingly, the system enters a vicious competition state. For a particle swarm algorithm with small, fixed inertia weights, the global search ability is weak and it is difficult to reach the optimum in practical situations. In PSO algorithms, a good value of the inertia weight is required to improve the global search ability, and researchers have subsequently proposed many inertia-weight improvement strategies, such as the linearly decreasing weight strategy, the fuzzy inertia weight strategy and the random inertia weight strategy, but the results are still not satisfactory [12]. In this paper, building on research into the PSO algorithm, an improved PSO algorithm is proposed to control the power economic dispatch system effectively, and a new inertia-weight calculation is also proposed, as shown in Eqs. (8) and (9); the evolution formulas of the new particle swarm are then given in Eqs. (10) and (11).

Z_{i,j}(t) = [c_1 r_1 (P_{i,j}(t) − x_{i,j}(t)) + c_2 r_2 (P_{g,j} − x_{i,j}(t))] / v_{i,j}(t)    (8)

w_{i,j}(t) = (w_max − ((w_max − w_min)/it_max) · t) · 1/(1 + e^(−kk·Z_{i,j}))    (9)

v_{i,j}(t + 1) = w_{i,j}(t) v_{i,j}(t) + c_1 r_1 (P_{i,j}(t) − x_{i,j}(t)) + c_2 r_2 (P_{g,j} − x_{i,j}(t))    (10)


x_{i,j}(t + 1) = x_{i,j}(t) + v_{i,j}(t + 1)    (11)
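A literal transcription of Eqs. (8)-(11) for one dimension of one particle is sketched below. The acceleration coefficients, the inertia-weight bounds and the steepness constant kk are tuning values assumed here because the text does not give them, and a small guard is added so that Eq. (8) does not divide by a zero velocity.

import java.util.Random;

/** Sketch of the adaptive-inertia update of Eqs. (8)-(11) for dimension j of particle i. */
public class AdaptiveInertiaUpdate {
    static final double C1 = 2.0, C2 = 2.0;            // acceleration coefficients (assumed)
    static final double W_MAX = 0.9, W_MIN = 0.4;       // inertia-weight bounds (assumed)
    static final double KK = 1.0;                       // steepness constant of Eq. (9) (assumed)
    static final Random RND = new Random();

    /** One update step; returns {new position, new velocity}. */
    static double[] step(double x, double v, double pBest, double gBest, int t, int itMax) {
        double r1 = RND.nextDouble(), r2 = RND.nextDouble();
        double vSafe = (v == 0.0) ? 1e-12 : v;           // guard against division by zero

        // Eq. (8): pull of the two attractors relative to the current velocity
        double z = (C1 * r1 * (pBest - x) + C2 * r2 * (gBest - x)) / vSafe;

        // Eq. (9): linearly decreasing weight rescaled through a sigmoid of z
        double w = (W_MAX - (W_MAX - W_MIN) / itMax * t) * (1.0 / (1.0 + Math.exp(-KK * z)));

        // Eqs. (10)-(11): velocity and position update
        double vNew = w * v + C1 * r1 * (pBest - x) + C2 * r2 * (gBest - x);
        double xNew = x + vNew;
        return new double[] { xNew, vNew };
    }
}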

(2) Applying chaotic operations to the particle swarm algorithm The chaotic algorithm is a stochastic optimization method that performs well on complex system problems; it is based on fuzzy theory, and a mathematical model is established for it. The purpose of adding a chaotic operation to the particle swarm algorithm is to introduce chaotic particles that improve the dynamic behaviour of the system in the early stage of the algorithm, so that the dynamic behaviour of the system becomes clearer during operation, which in turn improves the global optimization performance and the global search capability of the algorithm. Concretely, when the chaotic operation is combined with the PSO algorithm, chaotic parameters can be added when selecting the initial values, making the dynamic behaviour of the system clearer and thus improving the algorithm's performance (a sketch of one such chaotic initialization is given at the end of this subsection). (3) Applying the mutation operator of genetic algorithms to the particle swarm algorithm The genetic algorithm is a stochastic search method based on natural selection and population evolution; its population size is small, but its optimization performance varies for different types of problem. The mutation operator of the genetic algorithm plays an important role in the optimization process, mainly addressing the crossover between individuals and populations. Introducing the mutation operator of the genetic algorithm into the particle swarm algorithm improves the algorithm's ability to generate particle diversity; through continuous adaptation the method maintains a stable level of performance, and its strong robustness to variation helps to improve the efficiency and dynamic response speed of the system. The specific improvement is to specify a number of iterations: when the iteration count is below this number, the particles are uniformly mutated and some individuals are replaced; when it exceeds this number, the population is kept unchanged. The structure of the improved particle swarm optimization algorithm is shown in Fig. 3: multiple populations are initialized, the best particles of the different populations are selected and compared, and the optimal value is output after repeated iterations.
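The text does not specify which chaotic map is used when the initial values are selected; a common choice in chaotic PSO variants is the logistic map, as in the following sketch (both the map and the scaling to the search range are assumptions).

/** Sketch of chaos-based particle initialisation using the logistic map. */
public class ChaoticInit {
    /** Fill a position vector inside [lower, upper] with the logistic map x' = 4x(1 - x). */
    static double[] init(int dims, double lower, double upper, double seed) {
        double[] pos = new double[dims];
        double c = seed;                      // seed in (0,1), avoiding 0, 0.25, 0.5, 0.75, 1
        for (int d = 0; d < dims; d++) {
            c = 4.0 * c * (1.0 - c);          // logistic map iteration
            pos[d] = lower + c * (upper - lower);
        }
        return pos;
    }
}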


[Figure: flowchart: randomly initialize multiple populations; establish the fitness function; select the best and worst particles; exchange the best particles between populations; iterate repeatedly and output the optimal value]

Fig. 3. Structure of improved particle swarm optimization algorithm

4 Simulation Experiment In order to verify that the improved PSO algorithm can achieve good results in the economic dispatch of power systems, this paper investigates a specific example and tests the algorithm on a power system containing 15 thermal power units, including the unit parameters, ramping constraints, energy-consumption characteristic equation parameters and valve-point effect parameters. In this way the effectiveness of the algorithm can be checked, so that improved economy as well as reliability is achieved when the dispatch is optimized. In the economic dispatching problem studied, the decision vector is first set as shown in the formula, and the objective vector is y = F (total cost). The penalized objective function is obtained by adding a penalty term to the objective, and the optimal solution is obtained by iteratively updating the optimized particles with the


given penalty parameters. After repeated experiments and comparisons, the optimization results are shown in Table 1.

Table 1. Optimization results

Unit         PSO         EPSO
PG1          134.01      134.85
PG2          286.74      282.82
PG3          291.44      275.32
PG4          255.68      300.00
PG5          230.18      230.01
PG6          294.58      295.27
PG7          221.71      220.05
PG8          328.24      359.75
PG9          308.03      454.67
PG10         469.90      321.39
PG11         371.32      320.23
PG12         368.87      304.76
PG13         499.99      494.98
PG14         442.56      475.48
PG15         496.73      500.00
Total cost   69 933.97   69 485.13

Through the above analysis, it can be seen that the improved particle swarm algorithm has achieved some success in optimizing the power economic dispatch system. In this paper, the main purpose of the improved particle swarm algorithm is to make the economic efficiency of the system more stable by replacing the inertia weights in the system with dynamic adaptive inertia weights. By applying chaotic operations to the particle swarm algorithm, the introduction of chaotic operations in the economic dispatching of power systems can effectively reduce the impact of randomness and uncertainty on the optimization results. In order to ensure the accuracy of the experiment, this paper also tested the algorithm on a power system containing five thermal power units. The experimental results are shown in Table 2. It can be seen from the results in Table 2 that the improved particle swarm optimization algorithm proposed in this paper can optimize the economic dispatch of electric power in the power system containing five thermal power units.


Table 2. Optimization results of 5 thermal power units

Unit         PSO        EPSO
PG1          126.01     124.09
PG2          224.87     220.86
PG3          292.98     289.76
PG4          255.45     300.32
PG5          243.23     236.34
Total cost   1142.54    1171.37

5 Conclusion In this paper, an improved PSO algorithm is used to analyze the economic dispatching problem of the power system. The improvements to PSO mainly include the construction of adaptive inertia weights, the application of a chaotic operation to the particle swarm algorithm, and the application of the mutation operator of the genetic algorithm to the particle swarm algorithm. These methods not only overcome the local convergence and premature convergence of the traditional PSO method and, to a certain extent, improve its convergence speed and solution efficiency, but also keep the particles searching in or near the feasible solution region, which effectively improves the accuracy and speed of the algorithm. The simulation experiments show that the improved PSO algorithm achieves better results in the economic dispatching of the power system and can effectively improve the efficiency of economic dispatch. In the future, the development of power system economic dispatching will mainly aim at optimizing the cost of power grid operation and improving the quality of the power supply, and the improved particle swarm algorithm, as an efficient, safe and stable method, will replace the traditional particle swarm algorithm for a long time to come.

References 1. Mohammadian, M., Lorestani, A., Ardehali, M.M.: Optimization of single and multi-areas economic dispatch problems based on evolutionary particle swarm optimization algorithm. Energy 161, 710–724 (2018) 2. Abdullah, M.N., Tawai, R., Yousof, M.: Comparison of constraints handling methods for economic load dispatch problem using particle swarm optimization algorithm. Int. J. Adv. Sci. Eng. Inf. Technol. 7(4), 1322 (2017) 3. Abbas, G., Gu, J., Farooq, U., et al.: Solution of an economic dispatch problem through particle swarm optimization: a detailed survey - part I. IEEE Access 5(1), 15105–15141 (2017) 4. Ranganathan, S., Rajkumar, S.: Self-adaptive firefly-algorithm-based unified power flow controller placement with single objectives. Complexity 2021(1), 1–14 (2021) 5. Vijayakumar, T., Vinothkanna, R.: Efficient energy load distribution model using modified particle swarm optimization algorithm. J. Artif. Intell. Capsule Netw. 2(4), 226–231 (2021)


6. Sabeti, M., Boostani, R., Davoodi, B.: Improved particle swarm optimisation to estimate bone age. IET Image Process., 179–187 (2018) 7. Srivastava, A., Das, D.K.: A new aggrandized class topper optimization algorithm to solve economic load dispatch problem in a power system. IEEE Trans. Cybernet. (99), 1–11 (2020) 8. Fayyaz, S., Sattar, M.K., Waseem, M., et al.: Solution of combined economic emission dispatch problem using improved and chaotic population based polar bear optimization algorithm. IEEE Access (99), 1–1 (2021) 9. Hamza, Y., Etinkaya, N.: An improved particle swarm optimization algorithm using eagle strategy for power loss minimization. Math. Probl. Eng. 2017, 401–403:550–556 (2017) 10. Dileep, G., Singh, S.N.: An improved particle swarm optimization based maximum power point tracking algorithm for PV system operating under partial shading conditions. Solar Energy 158, 1006–1015 (2017) 11. Ekinci, S., Demiroren, A., Hekimoglu, B.: Parameter optimization of power system stabilizers via kidney-inspired algorithm. Trans. Inst. Meas. Control. 41(5), 1405–1417 (2019) 12. Dong, H., Sun, J., Li, T., Li, L.: An improved Niching binary PSO for feature selection. In: 2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC), no. 5, pp. 3571-3577 (2018)

Commodity Design Structure Matrix Sorting Algorithm Based on Virtual Reality Technology Tengjiao Liu1(B) and Apeksha Davoodi2 1 Xi’an Fanyi University, Xi’an, Shaanxi, China

[email protected] 2 Islamic Azad University, Parand, Iran

Abstract. The commodity design structure matrix is a very mature tool in commodity design. Scholars make full use of VR technology to refine the commodity design structure and the matrix sorting algorithm at the same time, so as to design a set of commodity design structure schemes with strict logic and scientific rules. Taking a commodity design algorithm model as an example, the model comprehensively considers commodity performance, cost, and delivery date, and establishes the corresponding commodity design structure matrix sorting algorithm model. In practical application cases, engineers constantly adjust to customer needs and improve the adaptability of the commodity design structure so as to achieve Pareto optimization. This paper studies the commodity design structure matrix sorting algorithm based on VR technology and introduces the corresponding research. The test results show that VR technology can effectively promote the optimization of commodity design structure matrix ordering. Keywords: Virtual Reality Technology · Commodity Design Structure Matrix · Sorting Algorithm · Algorithm Research

1 Introduction The commodity design structure matrix (DSM) for coupling scenarios is a widely used commodity structure model building tool. This tool can integrate multiple algorithmic models, identify model structures, and integrate smooth, rigorous 3D view modules to help staff deal with commodity structure design problems more professionally. The internal modules can be set up to meet reasonable and scientific interface specifications, so as to efficiently build commodities with a variety of performance characteristics. Module setup requires multiple commodity structure components and, at the same time, must satisfy the coupling between commodity components. The research on the commodity design structure matrix sorting algorithm based on VR technology greatly improves the efficiency of matrix sorting. Many scholars at home and abroad have studied VR technology. In foreign studies, Akdere proposed to explore the effectiveness of VR technology as an innovative learning approach for cultivating cross-cultural abilities.
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 Z. Xu et al. (Eds.): CSIA 2023, LNDECT 173, pp. 227–236, 2023. https://doi.org/10.1007/978-3-031-31775-0_24
The study


was based on data from STEM undergraduates in a first-year technical program at a large state school in the Midwest (n = 101). Online questionnaires were used to measure universal-diversity dimensions, ambiguity tolerance, cross-cultural sensitivity and cultural knowledge, and test data were collected before and after the intervention [1]. Pletz observed that immersive VR is now available on a large scale, but few organizations in German-speaking countries seem to have been actively using the technology on a large scale in teaching and training; hence, little is known about its acceptance [2]. Atsız proposes that tourism destinations and tourism enterprises, for instance accommodation, catering and museums, can also use VR technology to curb the transmission of the novel coronavirus. In addition, COVID-19 can be viewed as an opportunity for industries and destinations to market their commodities and services. Hence, this technology will be very useful for the post-COVID-19 tourism recovery [3]. The commodity structure ordering matrix is one of the major methods to realize mass customization, which achieves the organic combination of customer individuation and the low cost of mass production [4, 5]. However, in the course of commodity configuration, the number of feasible configuration schemes is often very large, so it is indispensable to refine the commodity configuration in order to obtain the best commodity structure sequencing scheme that meets customer needs [6, 7]. The research on the ordering algorithm of the commodity design structure matrix based on VR technology is beneficial to the progress of ordering.

2 Design and Exploration of the Commodity Design Structure Matrix Sorting Algorithm Based on VR Technology

2.1 VR Technology

VR technology is essentially realized through computer simulation of a virtual environment so as to give people a sense of immersion in that environment. Virtual reality is a brand-new, useful technology created in the 20th century, also referred to as virtual environment or spiritual reality technology. Computer, electronic information, and simulation technology are all parts of virtual reality. In order to create a realistic virtual world with three-dimensional visual, tactile, olfactory, and other sensory experiences with the aid of computers and other equipment, it primarily uses computer technology, utilizing and synthesizing the most recent development achievements of various high technologies such as three-dimensional graphics technology, multimedia technology, simulation technology, display technology, and servo technology. Virtual reality technology is becoming more and more in demand across all sectors of society as a result of the ongoing advancements in science, technology, and social productive forces [8, 9]. Virtual reality technology has also made great progress and gradually become a new scientific and technological field. As its name suggests, the so-called virtual reality combines the virtual and the real worlds. Virtual reality technology is, theoretically, a computer simulation system that


enables the creation and experiencing of a virtual world. It creates a simulated world using computers so that users can fully immerse themselves in it. The use of real-world data and electronic signals produced by computer technology, in conjunction with a variety of output devices, to create phenomena that people may physically experience is known as virtual reality technology. These phenomena are portrayed by three-dimensional models and can take the form of actual physical things or invisible substances. More and more people are becoming aware of virtual reality technology, in which users can have the most genuine emotional experiences. People report feeling immersed since it is hard to tell the simulated world from the real one. Virtual reality also engages all of the human senses, including hearing, vision, touch, taste, and smell. Finally, it has a super simulation system that fully realizes human-computer interaction, allowing users to operate at their discretion and receive accurate feedback from their surroundings while doing so. Virtual reality technology is favored by many individuals because of its immersion, multi-perception, interactivity, and other features. The model of commodity design using virtual reality technology is shown in Fig. 1.

Fig. 1. Model diagram of commodity design using virtual reality technology

As shown in Fig. 1, using virtual reality devices to design goods in virtual space can effectively improve human-computer interaction and the accuracy of product structure design. This section mainly introduces the commodity design structure matrix sorting algorithm based on VR technology. First, the rationality of selecting the commodity design structure matrix is analyzed from the perspective of the VR transmission characteristics of the network transmission scenario of VR applications, and the feasibility and effectiveness of the commodity design structure matrix in VR transmission are further demonstrated, as shown in Fig. 2.


Fig. 2. Three characteristics of the VR commodity design structure matrix sorting algorithm: rationality analysis, feasibility analysis, and effectiveness analysis

(1) Rationality analysis. The underlying physical technologies used by WiFi and 5G to improve transmission speed and bandwidth are the same, but there are still many differences between carrier-grade wireless systems and unlicensed wireless systems, which are reflected in cost, base-station deployment, and the level of management control offered to network operators [10, 11]. Hence, 5G networks are more suitable for open, mobile and dense device links, while WiFi is more suitable for networks and enterprises with high privacy requirements. Future network development tends to use 5G communication technology outdoors.
(2) Feasibility analysis. In order to shorten the network transmission time and reduce the delay of VR video applications, this paper chooses to study a WiFi deployment scenario based on multiple APs running multiple WiFi network interfaces on non-overlapping channels [12, 13]. The choice of such a deployment is first based on an intuitive consideration: when multiple network interfaces are available to a user, that user has better access to the wireless channel than with a single interface. Multipath TCP (MPTCP) is a standard protocol of the Internet Engineering Task Force. MPTCP enables a single TCP connection to use multiple interfaces on the client or server at the same time, and it is a backward-compatible extension of TCP. MPTCP presents the same interface to the application layer as a regular TCP connection, but actually uses joint congestion control across subflows to multiplex traffic over parallel subflows, allowing traffic to increase and decrease on each subflow as needed because the network characteristics of these subflows change over time.
(3) Effectiveness analysis. An edge server (ES) with sufficient computing capacity is used to connect the wireless link with a wired link, 5G or other high-performance links. As the first node of the wireless segment of the network, the ES is connected with multiple APs through dedicated control links, and the data required for ES control can be obtained [14, 15]. The ES is the key point of the VR transmission system in this project, and the location where most tasks, for instance task allocation and data scheduling, are performed. This paper assumes that all data will be sent to the ES on time and correctly over the wireless link, and only the data transmission in the wireless


link segment is considered, because the wireless transmission channel not only suffers from limited bandwidth, but its quality also changes dynamically with time and location. These random, time-varying transmission characteristics bring many problems for VR data transmission and make the wireless link segment the main bottleneck of the entire transmission process, so the goal of this paper is to improve the transmission effect and the transmission efficiency of VR data over the wireless link.

2.2 Research on the Ordering Algorithm of the Commodity Design Structure Matrix Based on VR Technology

As a comprehensive system technology, virtual product development integrates computer graphics, intelligent technology, concurrent engineering, virtual reality technology and multimedia technology, and is composed of multidisciplinary knowledge. It is based on computer simulation and modeling and achieves the digital mapping of product development in a computer environment. Based on a digital model of all activities and product evolution across the whole product development process, it forecasts and assesses the behavior of product development. By using virtual reality technology, it is possible to create a virtual environment for product development that is highly realistic and has seamless sensory integration with people. Here, virtual product development is defined as simulating the functionality of future products or the state of manufacturing and maintenance systems prior to the physical realization of the product design or of the manufacturing and maintenance systems, in order to make real-time optimization decisions and decisions about future problems.


Fig. 3. The general process of the pattern recognition system for English intelligent translation

(1) Clustering criteria for DSM sorting, as shown in Fig. 3. During the construction of the commodity structure matrix, customers adjust the coupling of the construction so that it is more concentrated toward the diagonal of the DSM [16, 17]. This paper uses a clustering algorithm, a formulaic model that handles the problem from the angle of the construction sequence. (2) Coding. The solution of the commodity structure uses a two-dimensional array for data processing. Encoded rows can be represented as modules that aggregate concepts. Sorting is


the sequential arrangement of the modules within the commodity structure, and the serial numbering of in-line components is the arrangement of the internal components. (3) Random solution generation operator. During initialization, generating a solution is a very important step; virtual technology is used to initialize the solution while new solutions are obtained at random. The random solution generation algorithm generally goes through two steps: first, the order of the modules is randomly shuffled; second, the components within each module are sorted in a randomly shuffled order. A minimal sketch of this operator is given below.
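The following Python sketch illustrates, under stated assumptions, the two-step random solution generation operator described above. It is not the authors' code: the function name random_solution and the module/component layout are hypothetical examples.

import random

def random_solution(modules: dict) -> list:
    """Return a randomly generated DSM ordering as a list of (module, [components])."""
    names = list(modules)
    random.shuffle(names)                 # step 1: randomly shuffle the module order
    solution = []
    for name in names:
        comps = modules[name][:]
        random.shuffle(comps)             # step 2: randomly shuffle components within the module
        solution.append((name, comps))
    return solution

# Hypothetical product structure: three modules with their internal components.
product = {"M1": ["c1", "c2", "c3"], "M2": ["c4", "c5"], "M3": ["c6", "c7", "c8"]}
print(random_solution(product))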

3 Research on the Effect of the Commodity Design Structure Matrix Sorting Algorithm Based on VR Technology

The capacity MR of the scene distribution object is:

MR = M1 + max(M2, M3) = { M1 + M2 ≤ M;  M1 + M3 ≤ M }        (1)

where M1, M2, M3 represent the scenario capacity deployed in scenarios 1, 2 and 3, and M represents the maximum capacity of the VR capture device. The motion track of the occluded user Hi is calculated and compared with the tracked motion track k2 of Hi:

errik = (pik − qik)^2        (2)

where pik is the captured position and qik the corresponding tracked position at time k. In this process, k2 always stays on the visible part of Hi. In this paper, VR technology is used to establish the sorting algorithm of the commodity design structure matrix by solving for the capacity of objects in VR and the error of the objects' motion paths. The optimal state of the commodity design structure is acquired by jointly optimizing MR and errik to determine the algorithm model of commodity design structure matrix ordering.

3.1 Multi-objective Configuration Optimization Model of Unit Commodities

First, the commodity-performance-driven configuration optimization model. Commodity performance includes commodity function and quality: function is the ability to achieve a specified task, while quality is the extent of that function, covering capability, safety, time continuity, environment and other aspects. Second, the configuration optimization model driven by commodity cost.
(1) Construct the cost matrix of the module instances:


C = (c111, c112, · · ·, cijk)^T        (3)

where cijk is the cost of the k-th instance of the j-th module series of the i-th functional module. Third, the configuration optimization model driven by commodity delivery. In this paper, the Pareto-optimal set of the multi-constraint, delivery-driven optimization problem is solved using a fast non-dominated sorting mechanism of O(MN2) complexity (where M is the number of optimization objectives and N is the population size), an elite-preservation strategy, and a crowding-distance calculation method that requires no external parameters. This algorithm is fast, robust, and produces a well-dispersed solution set, and it has been successfully applied to many engineering optimization design problems.
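As an illustration of the O(MN2) mechanism mentioned above, the sketch below implements fast non-dominated sorting in Python, assuming every objective is minimized. The function names and the candidate objective values are illustrative placeholders, not the paper's model or data.

from typing import List

def dominates(a: List[float], b: List[float]) -> bool:
    """True if a is no worse than b in all objectives and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def fast_non_dominated_sort(objs: List[List[float]]) -> List[List[int]]:
    """Return indices of solutions grouped into Pareto fronts (front 0 is non-dominated)."""
    n = len(objs)
    S = [[] for _ in range(n)]        # S[p]: solutions dominated by p
    dom_count = [0] * n               # number of solutions that dominate p
    fronts = [[]]
    for p in range(n):
        for q in range(n):
            if p == q:
                continue
            if dominates(objs[p], objs[q]):
                S[p].append(q)
            elif dominates(objs[q], objs[p]):
                dom_count[p] += 1
        if dom_count[p] == 0:
            fronts[0].append(p)
    i = 0
    while fronts[i]:
        nxt = []
        for p in fronts[i]:
            for q in S[p]:
                dom_count[q] -= 1
                if dom_count[q] == 0:
                    nxt.append(q)
        i += 1
        fronts.append(nxt)
    return fronts[:-1]

# Example: three objectives (performance loss, cost, delivery time) for five hypothetical configurations.
candidates = [[0.2, 10, 5], [0.3, 8, 6], [0.1, 12, 7], [0.25, 9, 5], [0.4, 15, 9]]
print(fast_non_dominated_sort(candidates))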

4 Investigation and Analysis of the Commodity Design Structure Matrix Sorting Algorithm Based on VR Technology

The system is packaged as an Android app (called Ubimaze). It runs on phones running Android OS. The phone must have a touch screen, accelerometer and gyroscope sensors, and WiFi. The system also includes cardboard VR glasses and a tracking device. The cardboard viewer is designed to display stereoscopic images and consists of a cardboard box and a smartphone. A Kinect is used as the tracking device, and the tracking technique in Sect. 3 is used to capture the user's position and movement. The improved Kinect is mainly composed of four parts: the Kinect, a mini computer, a WiFi module and a mobile power supply. Thanks to the mobile power support, the improved Kinect becomes a passive device that can be used anywhere without power constraints. The operating system used in this experiment is Apple iOS, the software is Matlab R2010b, and the development programming language is Java.

Table 1. Test run parameters on all data sets (rows: evaluation indices; columns: evaluation methods)

Evaluation index    PSNR      Obj_VQ    Sub_VQ
PCC                 0.5072    0.7647    0.9812
SRCC                0.5002    0.7405    0.9923
RMSE                87.123    44.301    1.080
MAE                 86.863    44.803    1.192

Table 1 lists the results of the traditional video quality evaluation index PSNR, the subjective VR user experience evaluation index Sub_VQ, and the objective VR user experience evaluation index Obj_VQ on the VR video data set used in this paper. It


can be seen from the results in the above table that, because the Sub_VQ algorithm is a quantitative expression obtained directly by Gaussian feature fitting on the same data set, it has the most outstanding performance in the test results, with the values of PCC and SRCC both exceeding 0.9.
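For reference, the following Python sketch shows one common way to compute the four evaluation metrics used above (PCC, SRCC, RMSE, MAE) between predicted quality scores and subjective ratings. The sample arrays are made-up placeholders, not data from this paper.

import numpy as np
from scipy.stats import pearsonr, spearmanr

def evaluate(pred: np.ndarray, subj: np.ndarray) -> dict:
    pcc, _ = pearsonr(pred, subj)          # linear correlation
    srcc, _ = spearmanr(pred, subj)        # rank-order correlation
    rmse = float(np.sqrt(np.mean((pred - subj) ** 2)))
    mae = float(np.mean(np.abs(pred - subj)))
    return {"PCC": pcc, "SRCC": srcc, "RMSE": rmse, "MAE": mae}

pred = np.array([62.0, 71.5, 80.2, 55.3, 90.1])   # hypothetical predicted quality scores
subj = np.array([60.0, 75.0, 78.0, 58.0, 88.0])   # hypothetical subjective ratings
print(evaluate(pred, subj))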


Fig. 4. Comparison of intelligent recognition effect of English translation

In Fig. 4, Sub_VQ and Obj_VQ represent the VR technology adopted in this paper, while PSNR represents the traditional technique. The numerical comparison of SRCC and PCC is shown in the figure: both Sub_VQ and Obj_VQ are higher than PSNR, indicating that the VR technology performs excellently. The comparison of the intelligent recognition effects of PSNR, Obj_VQ and Sub_VQ for Chinese translation is shown in Table 2, which shows that the virtual reality technology adopted in this paper is better than the PSNR algorithm. A new generation of computer-aided design that uses virtual reality technologies is called virtual product design. It is a three-dimensional computer-aided design environment, based on multimedia and interactive immersion, that supports detailed design and variant design through three-dimensional operations and language commands. By using efficient product design techniques and tools before the real product is processed, virtual product design establishes the function and structure information model of the product. At the same time, it simulates the product's structure, function, and performance as a design guide and a preliminary assessment of user needs.


Table 2. Comparison of intelligent recognition effects of Chinese translation (rows: evaluation indices; columns: evaluation methods)

Evaluation index    PSNR      Obj_VQ    Sub_VQ
PCC                 0.5274    0.7839    0.9724
SRCC                0.5104    0.7615    0.9889
RMSE                89.256    46.225    1.367
MAE                 88.123    45.289    1.289

The data tests show that the research on the commodity design structure matrix sorting algorithm based on VR technology effectively promotes the progress of commodity design structure matrix sorting.

5 Conclusions As a major tool for commodity structure design, the DSM is very powerful and can clearly show the coupling relationships between modules and components, providing more direct commodity information to assist designers in commodity creation. With the modules predetermined, the DSM arranges the structural components of the commodity sequentially and processes the module internals sequentially. Ordering follows the coupling strength: the stronger the coupling, the more the modules tend to be placed together. The research on the ordering algorithm of the commodity design structure matrix based on VR technology is beneficial to the progress of commodity design structure matrix ordering.

References
1. Akdere, M., Acheson-Clair, K., Jiang, Y.: An examination of the effectiveness of virtual reality technology for intercultural competence development. Int. J. Intercult. Relat. 82(1), 109–120 (2021)
2. Pletz, C.: Which factors promote and inhibit the technology acceptance of immersive virtual reality technology in teaching-learning contexts? Results of an expert survey. Int. J. Emerg. Technol. Learn. 16(13), 248 (2021)
3. Atsız, O.: Virtual reality technology and physical distancing: a review on limiting human interaction in tourism. J. Multidiscipl. Acad. Tour. 6(1), 27–35 (2021)
4. Appel, L., Peisachovich, E., Sinclair, D.: CVRRICULUM program: benefits and challenges of embedding virtual reality as an educational medium in undergraduate curricula. Int. J. Innov. Educ. Res. 9(3), 219–236 (2021)
5. Katona, J.: A review of human–computer interaction and virtual reality research fields in cognitive infocommunications. Appl. Sci. 11(6), 2646 (2021)


6. Takami, A., Taguchi, J., Makino, M.: Changes in cerebral blood flow during forward and backward walking with speed misperception generated by virtual reality. J. Phys. Ther. Sci. 33(8), 565–569 (2021)
7. Thomas, S.: Investigating interactive marketing technologies – adoption of augmented/virtual reality in the Indian context. Int. J. Bus. Compet. Growth 7(3), 214–230 (2021)
8. Covaciu, F., Pisla, A., Iordan, A.E.: Development of a virtual reality simulator for an intelligent robotic system used in ankle rehabilitation. Sensors 21(4), 1537 (2021)
9. Rauf, F., Hassan, A.A., Adnan, Z.: Virtual reality exergames in rehabilitation program for cerebral palsy children. Int. J. Comput. Appl. 183(19), 46–51 (2021)
10. Mohring, K., Brendel, N.: Producing virtual reality (VR) field trips – a concept for a sense-based and mindful geographic education. Geogr. Helvet. 76(3), 369–380 (2021)
11. Aylward, K., Dahlman, J., Nordby, K., et al.: Using operational scenarios in a virtual reality enhanced design process. Educ. Sci. 11(8), 448 (2021)
12. Gumonan, K., Fabregas, A.: ASIAVR: Asian studies virtual reality game, a learning tool. Int. J. Comput. Sci. Res. 5(1), 475–488 (2021)
13. Cherni, H., Nicolas, S., Métayer, N.: Using a virtual reality treadmill as a locomotion technique in a navigation task: impact on user experience – case of the KatWalk. Int. J. Virtual Real. 21(1), 1–14 (2021)
14. Loureiro, G.B., Ferreira, J., Messerschmidt, P.: Design structure network (DSN): a method to make explicit the product design specification process for mass customization. Res. Eng. Design 31(2), 197–220 (2020)
15. Abad, H.R.M.N., Mahmoodi, A., Pazooki, F.: A new approach to analyzing detail change propagation in redesigning of drone camera stabilizer. Engineering 13(12), 740 (2021)
16. Olabanji, O.M., Mpofu, K.: Fusing multi-attribute decision models for decision making to achieve optimal product design. Found. Comput. Decis. Sci. 45(4), 305–337 (2020)
17. Brisco, N., Serge, D.Y.: Mechanical system topology optimization for better maintenance. Stud. Eng. Technol. 8(1), 1–1 (2020)

Machine Vision Communication System Based on Computer Intelligent Algorithm Yuanyuan Duan(B) Nanchang Vocational University, Nanchang, Jiangxi, China [email protected]

Abstract. Machine vision communication is an important application direction of artificial intelligence, and its research value is self-evident. Technologies such as image detection and image segmentation have gradually matured, and image restoration technology has also made some progress. The purpose of this paper is to study the design of a machine vision communication system based on a computer intelligent algorithm. Combined with the current design and development trend of robot vision system software, the functional requirements and performance requirements of the vision system software are analyzed so as to choose a suitable development platform. The overall software framework is designed and a software development scheme is formulated based on the performance of the development platform and processor. Finally, a development environment is built on the development platform. The total variational model image inpainting algorithm is used to repair the pixels of the image. The running results show that the inpainting result of the algorithm is ideal, and the running time for 1000 iterations is 5.6 ms, which is reasonable and feasible. Keywords: Intelligent Algorithm · Visual Communication · System Design · Machine Vision

1 Introduction The current society is an information society, and the transmission of information has become a phenomenon that occurs all the time. In essence, information is intangible and must be received through a certain carrier to produce specific sensory stimulation in people. The transmission mode of different carriers determines the effect of information transmission [1, 2]. In this context, the research and implementation of image processing are becoming more and more important. Image inpainting, as the most critical part of machine vision communication, provides a foundation through technologies such as computer intelligent algorithms [3, 4]. Machine vision system design is an important part of communication activities, and its value needs to be discovered. Mohamed developed a real-time machine vision prototype for sorting and detecting quality parameters of different agricultural products. The built prototype is used for image acquisition and processing. A simple thresholding method (min-max method) was developed in Python by using
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 Z. Xu et al. (Eds.): CSIA 2023, LNDECT 173, pp. 237–246, 2023. https://doi.org/10.1007/978-3-031-31775-0_25


color value data for all relevant defects. Three defects (green, black spots, and scars) in oranges, two defects in potato tubers (green and black spots), and two defects in peanuts (broken pods and black spots) were detected using the developed color-difference-based system [5]. High-speed industrial machine vision (MV) applications, such as steel surface inspection, require multiple high-resolution cameras to operate simultaneously. Subramanyam proposes a time synchronization algorithm and framework with bidirectional communication using timestamps and estimated average path delays. Unicast transmission forms the basis of the synchronization framework, minimizing network utilization and thereby ensuring that the necessary bandwidth is available for image transmission. Experimental results show that the proposed method outperforms existing methods with synchronization accuracy at the microsecond level [6]. Therefore, it is feasible to study the machine vision communication system based on computer intelligent algorithms [7]. This paper conducts a detailed study on the overall design of the machine vision communication system. The research method mainly combines the theory and practice of the binocular camera. It is hoped that the relevant theoretical knowledge, together with the application of computer intelligent algorithms in the design of machine vision communication systems, can be used to solve the design problems currently encountered; the functional framework design of the machine vision communication system is then analyzed and discussed, with the aim of providing a complete design scheme for the machine vision communication system based on computer intelligent algorithms and remedying the deficiencies of the current system.

2 Research on the Design of Machine Vision Communication System Based on Computer Intelligent Algorithm 2.1 Binocular Camera The transmitting end of the RGB-D camera emits a beam of light to the detected object, and the receiving end recovers the returned structured light pattern, and calculates the distance between the object and the camera according to the principle of structured light. This transmit-receive measurement mode cannot measure transmissive objects, with high cost and high power consumption, so the scope of application is limited [8, 9]. Lidar has the characteristics of high ranging accuracy, wide ranging range, and fast frequency. It is often used in outdoor ranging and mapping. However, compared with image sensors, its sensing ability is insufficient, and it is often necessary to fuse laser point clouds and image information. Lidar is expensive, which limits its implementation in specific projects [10].


Binocular vision is a method that simulates the human binocular visual system to passively perceive distance using a left and a right view. The same object is observed from two positions at the same time: if the scenes in the cameras' fields of view overlap, the disparity between corresponding pixels is obtained from the pixel mapping relationship between the images, and the depth information is then derived.

2.2 Image Distortion

The camera places a lens, of the kind commonly seen in daily life, in front of the photosensitive element [11]. The shape of the lens affects the propagation of light, and in a mechanical assembly the lens will not be perfectly parallel to the physical plane of the image; both effects cause image distortion, so the triangle-similarity relationship between an object and its imaged points is no longer satisfied and deviations appear in practical applications [12]. This kind of deviation is called distortion, which can generally be divided into radial distortion and tangential distortion. Imaging distortion caused by the lens shape is called radial distortion. Lens structures on production lines are usually centrally symmetric, so this distortion is usually radially symmetric [13, 14]. A sketch of the standard distortion model is given after Sect. 2.3.

2.3 Image Restoration Technology

Image inpainting technology refers to supplementing and repairing the missing and damaged information in an image from the remaining information according to certain rules, so that the repaired image is close to, or achieves, the visual effect of the original image. From a mathematical point of view, the problem is to fill the blank area according to the surrounding valid pixel information [15]. In recent years, image inpainting technology has developed very rapidly and has been widely used, for example for repairing scratches on artworks, repairing old photos and movies, repairing missing information in communications, and removing objects and text from images [16]. However, because the missing information is unknown, even if it is repaired by certain technical means, we do not know whether the result is correct, and there is no basis for an objective judgment. Therefore, image inpainting is a largely subjective process, which depends heavily on the perception of the image by the human eye [17, 18]. The structural model of the machine vision communication system is shown in Fig. 1, which is divided into four layers from top to bottom, namely image data, image features, machine vision and visual communication.
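The sketch referenced in Sect. 2.2 follows. It is not the paper's model but the widely used Brown-Conrady form of radial (k1, k2) and tangential (p1, p2) lens distortion applied to an ideal normalised image point; the function name distort and all coefficient values are placeholders chosen for illustration.

import numpy as np

def distort(x: float, y: float, k1: float, k2: float, p1: float, p2: float):
    """Map an undistorted normalised point (x, y) to its distorted position."""
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2                            # radial distortion factor
    x_d = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)    # plus tangential terms
    y_d = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return x_d, y_d

print(distort(0.3, 0.2, k1=-0.28, k2=0.07, p1=0.001, p2=-0.0005))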


(Figure layers and components: image data – image information collection, image preprocessing; image features – image enhancement, feature analysis; machine vision – machine learning, template matching; visual communication – visual image communication)

Fig. 1. Structure model diagram of machine vision communication system

3 Investigation and Research on the Design of Machine Vision Communication System Based on Computer Intelligent Algorithm

3.1 System Environment Construction

Setting up the development environment includes building the Jetson TX2 development environment and the model training server. For the software development environment setup process, please refer to the next section. The NVIDIA JetPack SDK needs to be installed on the Jetson TX2 before development. NVIDIA


JetPack SDK is an AI application solution. The JetPack installer flashes the Jetson TX2 with the latest operating system image, installs the development tools on the host and the Jetson TX2, and installs the libraries, APIs, examples and documentation required by the development environment.

3.2 Image Inpainting Algorithm

In an image the data are discrete, so the central difference method is used to discretize the above formula and obtain a numerical solution; the central difference method replaces derivatives with finite differences. Let O be the pixel to be repaired and let {e, n, w, s} be the set of pixels around O. Assume:

v = (1/|∇u|) · ∇u        (1)

Discretizing numerically with the central difference method gives:

∇ · v = ∂v1/∂x + ∂v2/∂y ≈ (ve1 − vw1) + (vn2 − vs2)        (2)

The iterative calculation is then performed with the Gauss–Jacobi iterative algorithm:

u(n)(i, j) = ΣQ hQ(n−1) u(n−1)(Q) + ho(n−1) u0(i, j)        (3)

In the image inpainting process, the algorithm based on the total variational model can be divided into the following steps (a minimal sketch of the iteration follows the list):
(1) Read in the defective image;
(2) Set the number of iterations n;
(3) Use the relevant conditions to determine the mask information;
(4) Apply the iterative formula for repair;
(5) After the iteration stops, output the repaired image.
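The following Python sketch illustrates the steps above with a simplified Jacobi-style total-variation update; it is not the paper's implementation. Image values are assumed to be floats in [0, 1], mask == True marks the damaged pixels, and the function name tv_inpaint, the regularisation eps, and the toy image are all illustrative assumptions.

import numpy as np

def tv_inpaint(image: np.ndarray, mask: np.ndarray, n_iter: int = 1000,
               eps: float = 1e-3) -> np.ndarray:
    """Fill masked pixels by iteratively averaging neighbours with TV-derived weights."""
    u = image.astype(np.float64).copy()
    for _ in range(n_iter):
        # shifted copies give the four neighbours e, w, n, s
        e = np.roll(u, -1, axis=1); w = np.roll(u, 1, axis=1)
        n = np.roll(u, -1, axis=0); s = np.roll(u, 1, axis=0)
        gx = (e - w) / 2.0
        gy = (n - s) / 2.0
        grad = np.sqrt(gx**2 + gy**2 + eps**2)   # regularised gradient magnitude
        h = 1.0 / grad                           # diffusion weight 1/|grad u|
        he = np.roll(h, -1, axis=1); hw = np.roll(h, 1, axis=1)
        hn = np.roll(h, -1, axis=0); hs = np.roll(h, 1, axis=0)
        num = he * e + hw * w + hn * n + hs * s
        den = he + hw + hn + hs
        u_new = num / den                        # Jacobi update: weighted neighbour average
        u[mask] = u_new[mask]                    # only the damaged pixels are updated
    return u

# Toy usage: a horizontal gradient image with a square hole.
img = np.tile(np.linspace(0, 1, 64), (64, 1))
damaged = img.copy()
hole = np.zeros_like(img, dtype=bool)
hole[24:40, 24:40] = True
damaged[hole] = 0.0
restored = tv_inpaint(damaged, hole, n_iter=500)
print("max error in hole:", np.abs(restored - img)[hole].max())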

3.3 Experiment Setup of Intelligent Algorithm This paper uses the image inpainting algorithm based on the variational model introduced above to perform inpainting tests on images. A picture after distortion correction is used as the original picture for image restoration. The operation time of the algorithm and the repair effect score are analyzed, where the repair effect score is set to 0–5 points, and 5 is the highest evaluation of the repair effect.


4 Analysis and Research on the Design of Machine Vision Communication System Based on Computer Intelligent Algorithm 4.1 System Framework Design Combined with the characteristics of the JetsonTX2 development platform, the overall frame design of the machine vision communication system is mainly composed of three modules and their sub-modules, the visual inspection module, the human-computer interaction module and the visual ranging module. The software framework is shown in Fig. 2.

machine vision communication system

Visual inspection module

Visual ranging module

Human-computer interaction module Fig. 2. Overall software framework

The visual detection module is mainly responsible for the processing links such as video acquisition, target object detection and display recognition results. The detected target object will be sent to the visual ranging module for visual ranging, and the detected face will be sent to the human-computer interaction module for human-computer interaction. This module is mainly based on deep convolutional neural networks. Firstly, the pictures of the detected objects are collected to make a training set, and the pictures of the detected objects are taken to make a test set. Next, the neural network is modified for the recognition requirements. Then use the training set to train the network, get the weights of the network, and use the test set to verify the detection effect. Finally, the TensorRT inference framework is used to optimize and accelerate the network. The human-computer interaction module is mainly responsible for tasks such as face recognition, voice input, voice recognition, execution of related commands, and speech synthesis. It will decide whether to start human-computer interaction according to the result of face recognition, and the command to take the item will send the name of the item to the visual ranging module. Use a deep convolutional neural network to


recognize the detected face, and start voice interaction if a face in the face database is recognized. Then use the GreenLink USB external sound card to turn on the microphone to collect the voice, and upload the collected voice to the iFLYTEK cloud server through the Internet for online recognition. After interpreting the recognition result, the corresponding instructions can be obtained. Finally, relevant operations are performed according to the instructions of the interactive object. The visual ranging module is mainly responsible for processing tasks such as camera calibration, image correction and repair, feature matching, and item ranging. The items that need ranging are obtained from the human-computer interaction module, and the ranging results are sent to the visual detection module for use. The specified object is measured by binocular ranging, and the image is inpainted using an image inpainting algorithm based on a variational model.

4.2 Algorithm Implementation

1000, 2000, 3000 and 4000 algorithm iterations are performed on the original image for image restoration, respectively, to obtain the restored image. From these figures, it can be found that the algorithm repairs the defective image well and preserves the integrity of the image, and at the same time the repaired area has good consistency and coordination with the surrounding pixels and information. The repair evaluation and execution time of the algorithm are shown in Table 1.

Table 1. Algorithm implementation

Number of iterations    Operation time (ms)    Repair effect score
1000                    5.6                    4
2000                    9.1                    4.2
3000                    12.4                   4.5
4000                    15.6                   4.8

The increase in the number of iterations improves the repair effect but also increases the operation time, which greatly affects the repair efficiency, as shown in Fig. 3. The second issue is the inherent disadvantage of the diffusion behavior of the total variational model itself, which easily produces a gradient (staircase) effect when dealing with large defect areas and with defect areas containing large grayscale variations. The so-called gradient effect means that a smooth, continuous signal becomes a piecewise constant signal, which greatly affects people's visual experience. Because the analysis of the image restoration process needs to be verified by more iterative experiments, image restoration experiments are further carried out for 5000, 6000, 7000, 8000, 200, 400, 600 and 800 iterations, respectively. The evaluation and execution time of image restoration are shown in Table 2.


Fig. 3. Application results of the algorithm under different iterations (operation time in ms and repair effect score plotted against the number of iterations)

Table 2. Performance data for image restoration

Iterations    Running time (ms)    Repair effect score
5000          16.8                 5.2
6000          17.2                 5.4
7000          18.9                 5.6
8000          24.6                 6.8
200           2.4                  2.3
400           2.8                  2.6
600           3.4                  3.2
800           4.6                  3.4

Table 2 describes the performance data of image inpainting, in which the repair effect score increases with the number of iterations. In computer vision, computers and related technology are used to simulate biological vision. Similar to what humans and many other organisms accomplish on a daily basis, its major purpose is to extract the three-dimensional information of the corresponding scene by processing the gathered photographs or videos. Computer vision is about knowing how to use cameras and computers to gather the data and information we need about a subject. In other words, it involves giving the computer eyes (a camera) and a brain (an algorithm) so that it can sense its surroundings.


5 Conclusions Today's society lives in a diversified visual world, and the dissemination of information relies increasingly on vision, which tends to develop in the direction of symbolization. In promotion it can reflect uniqueness and be more intuitive, and in terms of visual communication it becomes richer and more textured. This paper has basically achieved the expected goals, but due to the limitations of research time, personal investment and experimental conditions, there is still room for further exploration and future improvement: first, the iterative calculation of the image inpainting algorithm based on the variational model is very dependent on the settings of the initial parameters; secondly, due to the inherent shortcomings of the algorithm, only small defect areas can be repaired, and gradient effects will occur in the repaired areas. Other related algorithms can be consulted to further improve this part of the algorithm.

References
1. Mukerji, C.: Studies in visual communication: visual language in science and the exercise of power: the case of cartography in early modern Europe. Stud. Vis. Commun. 10(3), 30–45 (2018)
2. Zomay, Z., Keskin, B., Ahin, C.: Grsel letiim Tasarm Blümü rencilerinin Sektrel Logolardaki Renk Tercihleri – Color preferences of visual communication design students in sectoral logos. OPUS Uluslararası Toplum Araştırmaları Dergisi 17(37), 4181–4198 (2021)
3. Malmsheimer, L.M.: Studies in visual communication: imitation white man: images of transformation at the Carlisle Indian School. Stud. Vis. Commun. 11(4), 54–75 (2018)
4. Waszkiewicz-Raviv, A., Ksiki, R., Aiello, G., Katy, P.: Visual communication. Understanding images in media culture. Londyn 2020. Studia Medioznawcze 22(1), 904–907 (2020)
5. Mohamed, A.R., Elgamal, R.A., Elmasry, G., et al.: Development of a real-time machine vision prototype to detect external defects in some agricultural products. J. Soil Sci. Agric. Eng. 11(9), 317–325 (2021)
6. Subramanyam, V., Kumar, J., Singh, S.N.: Temporal synchronization framework of machine-vision cameras for high-speed steel surface inspection systems. J. Real-Time Image Proc. 19(2), 445–461 (2022). https://doi.org/10.1007/s11554-022-01198-z
7. Abdollahpour, M., Golzarian, M.R., Rohani, A., et al.: Development of a machine vision dual-axis solar tracking system. Solar Energy 169, 136–143 (2018)
8. Skinner, N.P., Laplumm, T.T., Bullough, J.D.: Warning light flash frequency as a method for visual communication to drivers. Transp. Res. Rec. 2675(5), 88–93 (2021)
9. Abhilash, P.M., Chakradhar, D.: Machine-vision-based electrode wear analysis for closed loop wire EDM process control. Adv. Manuf. 10(1), 131–142 (2022)
10. Ghosal, S., Blystone, D., Singh, A.K., et al.: An explainable deep machine vision framework for plant stress phenotyping. Proc. Natl. Acad. Sci. 115(18), 4613–4618 (2018)
11. Santra, B., Shaw, A.K., Mukherjee, D.P.: An end-to-end annotation-free machine vision system for detection of products on the rack. Mach. Vis. Appl. 32(3), 1–13 (2021). https://doi.org/10.1007/s00138-021-01186-6
12. Offert, F., Bell, P.: Perceptual bias and technical metapictures: critical machine vision as a humanities challenge. AI & Soc. 36(4), 1133–1144 (2020). https://doi.org/10.1007/s00146-020-01058-z


13. Nicolas-Mindoro, J.G.: Class-EyeTention: a machine vision inference approach of student attentiveness' detection. Int. J. Adv. Trends Comput. Sci. Eng. 9(4), 5490–5496 (2020)
14. Kim, D.H., Boo, S.B., Hong, H.C., et al.: Machine vision-based defect detection using deep learning algorithm. J. Korean Soc. Nondestruc. Test. 40(1), 47–52 (2020)
15. Minakov, V.I., Fomenko, V.K.: Machine vision technology for locomotives to identify railway colour-light signals. World Transp. Transp. 17(6), 62–72 (2020)
16. Bazgir, O., Nolte, D., Dhruba, S.R., et al.: Active shooter detection in multiple-person scenario using RF-based machine vision. IEEE Sens. J. (99), 1–1 (2020)
17. Ranjan, A.: Machine vision techniques used in agriculture and food industry: a review. Int. J. Curr. Microbiol. App. Sci. 9(3), 102–108 (2020)
18. Andrade, B., Basso, V.M., Latorraca, J.: Machine vision for field-level wood identification. IAWA J. 41(4), 1–18 (2020)

Performance Evaluation of Rural Informatization Construction Based on Big Data Yaping Sun1(B) and Ruby Bhadoria2 1 Zibo Normal College, Zibo 255130, Shandong, China

[email protected] 2 Hawassa University, Awasa, Ethiopia

Abstract. With the advent of the information age, traditional modes of production and lifestyle have been unable to meet the needs of the development trend of the times. The development of both cities and rural areas is inseparable from the driving force of information technology. In my country, the economic development of rural areas is slower than that of cities, so it is all the more necessary to speed up informatization construction in order to promote the overall development of the rural economy. The main purpose of this paper is to study the performance evaluation of rural informatization construction based on big data. This paper mainly analyzes the content and function of rural informatization, as well as the principles for constructing the performance evaluation, and analyzes the construction of rural informatization in a county. Experiments show that from the first year to the third year there is an upward trend basically every year. The most fundamental reason is that the current number of information sites cannot meet the needs of informatization development; secondly, the number of talents engaged in agricultural high-tech research is also increasing. Keywords: Big Data · Rural Informatization · Informatization Construction · Performance Evaluation

1 Introduction In the process of rural economic development, the use of information technology and high-tech means to promote the development of the rural economy has become a major trend and is also an important means of building a new socialist countryside in our country. The reason is that economic development in the rural areas of our country is slow and there is a big gap with the development of urban areas, which affects the development speed of the entire national economy as a whole and is not conducive to improving the living conditions of the vast number of farmers in our country. In recent years, the role of informatization in the development of the rural economy and the improvement of farmers' lifestyles has begun to emerge [1, 2]. In a study on performance evaluation, Oyebisi evaluated the performance of construction projects focused on Canaan Urban Residential Areas (CHE) by devising a set
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 Z. Xu et al. (Eds.): CSIA 2023, LNDECT 173, pp. 247–257, 2023. https://doi.org/10.1007/978-3-031-31775-0_26



of international standards and the Project Management Institute’s performance measures [3]. Key data collection tools, such as quantitative research strategies and questionnaires, are used to demonstrate the utility of the key performance indicators (CPIs) on a fivepoint scale. Mahmoudi believes that the system checks the performance of the project in terms of time and can also measure the effectiveness of the project progress at different stages [4]. Extending EDM systems to fuzzy EDM to make project performance evaluation more realistic, but at the cost of increased computational costs and assumptions. Grey system theory is a promising approach to soft computing for situations involving uncertainty and can be used with as little information and assumptions as possible. The main purpose of this paper is to use big data to study the performance evaluation of rural informatization construction. This paper mainly combines the actual situation of the evaluation object, establishes the principles of orientation, typicality, and operability when constructing the evaluation index system, analyzes the role of informatization, and the role of rural informatization, proposes relevant algorithms, determines the core, and affects performance. The evaluation proposes improvement methods, and also analyzes the current situation of rural informatization construction in a county.

2 Research on Performance Evaluation of Rural Informatization Construction 2.1 Principles for Establishing the Evaluation Index System Combined with the actual situation of the evaluation object, the following principles are established when constructing the evaluation index system [5, 6]. Guiding principle. To fully consider the development trend of agricultural informatization, the selected indicators must have a certain orientation. At the same time, the indicators must be logical and closely related to the construction of agricultural informatization, which can comprehensively and objectively reflect the development status of agricultural informatization. Typicality principle. The selected indicators play a relatively light role in the evaluation model, have certain typicality and representativeness, and can reflect the main aspects of practical problems. Operability principle. The data required for the indicator must be directly measurable or indirectly calculated. If the data cannot be obtained, the indicator is useless. For example, the number of computers owned by each hundred farmers can be obtained through statistics or by querying the relevant yearbook database. Satisfaction of agricultural informatization can be calculated through investigation and research [7, 8]. 2.2 The Role of Informatization The driving effect of informatization on the economy is mainly divided into two aspects: First, informatization plays a role in supporting and transforming the economy. In the process of future economic development, informatization is a very important factor to promote economic development; Second, informatization can change the way my country’s economy develops and eliminate backward productive forces.



It can be seen from this that informatization plays an irreplaceable and important role in promoting my country’s economic development, and its important role in the future economic development process is reflected in different levels, mainly in the fields of industry and agriculture, culture and education, policies and regulations, etc. [9, 10]. 2.3 The Role of Rural Informatization (1) Rural informatization can change the mode of production in rural areas. Rural informatization construction is a choice for contemporary China to adapt to international trends, and it is also an inevitable choice for solving my country’s three rural issues and China’s rural production modernization. At the same time, it can also promote the rapid development of the rural economy. Rural informatization can change the traditional rural production methods, introduce big data and the Internet into agricultural production and farmers’ lives, and accelerate farmers’ poverty alleviation and prosperity. A county is located in a poverty-stricken area in the west. To promote the rapid development of the county’s economy, the development of agriculture is the key. (2) Rural informatization construction can narrow the gap between urban and rural areas more quickly. As my country’s rural development is far behind the urban development, the main reason is that the development of the rural economy lags behind the urban development, farmers have a single way to obtain information and the information obtained is not enough to develop the rural economy better. Therefore, in order to completely solve the problems of agriculture, rural areas and farmers, it is necessary to vigorously develop rural informatization in rural areas, develop modern agriculture, and accelerate the construction of rural informatization is an inevitable choice for coordinating urban and rural economic development and building a better new socialist countryside [11, 12]. 2.4 The Role of Government in Rural Informatization Construction The five levels of the government are: the central level, the provincial level (a municipality directly under the Central Government), the city level, the county level (a district), and the town level (a township), for a total of six levels. As illustrated in Fig. 1, there are four tiers of government involved in the creation of rural information technology. (1) Role of central government and departments The central government and its departments play a key role in the national rural informatization construction by carrying out the top-level design, providing the development roadmap, and driving policies for the rural informatization construction of governments at all levels. The main strategy is to create the overall plan and strategy for the nation’s rural information construction and to create the construction blueprint; The macro policies for rural informatization were released alongside the national informatization strategy, while the departments released industrial (field) policies alongside their actual operations; launch a number of key national initiatives for the development of



(Figure contents: central government departments – command the overall situation; provincial government departments – planning and policy; county-level government – form a connecting link between the preceding and the following; township government – establish service stations to promote rural informatization construction)

Fig. 1. Role of governments at all levels in rural informatization construction

rural information infrastructure, encourage the building of national rural infrastructure, and create a number of information platforms; Create designated funds. (2) Role of provincial governments and departments Formulate the provincial rural informatization construction and implementation plan and choose the specific development mode in accordance with the central government’s and various ministries’ and commissions’ deployment of rural informatization construction; Publish the provincial plans and meso-level policies for the establishment of rural information infrastructure; Encourage the creation of a provincial platform for rural information services and fortify the creation of enabling structures. (3) Role of municipal and county governments and departments It performs a connecting function in rural informatization construction in accordance with the “platform up and service down” policy. Implement rural informatization construction policies and programs, integrate rural informatization construction forces and resources, and deploy rural informatization construction deployment, as determined by the provincial government. (4) Role of township (town) governments and village-level autonomous organizations The government (organization) at this level is leading the effort to build rural credit, is the closest to rural life, and is most conversant with farmers’ informational needs. Its primary responsibilities include assisting in the development of rural information infrastructure in accordance with the deployment and demands of the superior government, constructing township (town) level and village level information service stations, providing a full team of rural informants, and conducting regular information technology training. The goal of information training is to increase the information literacy of rural people and their capacity to access information. Assist and organize agricultural leading



enterprises, major farmers, individual businesses, rural brokers, and farmers to engage in information training.

2.5 AHP

Use aij to represent the comparison result of the i-th factor relative to the j-th factor; then aji = 1/aij. The pairwise comparison matrix is:

A = (aij)n×n = [ a11  a12  ···  a1n
                 a21  a22  ···  a2n
                 ···  ···  ···  ···
                 an1  an2  ···  ann ]        (1)

It can be seen from the above that A satisfies aij > 0, aij = 1/aji and aii = 1, so A is a positive reciprocal matrix. Assuming that the weights of a certain layer of factors x1, x2, …, xn are w1, w2, …, wn respectively, the pairwise comparison matrix becomes:

A = [ w1/w1  w1/w2  ···  w1/wn
      w2/w1  w2/w2  ···  w2/wn
      ···    ···    ···  ···
      wn/w1  wn/w2  ···  wn/wn ]        (2)

Then wi/wj = (wi/wk) · (wk/wj), that is, aij = aik · akj for i, j = 1, 2, …, n. If aij = aik · akj holds in a positive reciprocal matrix, then A is a consistent matrix. An illustrative weight-derivation sketch is given below.
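As an illustration only (not the paper's code or data), the following Python sketch derives factor weights from a pairwise comparison matrix with the principal-eigenvector method and checks the consistency ratio. The example matrix and the random-index (RI) table follow standard AHP conventions; all numbers are placeholders.

import numpy as np

def ahp_weights(A: np.ndarray):
    """Return (weights, consistency_ratio) for a positive reciprocal matrix A."""
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)                   # principal eigenvalue lambda_max
    w = np.abs(eigvecs[:, k].real)
    w = w / w.sum()                               # normalised weight vector
    n = A.shape[0]
    lam_max = eigvals[k].real
    ci = (lam_max - n) / (n - 1)                  # consistency index
    ri = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}[n]
    cr = ci / ri if ri > 0 else 0.0               # consistency ratio; CR < 0.1 is acceptable
    return w, cr

# Example: three evaluation factors compared pairwise on the 1-9 scale.
A = np.array([[1.0, 3.0, 5.0],
              [1/3., 1.0, 2.0],
              [1/5., 1/2., 1.0]])
weights, cr = ahp_weights(A)
print("weights:", np.round(weights, 3), "CR:", round(cr, 3))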

3 Experimental Research on Rural Informatization Construction

3.1 The Core Elements of Rural Informatization Construction

(1) Rural network construction. The construction of the rural network is the premise and foundation of rural informatization construction. To use big data and the Internet to develop the rural economy and improve farmers' living standards, the primary premise is to gradually develop and popularize the network in rural areas. The construction of the rural information network will promote the rapid development of the rural economy, accelerate the progress of rural agricultural technology, and promote the improvement of farmers' scientific literacy. (2) Another important element of rural informatization is the application of rural information technology in agricultural production and farmers' lives. Rural information technology is the result of the development of the rural economy and culture to a certain extent, and it is an inevitable choice for developing the rural economy. (3) Construction of the rural informatization team. For the rapid development of rural informatization, professional rural informatization talents are indispensable and an important driving force to accelerate its development. Rural


informatization talents are an important force in transforming rural informatization into real productive forces. The professional talents of rural informatization are mainly composed of two parts: the first is the professional information technology research personnel required for rural informatization construction; the second is the grass-roots rural informatization development and service personnel, who are the main force. (4) Legal system construction of rural informatization. The legal system construction of rural informatization means formulating a set of corresponding laws and regulations to safeguard rural informatization construction during its development, so as to ensure that there are laws to abide by for information release and sharing.

3.2 Main Contents of Rural Informatization

(1) Information infrastructure. It mainly includes radio and television networks, broadband, mobile data, etc., as well as other supporting equipment and facilities. (2) Rural information resources. Information resources exist in various forms, such as text, pictures, videos, and agriculture-related websites. (3) Rural information technology. Rural information technology refers to the technology that realizes the collection, processing, transmission and storage of various kinds of information related to agriculture, mainly including information acquisition, processing, storage and dissemination technologies. (4) Rural information service system. Generally speaking, the rural information service system is mainly composed of three parts: the main body of rural informatization construction, the mode of rural informatization construction, and the professionals of rural informatization construction.

3.3 Ways to Improve Performance Evaluation

The integrity of cadre evaluation emphasizes the comprehensiveness of cadre performance evaluation, and forward-looking evaluation of cadres emphasizes the continuity of cadre performance evaluation. The locality and cross-sectional nature of traditional performance evaluation are the main reasons for the lack of integrity and foresight in cadre evaluation. In the government big data environment, these shortcomings of traditional performance evaluation can be overcome through comprehensive and continuous performance data collection. However, big data cannot solve the problems of the diversity of government goals and the complexity of government performance management projects. On the contrary, due to the scale, diversity, high speed, and processability of administrative big data, higher requirements are placed on performance evaluation.


4 Experiment Analysis of Rural Informatization Construction

4.1 Development of Agricultural Information Resources

After sorting out the development of rural information resources in a county, the number of agricultural information websites, agricultural scientific research achievements, agricultural publications, and the amount of information published online are shown in Table 1.

Table 1. Statistics of rural information resources in a county

Project | Year 1 | Year 2 | Year 3
Agricultural information scientific research achievements (unit: item) | 8 | 10 | 15
Rural information network platform (unit: number) | 9 | 11 | 13
Agricultural information released (unit: ten thousand) | 4 | 6 | 8
Agricultural economic publications (unit: pieces) | 2 | 7 | 9

Fig. 2. Statistical analysis of rural information resources in a county


As can be seen from Fig. 2, each indicator basically showed an upward trend from the first year to the third year. The most fundamental reason is that the current number of information sites cannot meet the needs of informatization development; secondly, the number of talents engaged in agricultural high-tech research is also increasing.

4.2 Transmission and Service of Rural Information Resources

The specific data are shown in Table 2.

Table 2. Transmission and service of rural information resources in a county

Project | Year 1 | Year 2 | Year 3
Ownership of landline telephones (per 100 households) | 80 | 60 | 20
Computer ownership (per 100 people) | 15 | 20 | 50
Number of TV sets (per 100 households) | 100 | 100 | 100
Number of rural information organizations (average) | 2 | 7 | 20
Number of agricultural TV channels (average) | 10 | 10 | 14
Rural information service talents (per 10,000 people) | 900 | 1000 | 1500
Internet usage (per 10,000 people) | 987 | 4032 | 6114

From the statistics in Fig. 3, we can see that in the rural areas of a certain county, the number of TV sets and computers is gradually increasing, while the number of landline telephones is declining every year due to the rapid development of network information.


Fig. 3. Analysis of county-level rural information resource transmission and service

4.3 Rural Informatization Talents and Population Quality

The details are shown in Table 3.

Table 3. Statistics of rural informatization talents and population quality in a county

Project | Year 1 | Year 2 | Year 3
Number of people with secondary school education (per 100 people) | 20 | 24 | 28
Number of training sessions per capita (times per year) | 3 | 5 | 8
Number of agricultural technicians (per 100 people) | 9 | 20 | 40

As can be seen from Fig. 4, some rural farmers in a certain county can basically receive production training on agriculture 2–3 times a year. However, the trained farmers are only some of them, not all of them.

Fig. 4. Analysis of rural informatization talents and population quality in a county

5 Conclusions

Rural informatization construction is the main driving force for the development of the rural economy. For the development of rural agriculture, replacing manpower with machines can not only improve efficiency but also increase agricultural output, gradually upgrading traditional agriculture to modern agriculture. Secondly, informatization construction not only plays an important role in science and technology; the development of informatization can also improve farmers' informatization awareness, promote the transformation of farmers' thinking, and gradually shift the traditional agricultural production mode toward a model that relies on technology and the Internet to promote agricultural development. Finally, informatization construction can also improve farmers' way of life, improve the supply efficiency of rural public services, and change the past situation of information blockage in rural areas.

References 1. Pujadas-Gispert, E., Alsailani, M., Koen, K., et al.: Design, construction, and thermal performance evaluation of an innovative bio-based ventilated façade. Front. Architectural Res. 9(3), 681–696 (2020) 2. Adusei, R., Iddrisu, M.M.: On construction and performance evaluation of (4096, 815, 3162) Hermitian code. Appl. Math. Inf. Sci. 14(1), 69–73 (2020) 3. Oyebisi, S.O., Okeke, C., Alayande, T.A., et al.: Performance evaluation in a construction project: an empirical study of Canaan-city housing estate, Ota, Nigeria. Int. J. Qual. Eng. Technol. 8(1), 15–25 (2020)


4. Mahmoudi, A., Javed, S.A., Deng, X.: Earned duration management under uncertainty. Soft. Comput. 25(14), 8921–8940 (2021). https://doi.org/10.1007/s00500-021-05782-6 5. Siew, L.W., Fai, L.K., Hoe, L.W.: Performance evaluation of construction companies in Malaysia with entropy-VIKOR model. Eng. J. 25(1), 297–305 (2021) 6. Vimaladevi, M., Zayaraz, G.: A game theoretic approach for quality assurance in software systems using antifragility-based learning hooks. J. Cases Inf. Technol. 22(3), 1–18 (2020) 7. Rizzi, W., Difrancescomarino, C., Ghidini, C., et al.: How do I update my model? On the resilience of predictive process monitoring models to change. Knowl. Inf. Syst. 64(5), 1385– 1416 (2022) 8. Gurjar, J., Agarwal, P.K., Jain, P.K.: A comprehensive methodology for comparative performance evaluation of public transport systems in urban areas. Transp. Res. Proc. 48(4), 3508–3531 (2020) 9. Hammes, G., Souza, E.D., Rodriguez, C.T., et al.: Evaluation of the reverse logistics performance in civil construction. J. Cleaner Prod. 248, 119212.1–119212.13 (2020) 10. Gadafa, C., Gangwar, P.K., Rawat, V.S., et al.: Multi criteria decision-making model and approach for the selection of contractor in construction projects. Solid State Technol. 63(6), 10571–10585 (2020) 11. Abualdenien, J., Schneider-Marin, P., Zahedi, A., et al.: Consistent management and evaluation of building models in the early design stages. Electron. J. Inf. Technol. Constr. 25, 212–232 (2020) 12. Lee, M., Jeon, G., Lee, I., et al.: A study on hil-based simulation environment construction for evaluation of autonomous driving performance. J. Korean Inst. Commun. Inf. Sci. 46(1), 95–101 (2021)

Genetic Algorithm in Ginzburg-Landau Equation Analysis System Bentu Li(B) Department of Public Courses, Shandong Vocational College of Science and Technology, Weifang 261053, Shandong, China [email protected]

Abstract. In scientific and engineering computing, there are many large-scale application problems that require physicists and mathematicians to conduct numerical simulations by building models. By analyzing the model, designing a numerical calculation method, and quickly solving the problem, we have a further understanding of the phenomenon under study. In many fields, partial differential equations (PDEs) are the most commonly used mathematical models for describing problems. Therefore, it is necessary to study partial differential equations. The main purpose of this paper is to study the application of genetic algorithm in the Ginzburg-Landau equation analysis system. This paper tests two numerical examples of FGLE comparing OM, MHSS, PMHSS and PGSOR and the proposed NSM, and the results show that NSM significantly outperforms all other methods. Keywords: Genetic Algorithms · Ginzburg-Landau Equations · Analytical Systems · Partial Differential Equations

1 Introduction

In recent years, because the fractional differential operator can accurately describe anomalous diffusion processes, it has become an important tool for describing models more accurately. Compared with integer-order PDEs, fractional-order partial differential equations are more accurate in simulating memory effects, diffusion motion, and genetic properties. Therefore, they are widely used in signal processing, superconductivity, and fluid mechanics. Unlike integer-order partial differential equations, however, they entail very large computational loads and memory requirements and long computation times, which has created many obstacles for in-depth research. How to discretize them with numerical methods and, through research on the algebraic structure of the coefficient matrix of the discrete system, design an efficient, fast and stable algorithm has become the primary and critical problem for simulation in practical engineering applications [1]. In the study of the Ginzburg-Landau equations, Mckenna studied the numerical solutions of Ginzburg-Landau-type equations that appear in the superconducting thin film model [2]. Nouri proposed new numerical methods that add an improved ordinary differential equation solver to the Milstein method for solving stiff stochastic systems [3].


Hilder considers moving front solutions connecting the invasion state to the unstable ground state in a Ginzburg-Landau equation with an additional conservation law [4], and demonstrates the nonlinear stability of sufficiently fast fronts with respect to perturbations that are exponentially localized ahead of the front. The main purpose of this paper is to analyze the application of the genetic algorithm in a Ginzburg-Landau equation analysis system. This paper mainly studies the fast iterative solution of fractional Ginzburg-Landau equations in one-dimensional space. Through finite difference discretization, a complex linear system with a Toeplitz-like structure is obtained, and according to the structural characteristics of the system, a new matrix splitting iterative method is designed, combined with circulant preconditioning, to achieve fast computation [5, 6]. The convergence analysis of the iterative method proves the effectiveness of the method, and numerical experiments verify its efficiency. Secondly, an Alternating Direction Implicit (ADI) difference scheme is designed for the equation; the equation is discretized into two linear subsystems, and a matrix splitting iterative method with separate scaling is designed to solve them quickly, and its convergence is proved. Numerical experiments show the economy and efficiency of the proposed method.

2 Equation Analysis

2.1 Development Status

The Ginzburg-Landau equation is a mathematical model proposed by the Soviet physicists V. L. Ginzburg and L. D. Landau in 1950. In its initial form it was a phenomenological model describing superconductors without involving their microscopic properties. Ginzburg-Landau theory was later derived by Lev Gor'kov from the Bardeen-Cooper-Schrieffer microscopic theory, thus showing that it also arises in a certain limit of the microscopic theory, which gives a microscopic explanation of all its parameters [7, 8]. Regarding the numerical study of the Ginzburg-Landau equation, many works have studied such equations by the finite difference method. The one-dimensional Kuramoto-Tsuzuki (KT) equation is a special case of the Ginzburg-Landau equation; several researchers have considered numerical simulations of this equation and established several finite-difference schemes. Since the 1990s, many mathematicians and physicists at home and abroad have been interested in and paid wide attention to the Ginzburg-Landau equation. At present, this equation is widely used in superconductivity, fluid mechanics, optical fiber communication, reaction-diffusion equations, and other fields. In recent years, more and more researchers have established higher-order compact difference schemes for partial differential equations, which provide a new way of thinking for theoretical proofs.

2.2 Fractional Partial Differential Equations

This paper mainly focuses on fast numerical solutions and two types of application problems for the following three types of fractional partial differential equations:
Category 1: fast solutions of the FGLE in one-dimensional space.


The basic form of the GL equation is as formula (1): (1) where v, η, κ, ζ are given real numbers and i = √−1 is the imaginary unit. Later, Tarasov and Zaslavsky applied the fractional Ginzburg-Landau (FGL) equation to describe dynamic processes in fractal media. Among them, the spatial fractional GL (SFGL) equation is the most widely studied: (2)

Among them, α ∈ (1, 2) is the order of the derivative, t is the time variable, Ω = (a, b) is the spatial domain, x is the spatial variable, u = u(x, t) is a complex-valued function of both, and u0(x) is a known function.
Category 2: fast solutions of the FGLE in two-dimensional space.
The Ginzburg-Landau equation has a wide range of applications. Compared with the traditional Ginzburg-Landau equation, which arises from fractal Brownian trajectories, the FGLE is derived from the variational Euler-Lagrange equation of a fractal medium on Lévy paths.

(3)

where Ω = (a, b) × (c, d) is the square domain, x = (x1, x2)^T ∈ R² is the two-dimensional space vector, t is the time variable, 1 < βs < 2 (s = 1, 2) is the order of the derivative, v > 0, k > 0, and u0(x) is a known function.
Category 3: the two-dimensional time-space fractional Stokes equations.
The Stokes equations, as the basis of computational models of incompressible Newtonian fluid flow, are a class of equations of motion describing incompressible fluids [9, 10]:

$$\begin{cases} {}^{c}D_t^{\alpha} u + \nu R^{2\beta} u + \nabla p = f, & (x, t) \in \Omega \times [0, T], \\ \nabla \cdot u = 0, \\ u(x, 0) = u_0(x), & x \in \Omega, \\ u(x, t)|_{\partial\Omega} = 0, & (x, t) \in \partial\Omega \times [0, T] \end{cases} \tag{4}$$

where u is a two-dimensional vector representing the fluid velocity, p is a scalar representing the pressure, $\nu R^{2\beta} u$ is the viscosity term with viscosity coefficient v, and f is a given external force. The first equation expresses the conservation of fluid momentum; the second expresses the incompressibility (continuity) condition.


2.3 Preprocessing (Preconditioning) Technology

Although the Krylov subspace method has a good theoretical foundation, a possible weakness of iterative methods compared with direct methods is that they are more sensitive to the properties of the coefficient matrix, which hinders their wide application. When solving some linear systems, direct use of an iterative method leads to non-convergence or slow convergence. For example, a large proportion of the linear systems obtained by discretizing partial differential equations are ill-conditioned, and their large condition numbers mean that when a Krylov subspace method is applied directly, convergence is slow and the error of the numerical solution is large. Therefore, it is very important to reduce the condition number of the coefficient matrix of the linear system, so as to improve the convergence speed of the Krylov subspace method and the reliability of the numerical solution. The preconditioning technique, as a common means of reducing the condition number, can effectively accelerate Krylov subspace iterative methods [11, 12]. Preconditioning techniques are usually divided into three categories; let M be the preconditioner:
(1) Left preconditioning: apply the iterative method to M⁻¹Ax = M⁻¹b;
(2) Right preconditioning: apply the iterative method to AM⁻¹u = b, x = M⁻¹u;
(3) Split (two-sided) preconditioning: if M = M₁M₂, apply the iterative method to M₁⁻¹AM₂⁻¹u = M₁⁻¹b, x = M₂⁻¹u.
To seek a high-quality preconditioning matrix, the main goal is to find an M that approximates A well in a certain sense, that is, a linear operator (matrix) M for which the eigenvalues of the preconditioned matrix are highly clustered.
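As a rough illustration of category (1), the sketch below applies SciPy's GMRES to a small ill-conditioned system with and without a simple Jacobi (diagonal) preconditioner. The tridiagonal test matrix and the Jacobi choice are assumptions for demonstration only; they are not the circulant / T. Chan preconditioner used later in the paper.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres, LinearOperator

# Illustrative ill-conditioned system A x = b (1-D Laplacian-like Toeplitz matrix).
n = 200
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# Jacobi preconditioning: pass the action of M^{-1} = diag(A)^{-1} to the solver,
# so that GMRES effectively solves the left-preconditioned system M^{-1} A x = M^{-1} b.
d = A.diagonal()
M_inv = LinearOperator((n, n), matvec=lambda v: v / d)

x_plain, info_plain = gmres(A, b)
x_prec, info_prec = gmres(A, b, M=M_inv)
print("residual (plain):", np.linalg.norm(b - A @ x_plain))
print("residual (preconditioned):", np.linalg.norm(b - A @ x_prec))
```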

3 Experimental Study

3.1 Two-Dimensional Constant-Coefficient Nonlinear Ginzburg-Landau Equation

$$iu_t + \frac{1}{2}u_{xx} + \frac{1}{2}(\beta - i\varepsilon)u_{yy} + (1 - i\delta)|u|^2 u = i\gamma u \tag{5}$$

where u is a complex-valued function, and β, ε, δ, and γ are real numbers. In the past ten years, the application of the Ginzburg-Landau equation has become more and more extensive, but its solution process is relatively cumbersome. From the perspective of theory and methods, drawing on experience with the KdV equation, the Ginzburg-Landau equation can be solved by means of the F-expansion method, the homogeneous equilibrium method, and the Jacobi elliptic function expansion method, and explicit solutions of the Ginzburg-Landau equation are obtained in three degenerate situations. The (G'/G) expansion method, which differs from the F-expansion method, has the advantages of directness and simplicity in solving nonlinear evolution equations.


3.2 Traveling Wave Reduction of the Ginzburg-Landau Equation

When u = e^{iη}φ(ξ), η = k₁x + k₂y − ωt, ξ = x + y − v_g t, where k₁ and k₂ are wave numbers, ω is the angular frequency, and v_g is the group velocity, Eq. (5) can be transformed into:

$$\frac{1}{2}(1+\beta)\varphi'' + \varepsilon k_2 \varphi' + \left(\omega - \frac{1}{2}k_1^2 - \frac{1}{2}\beta k_2^2\right)\varphi + \varphi^3 - i\left[\frac{1}{2}\varepsilon\varphi'' - (k_1 + \beta k_2 - v_g)\varphi' - \left(\frac{1}{2}\varepsilon k_2^2 - \gamma\right)\varphi + \delta\varphi^3\right] = 0 \tag{6}$$

Setting the real and imaginary parts of the left-hand side of this equation to zero respectively, we get

$$\frac{1}{2}(1+\beta)\varphi'' + \varepsilon k_2 \varphi' + \left(\omega - \frac{1}{2}k_1^2 - \frac{1}{2}\beta k_2^2\right)\varphi + \varphi^3 = 0 \tag{7}$$

$$\frac{1}{2}\varepsilon\varphi'' - (k_1 + \beta k_2 - v_g)\varphi' - \left(\frac{1}{2}\varepsilon k_2^2 - \gamma\right)\varphi + \delta\varphi^3 = 0 \tag{8}$$

It is easy to prove that a necessary and sufficient condition for the existence of non-zero solutions φ(ξ) of Eqs. (7) and (8) is that the corresponding coefficients of the two equations are proportional:

$$\frac{\omega - \frac{1}{2}\left(k_1^2 + \beta k_2^2\right)}{\gamma - \frac{1}{2}\varepsilon k_2^2} = \frac{1}{\delta} \tag{9}$$

From the above formula the group velocity can be obtained as $v_g = k_1 + \beta k_2 + \varepsilon\delta k_2$ and the angular frequency as $\omega = \frac{1}{2}\left(k_1^2 - k_2^2\right) + \frac{\gamma}{\delta}$.

3.3 Generalized Ginzburg-Landau Equation

$$\begin{cases} du + (\kappa u_x - \gamma u_{xx} - u)\,dt = \left(\beta |u|^2 u - \delta |u|^4 u - \mu u^2 \bar{u}_x - \nu |u|^2 u_x\right)dt + u \circ dW(t) \\ u(0, t) = u(1, t) = 0, \quad t \ge 0 \\ u(x, t_0) = u_0(x) \end{cases} \tag{10}$$

where β = β₁ + iβ₂, γ = γ₁ + iγ₂, δ = δ₁ + iδ₂, μ = μ₁ + iμ₂, ν = ν₁ + iν₂ are complex numbers, κ, βᵢ, γᵢ, δᵢ, μᵢ, νᵢ (i = 1, 2) are real numbers, and in this paper we assume γ₁ > 0, δ₁ > 0, δ₁γ₁ > |μ|² + |ν|². The fractional-order Ginzburg-Landau equation is used to describe various nonlinear phenomena and is a generalization of the integer-order Ginzburg-Landau equation. The integer-order Ginzburg-Landau equation was first proposed by Ginzburg and Landau to describe the phase transition in superconductors near the critical temperature. In addition, the Ginzburg-Landau equation can also simulate the dynamic process of the electromagnetic behavior of superconductors in an external magnetic field. Therefore, in biology, chemistry, and many physical fields such as superconductivity, superfluidity, and nonlinear optics, integer-order Ginzburg-Landau equations are extremely important. However, in the course of physical experiments it was found that the traditional integer-order equations can no longer describe these physical phenomena well, so scholars began to conduct a great deal of research on the basis of the integer-order equations. The fractional Ginzburg-Landau equation was derived by Tarasov et al. on the basis of the variational Euler-Lagrange equation of fractal media, and the fractional Laplace operator was introduced into the equation.


3.4 Genetic Algorithm Process

The characteristics of natural biological evolution, such as "natural selection" and "survival of the fittest," are the major inspirations for genetic algorithms (GA). A GA is a self-organizing, adaptive, global probabilistic search algorithm of artificial intelligence created by mimicking the genetic processes of mutation, crossover, and natural selection in organisms. A genetic algorithm is biologically plausible: it offers a representation of biological intelligence from the viewpoint of the intelligence-generation process, which has its own cognitive significance; it is applicable to any kind of function, whether or not it has an explicit expression; it exhibits parallel computing behavior that is readily realizable; it can address many kinds of practical problems; and it has a wide range of practical applications. The flow chart of the traditional genetic algorithm is shown in Fig. 1.

Fig. 1. Flowchart of the traditional genetic algorithm (encoding → initialize the population → assess individual fitness in the population → selection → crossover → mutation)
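The flow in Fig. 1 can be made concrete with a short, self-contained sketch. The binary encoding, the toy objective function being maximized, and all hyper-parameters (population size, crossover and mutation probabilities) are illustrative assumptions, not settings taken from the paper.

```python
import math
import random

def fitness(x):
    # Illustrative objective to maximize on [0, 2]; values stay non-negative.
    return x * math.sin(10 * math.pi * x) + 2.0

BITS, POP, GENS, PC, PM = 16, 40, 100, 0.8, 0.02
LO, HI = 0.0, 2.0

def decode(bits):
    # binary encoding -> real value in [LO, HI]
    return LO + int("".join(map(str, bits)), 2) * (HI - LO) / (2 ** BITS - 1)

pop = [[random.randint(0, 1) for _ in range(BITS)] for _ in range(POP)]
for _ in range(GENS):
    fits = [fitness(decode(ind)) for ind in pop]
    # roulette-wheel selection
    pop = [random.choices(pop, weights=fits, k=1)[0][:] for _ in range(POP)]
    # single-point crossover
    for i in range(0, POP - 1, 2):
        if random.random() < PC:
            cut = random.randint(1, BITS - 1)
            pop[i][cut:], pop[i + 1][cut:] = pop[i + 1][cut:], pop[i][cut:]
    # bit-flip mutation
    for ind in pop:
        for j in range(BITS):
            if random.random() < PM:
                ind[j] ^= 1

best = max(pop, key=lambda ind: fitness(decode(ind)))
print("best x =", decode(best), "fitness =", fitness(decode(best)))
```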

The concepts of biological evolution and inheritance are used in genetic algorithms. Its features include intelligence, parallelization, robustness, simplicity of operation, and ease of implementation, which set it apart from more conventional optimization techniques such as the enumeration method, heuristic algorithms, and search algorithms. The algorithm can address the majority of real-world problems, since it is suited to most functions with or without an explicit expression and has workable parallel processing behavior. In order to solve complicated nonlinear and multidimensional spatial optimization problems, genetic algorithms are widely used in automatic control, computing science, pattern recognition, engineering, intelligent fault diagnosis, management science, and social science. The flowchart of the improved genetic algorithm is shown in Fig. 2.


Fig. 2. Flow chart of the improved genetic algorithm

4 Experiment Analysis

4.1 Analysis of Numerical Results

In this section, we test two numerical examples of the FGLE. For comparison, OM, MHSS, PMHSS and PGSOR as well as the proposed NSM are tested. The methods are compared in terms of the number of iteration steps (IT) and the CPU time in seconds (CPU). In the preliminary tests, the optimal iteration parameters are set and β = 3.2 is selected; the tests were run on a Windows 10 PC ( GB RAM). The systems are solved by iterative methods such as the Krylov subspace method, and all splitting matrices are realized algorithmically according to the formulas. The NSM uses the conjugate gradient method with the T. Chan preconditioner to solve the Toeplitz sublinear systems involved. Likewise, we list the MHSS, PMHSS and PGSOR methods in the table.
Analyzing the results in Table 1, we can see that NSM requires fewer iteration steps and less computing time than OM, PMHSS, and MHSS; compared with NSM, PGSOR has fewer iteration steps, but NSM has less computing time. It should be noted that the non-smooth condition does not affect the time of the NSM method, which is more efficient than the other methods. The overall results show that NSM significantly outperforms all the other methods.

4.2 Numerical Test

Apply the difference scheme in this paper to compute the following boundary value problem:


Table 1. Numerical results (number of iteration steps IT and CPU time in seconds for the OM, MHSS, PMHSS, PGSOR and NSM methods at problem sizes 128² to 16384²; the detailed cell values could not be recovered from the source layout)

$$u_t = \frac{5}{4}u + (1+i)u_{xx} - (1+2i)|u|^2 u + (1+2i)|u|^2 u_x + (2+i)u^2 \bar{u}_x + (-1+2i)|u|^4 u, \quad u(0, t) = u(4\pi, t) \tag{11}$$

The above equation has the solution:

$$u(x, t) = \frac{1}{\sqrt{2}}\, e^{i\left(\frac{x}{2} - t\right)} \tag{12}$$

Denote:

$$\|e^n\|_\infty = \|u^n - U^n\|_\infty, \qquad \text{order} = \log_2 \frac{e(h_1, \tau_1)}{e(h_2, \tau_2)} \tag{13}$$
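Equation (13) amounts to a one-line computation of the observed convergence order from errors measured at two successive step sizes. The sketch below shows it with placeholder error values (illustrative only, not taken from the tables), where the second run halves both the spatial and temporal step.

```python
import math

# Observed order from Eq. (13): order = log2( e(h1, tau1) / e(h2, tau2) ),
# where (h2, tau2) = (h1/2, tau1/2). The error values are illustrative placeholders.
e_coarse, e_fine = 4.0e-3, 1.05e-3
order = math.log2(e_coarse / e_fine)
print(f"observed order ≈ {order:.2f}")  # ≈ 1.93, i.e. close to second-order accuracy
```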

The following numerical results are obtained by applying the difference scheme to this equation:

Table 2. Errors at different times t

T | t = 1 | t = 2 | t = 5 | t = 10
0.1 | 0.0012 | 0.0018 | 0.0036 | 0.0067
0.01 | 5.3047E-005 | 9.30367E-005 | 2.1608E-004 | 4.2398E-004

From the analysis of Table 2, it can be seen that the above calculation results verify well the convergence, stability and computational accuracy of the scheme in this paper.


5 Conclusions

In recent years, as the discretization of high-dimensional fractional operators has become more and more mature, more and more fractional equations can be solved quickly. In addition, the solution methods for fractional partial differential equations are becoming more and more diverse, and new approaches such as deep learning have become a research hot spot. With the increasing application of fractional equations, more and more engineering problems, such as image restoration, rely on fractional models. How to deal with new problems and the challenges brought by new methods is the work that remains to be addressed in the future.

References 1. Ganesh, M., Thompson, T.: A spectrally accurate algorithm and analysis for a GinzburgLandau model on superconducting surfaces. Multiscale Model. Simul. 16(1), 78–105 (2018) 2. Mckenna, P., Glotov, D.: Numerical mountain pass solutions of Ginzburg-Landau type equations. Commun. Pure Appl. Anal. 7(6), 1345–1359 (2017) 3. Nouri, K., Ranjbar, H., Baleanu, D., et al.: Investigation on Ginzburg-Landau equation via a tested approach to benchmark stochastic Davis-Skodje system. AEJ - Alexandria Eng. J. 60(6), 5521–5526 (2021) 4. Hilder, B.: Nonlinear stability of fast invading fronts in a Ginzburg-Landau equation with an additional conservation law. Nonlinearity 34(8), 5538–5575 (2021) 5. Figueira, M., Correia, S.: A generalized complex Ginzburg-Landau equation: global existence and stability results. Commun. Pure Appl. Anal. 20(5), 2021–2038 (2021) 6. Kachmar, A., Aydi, H.: Magnetic vortices for a Ginzburg-Landau type energy with discontinuous constraint II. Commun. Pure Appl. Anal. 8(3), 977–998 (2017) 7. López, J.L.: On nonstandard chemotactic dynamics with logistic growth induced by a modified complex Ginzburg-Landau equation. Stud. Appl. Math. 148(1), 248–269 (2022) 8. Ignat, R., Jerrard, R.L.: Renormalized energy between vortices in some Ginzburg-Landau models on 2-dimensional Riemannian manifolds. Arch. Ration. Mech. Anal. 239(1), 1–90 (2021) 9. Sadaf, M., Akram, G., Dawood, M.: An investigation of fractional complex Ginzburg-Landau equation with Kerr law nonlinearity in the sense of conformable, beta and M-truncated derivatives. Opt. Quant. Electron. 54(4), 1–22 (2022) 10. Ignat, R., Kurzke, M., Lamy, X.: Global uniform estimate for the modulus of two-dimensional Ginzburg-Landau vortexless solutions with asymptotically infinite boundary energy. SIAM J. Math. Anal. 52(1), 524–542 (2020) 11. Sci, G.: A jacobi collocation method for the fractional Ginzburg-Landau differential equation. Adv. Appl. Math. Mech. 12(1), 57–86 (2020) 12. Bezotosnyi, P.I., Dmitrieva, K.A., Gavrilkin, S.Y., et al.: Ginzburg-Landau calculations for inhomogeneous superconducting films. IEEE Trans. Appl. Supercond. 31(99), 1–7 (2020)

Object Detection in UAV Images Based on Improved YOLOv5 Zhenrui Chen(B) , Min Wang, and Jinhai Zhang Shandong Jiaotong University, Weihai, Shandong, China [email protected]

Abstract. Aiming at the problems of small targets, many instances, and complex backgrounds in UAV aerial images, in this paper, an improved YOLOv5-based algorithm for detecting objects in UAV images is presented. This paper enhances the robustness of an algorithm for recognizing aerial images by incorporating a spatial pyramid pooling network with a probability pooling method, introducing an upsampling network structure based on deconvolution and convolution attention mechanism. As a result, the issue of invalid features negatively impacting recognition accuracy is resolved, and a higher accuracy in recognizing aerial images is achieved. By conducting experiments on the VisDrone public dataset, it was found that the enhanced algorithm achieved an average accuracy of 34.9%, which is 3.21% higher than the average accuracy achieved by the original, unimproved algorithm. Keywords: YOLOv5 · object detection · UAV images · deconvolution · VisDrone dataset

1 Introduction

UAVs have high mobility, are not limited by ground traffic conditions, and have a broad monitoring field of vision. They are widely used in surveying and mapping, security inspection, search and rescue, military, and other fields [1, 2]. The combination of computer vision technology and UAVs can make the UAV perceive the surrounding environment, reduce the dependence on manual control, and further expand the application scope of UAVs. In recent years, deep learning has achieved outstanding results in image inspection and recognition. The two most prominent families among these are the two-stage detection algorithms, such as R-CNN, Faster R-CNN, and Mask R-CNN, and the single-stage detection algorithms without candidate frames, which typically include SSD [3], RetinaNet [4], and the YOLO series of algorithms. Compared to traditional algorithms, deep learning algorithms exhibit superior accuracy in target recognition tasks and are widely adopted in practical engineering applications. In 2015, YOLOv1 [5] was proposed; its core idea is similar to that of Faster R-CNN. Later, in YOLOv2 [6], the speed, accuracy, and object recognition capabilities were improved by integrating several optimization strategies such as batch normalization, a high-resolution classifier, and a priori boxes, along with increasing the number of recognized object categories.


The YOLOv3 architecture [7] incorporates the Feature Pyramid Network (FPN) and the darknet-53 network, which increase the network complexity while also enabling a flexible trade-off between speed and accuracy; this architecture significantly enhances computation speed. The network architecture of YOLOv4 [8] changed greatly: after numerous parameter-tuning experiments, an optimal balance between the number of parameters and overall performance was identified, resulting in improvements in both areas. YOLOv5 contributes significantly to making the network lightweight, faster, and easier to deploy.
In this paper, based on the YOLOv5 network architecture, a deep optimization is carried out: a probability pooling layer is introduced into the Spatial Pyramid Pooling Fast (SPPF) network [9], which further improves the feature extraction effect. In order to increase the field of view during the upsampling process, an upsampling module based on deconvolution is introduced into the neck network on top of the original upsampled pixels, which further improves the target recognition accuracy for UAV images.

2 YOLOv5 Target Recognition Algorithm

YOLOv5 is an advanced YOLO object detection and recognition algorithm. Drawing inspiration from CSPNet [10], it employs an enhanced CSPNet as the backbone network, as in the YOLOv4-based object detection algorithm, and performs image prediction at multiple scales. In terms of structure, YOLOv5 has four parts: input, backbone, neck, and head output. The structure can be observed in detail in Fig. 1.

2 YOLOv5 Target Recognition Algorithm YOLOv5 is an advanced YOLO object detection and recognition algorithm. Drawing inspiration from the CSPNet [10], this paper employs an enhanced CSPNet as the backbone network in the YOLOv4-based object detection algorithm and performs image prediction at multiple scales. In terms of structure, YOLOv5 has four parts: input, backbone, neck and head output. The structure in detail can be observed in Fig. 1.

Fig. 1. Schematic diagram of YOLOv5 network structure

YOLO v5 features a serialized network architecture that is distinct from earlier versions. The network includes four structures: YOLO v5s, YOLO v5m, YOLO v5l and


YOLO v5x. The architecture is modifiable by tuning two parameters, depth multiple and width multiple, which control the number of residual components and convolution kernels, respectively. Networks with different depths are obtained by setting different numbers of residual components in each Cross Stage Partial Network (CSPN); different numbers of convolution kernels are set in the Focus structure and each CSPN to obtain networks with different widths. By making these settings flexible, it is possible to achieve a balance between detection speed and accuracy that is tailored to specific needs. In the latest YOLOv5 (release v6.0), the authors further improved the network structure: the backbone network adopts a C3 structure with three convolutional layers. Figure 2 shows the C3 structure. The C3 structure is faster than the bottleneck CSP in both forward calculation and back propagation. In addition, in the backbone network, the authors replace spatial pyramid pooling (SPP) with fast spatial pyramid pooling (SPPF), replacing the original three parallel max-pooling operations with serial ones. The network structure of SPPF is shown in Fig. 3. Using the SPPF with serial max pooling yields the same results as SPP in practice and brings further performance improvements.

Fig. 2. C3 network structure

Fig. 3. SPPF network structure
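The depth and width multiples described above can be illustrated with a small sketch. The per-variant values and the divisible-by-8 channel rounding follow the commonly published YOLOv5 configuration files and are assumptions here, not figures stated in this paper.

```python
import math

# depth_multiple scales the number of repeated residual components in each C3/CSP block;
# width_multiple scales the number of convolution kernels (output channels).
VARIANTS = {"s": (0.33, 0.50), "m": (0.67, 0.75), "l": (1.00, 1.00), "x": (1.33, 1.25)}

def scale_depth(n_repeats, depth_multiple):
    return max(round(n_repeats * depth_multiple), 1)

def scale_width(channels, width_multiple, divisor=8):
    # round channels up to a multiple of `divisor`, as the public configs do
    return int(math.ceil(channels * width_multiple / divisor) * divisor)

for name, (gd, gw) in VARIANTS.items():
    print(f"YOLOv5{name}: 9 base repeats -> {scale_depth(9, gd)} repeats, "
          f"512 base channels -> {scale_width(512, gw)} channels")
```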


In the neck segment network, the Path Aggregation Network (PANet) [11] is used for feature fusion. PANet improves on FPN [12] by introducing a bottom-up information flow path, which shortens the distance that information has to travel and enhances the accurate localization information throughout the entire feature extraction network.
Choosing an appropriate loss function is critical for assessing the precision of predictions. At the head output, the loss function of the complete Intersection over Union loss (CIoU Loss) [13] is formed by introducing two parameters: the center point distance and the aspect ratio. The formula is as follows:

$$L_{CIoU} = 1 - IoU + \frac{\rho^2(p, p^{gt})}{c^2} + \alpha v \tag{1}$$

$$v = \frac{4}{\pi^2}\left(\arctan\frac{w^{gt}}{h^{gt}} - \arctan\frac{w}{h}\right)^2, \qquad \alpha = \frac{v}{(1 - IoU) + v}$$
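A minimal, scalar sketch of Eq. (1) for a single pair of boxes is given below. The (x1, y1, x2, y2) box format and the small eps safeguards are assumptions for illustration; this is not the vectorized implementation used inside YOLOv5.

```python
import math

def ciou_loss(box_p, box_g, eps=1e-9):
    """CIoU loss of Eq. (1) for a predicted and a ground-truth box given as (x1, y1, x2, y2)."""
    px1, py1, px2, py2 = box_p
    gx1, gy1, gx2, gy2 = box_g

    # IoU term
    iw = max(0.0, min(px2, gx2) - max(px1, gx1))
    ih = max(0.0, min(py2, gy2) - max(py1, gy1))
    inter = iw * ih
    union = (px2 - px1) * (py2 - py1) + (gx2 - gx1) * (gy2 - gy1) - inter
    iou = inter / (union + eps)

    # rho^2: squared distance between box centers; c^2: squared diagonal of the enclosing box
    rho2 = ((px1 + px2 - gx1 - gx2) ** 2 + (py1 + py2 - gy1 - gy2) ** 2) / 4.0
    cw = max(px2, gx2) - min(px1, gx1)
    ch = max(py2, gy2) - min(py1, gy1)
    c2 = cw ** 2 + ch ** 2 + eps

    # aspect-ratio consistency term v and its weight alpha
    v = (4.0 / math.pi ** 2) * (math.atan((gx2 - gx1) / (gy2 - gy1 + eps))
                                - math.atan((px2 - px1) / (py2 - py1 + eps))) ** 2
    alpha = v / ((1.0 - iou) + v + eps)

    return 1.0 - iou + rho2 / c2 + alpha * v

print(ciou_loss((0, 0, 2, 2), (1, 1, 3, 3)))
```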

3 The Proposed Method

Based on the YOLOv5 algorithm, this paper takes both accuracy and speed into account, and the input image size is flexible. After multi-layer convolution operations, the feature map enters the probability-pooling SPPF network, which samples the image, reduces the dimensionality, reduces the network parameters, and increases the local receptive field of the convolution kernel; in the neck convolution module, the super-resolution method is used. By improving these two aspects, the algorithm's detection accuracy is significantly enhanced.

3.1 Improved SPPF Network

YOLOv5 continues the SPPF network from YOLOv3. In the SPPF structure, maximum pooling is used. The objective of pooling is to condense the information in a specific region to facilitate the extraction and generalization of information. Pooling operations can achieve several goals, including reducing the dimensionality of the data, compressing features, expanding the receptive field, and achieving invariance to various transformations such as translation, rotation, and scaling. Therefore, when designing pooling operations, we should reduce the loss of information in the feature map while simplifying the computation. Two commonly used pooling methods are average pooling and max pooling. Average pooling computes the mean value of the features within each sub-region, preserving more background information. Max pooling outputs the maximum feature value within each sub-region, highlighting the strongest features, but it may lose information when the differences between features are not significant. Probability pooling is a method that assigns probabilities to pixels based on their values, and each pixel within a region contributes to the overall output according to its probability value. The contribution of each pixel is proportional to its probability, so pixels with higher values have a greater impact on the output. This design comprehensively considers the different contributions of each pixel in the region (Fig. 4).


Fig. 4. Probability pooling

The calculation process of probability pooling is as follows. First, compute the sum over the pooling region, $\sum_{k\in R_j} a_k$. Dividing each feature value by this sum gives the probability value of each feature, $p_i = a_i / \sum_{k\in R_j} a_k$. Then the weighted value within each region is computed as $s_j = \sum_{i\in R_j} p_i a_i$, where $R_j$ is the sampling window. Probability pooling is applied in the SPPF structure, and the improved probability-pooling SPPF structure is shown in Fig. 5.

Fig. 5. Probability pooling SPPF
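A possible PyTorch rendering of the probability pooling operation defined above is sketched below as a stride-1, same-padding layer, so that it could stand in for the max-pooling layers inside SPPF. The non-negativity assumption on activations and the epsilon safeguard are additions of this sketch, not details given in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProbPool2d(nn.Module):
    """Probability pooling: each value a_i in a window R_j contributes with weight
    p_i = a_i / sum_k(a_k), so the window output is s_j = sum_i p_i * a_i."""

    def __init__(self, kernel_size=5, eps=1e-6):
        super().__init__()
        self.k, self.eps = kernel_size, eps

    def forward(self, x):
        b, c, h, w = x.shape
        pad = self.k // 2
        # unfold into (B, C, k*k, H*W): one column of k*k values per pooling window R_j
        cols = F.unfold(x, self.k, padding=pad).view(b, c, self.k * self.k, h * w)
        weights = cols / (cols.sum(dim=2, keepdim=True) + self.eps)  # p_i per window
        out = (weights * cols).sum(dim=2)                            # s_j per window
        return out.view(b, c, h, w)

# quick shape check
y = ProbPool2d(5)(torch.rand(1, 16, 32, 32))
print(y.shape)  # torch.Size([1, 16, 32, 32])
```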

3.2 Super-Resolution Method

For upsampling operations, there are currently several interpolation methods available; these methods require a choice to be made when designing the network structure. In YOLOv5, nearest neighbor interpolation is used for upsampling.


Super-resolution methods can be applied after an upsampling method, in addition to sub-pixel convolution. Shi et al. [14] further studied the efficient sub-pixel convolution network and considered the functions of deconvolution and efficient sub-pixel convolution to be the same. In this paper, the super-resolution method is used to improve the nearest-neighbor-interpolation-based upsampling in YOLOv5 [15]. We introduce two layers. The first layer is expressed as an operation F1:

$$F_1(Y) = \max(0, W_1 * Y + B_1) \tag{2}$$

where Y is the upsampling result, W1 and B1 are the convolution weights and bias, and '*' denotes the convolution operation. The second layer is expressed as:

$$F_2(Y) = \max(0, W_2 * Y + B_2) \tag{3}$$

Here W2 and B2 likewise represent the weights and bias.
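Equations (2)-(3) can be read as two convolution + ReLU layers applied after nearest-neighbour upsampling. The sketch below follows that reading; the 3×3 kernel size and the unchanged channel count are assumptions, since the paper does not state them.

```python
import torch
import torch.nn as nn

class RefinedUpsample(nn.Module):
    """Nearest-neighbour upsampling followed by F1(Y) = max(0, W1*Y + B1)
    and F2(Y) = max(0, W2*Y + B2), as in Eqs. (2)-(3)."""

    def __init__(self, channels, scale=2):
        super().__init__()
        self.up = nn.Upsample(scale_factor=scale, mode="nearest")
        self.f1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)  # W1, B1
        self.f2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)  # W2, B2
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        y = self.up(x)               # Y: the upsampling result
        y = self.relu(self.f1(y))    # Eq. (2)
        return self.relu(self.f2(y)) # Eq. (3)

print(RefinedUpsample(256)(torch.rand(1, 256, 20, 20)).shape)  # torch.Size([1, 256, 40, 40])
```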

Fig. 6. Diagram of channel attention module


And the channel attention is computed as:

$$M_c(F) = \sigma\big(\mathrm{MLP}(\mathrm{AvgPool}(F)) + \mathrm{MLP}(\mathrm{MaxPool}(F))\big) = \sigma\big(W_1(W_0(F^c_{avg})) + W_1(W_0(F^c_{max}))\big) \tag{4}$$

where σ represents the sigmoid function, and the MLP weights are denoted by W0 ∈ R^{C/r×C} and W1 ∈ R^{C×C/r}. It is important to note that these MLP weights W0 and W1 are shared for both inputs, and W0 is followed by the ReLU activation function. The spatial attention module aims to focus on the positional information of the significant parts of the input feature map. This module is visualized in Fig. 7.

Fig. 7. Diagram of spatial attention module

The calculation of the spatial attention is performed as follows:

$$M_s(F) = \sigma\big(f^{7\times 7}([\mathrm{AvgPool}(F); \mathrm{MaxPool}(F)])\big) = \sigma\big(f^{7\times 7}([F^s_{avg}; F^s_{max}])\big) \tag{5}$$

where σ denotes the sigmoid function and f^{7×7} represents a convolution operation with a filter size of 7 × 7. In this paper, the convolutional attention mechanism module is used to strengthen the attention of the single convolution layers in the original network. In the process of passing shallow features to deeper layers, the network's learning of meaningful feature maps is further strengthened, which enables the network to better learn the feature information of the target. For a given test image, the proposed method can capture the discriminative features of the target more accurately, resulting in improved recognition performance.
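A compact PyTorch sketch of the two sub-modules of Eqs. (4)-(5) is given below. The reduction ratio r = 16 and the omission of biases are common defaults assumed here, not values stated in the paper.

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Channel attention from a shared MLP over average- and max-pooled descriptors (Eq. 4),
    followed by spatial attention from a 7x7 convolution over channel-wise avg/max maps (Eq. 5)."""

    def __init__(self, channels, r=16, kernel_size=7):
        super().__init__()
        self.mlp = nn.Sequential(                       # shared W0, W1
            nn.Conv2d(channels, channels // r, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // r, channels, 1, bias=False),
        )
        self.spatial = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x):
        # channel attention, Eq. (4)
        avg = self.mlp(torch.mean(x, dim=(2, 3), keepdim=True))
        mx = self.mlp(torch.amax(x, dim=(2, 3), keepdim=True))
        x = x * torch.sigmoid(avg + mx)
        # spatial attention, Eq. (5)
        s = torch.cat([x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))

print(CBAM(64)(torch.rand(2, 64, 32, 32)).shape)  # torch.Size([2, 64, 32, 32])
```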

4 Experiments In order to test the proposed improved network, the performance and speed are compared with other mainstream methods on the VisDrone dataset, and the results are analyzed quantitatively and qualitatively.

274

Z. Chen et al.

4.1 Experimental Environment and Dataset

The server configuration used in the experiment is an i7 9750H, 16 GB RAM, and an NVIDIA GeForce GTX1650 graphics card, with Ubuntu 18.04 LTS as the operating system. The deep learning framework is PyTorch 1.9. This paper employs the VisDrone dataset [16] to train and test the proposed model. The dataset includes 10 categories: pedestrians (people with walking or standing postures), people (people with other postures), cars, vans, buses, trucks, motorcycles, bicycles, awning tricycles and tricycles. The VisDrone dataset consists of 288 video clips in two different image sizes, 1360 × 765 and 960 × 540 pixels, providing a total of 10,209 still images captured by drones at different heights, including 6,471 training set images, 548 validation set images and 3,190 test set images, with a total of 2.6 million target instance samples. Figure 8 shows some of the images in VisDrone from different scenes and viewpoints.

Fig. 8. Partial image of VisDrone dataset

The model's overall detection performance is evaluated using the mean average precision (mAP) metric, which is calculated as follows:

$$\mathrm{Precision} = \frac{TP}{TP + FP}, \quad \mathrm{Recall} = \frac{TP}{TP + FN}, \quad AP = \int_0^1 p(r)\,dr, \quad mAP = \frac{1}{|C|}\sum_{c \in C} AP(c) \tag{6}$$

In the evaluation of the detection performance of the model, TP, FP, and FN respectively represent the targets that are correctly detected, falsely detected, and missed. The recall represents the recall rate, while the precision represents the precision of the detections. The precision-recall curve (p(r)) is obtained with different IOU thresholds, where


each threshold corresponds to a p(r) curve. The mAP is defined as the average of the AP values of all categories, where AP is the average precision and C is the set of categories.
Figure 9 displays the training progress by showing the variations of the loss value and average accuracy. The loss values of the training set and validation set are presented in A and B, respectively. The plot indicates that the loss value converges steadily as the number of iterations increases, and there are no signs of under-fitting or over-fitting, indicating that the model trained normally.

Fig. 9. Visualize the training process

4.2 Comparison of Detection Performance

In order to demonstrate the performance of the improved YOLOv5 algorithm for detecting various targets in UAV images, this paper compares and analyzes the target detection algorithm against the benchmark detection algorithm on the VisDrone test set. Table 1 shows the comparison of the YOLOv5 and improved YOLOv5 algorithms in terms of the mAP values for the 10 target categories of the VisDrone test set. Furthermore, this paper uses YOLOv5s to train our model.
Table 1 shows that the enhanced YOLOv5 algorithm has an overall better performance compared to the original algorithm, with an increase of 2.2% in mAP. For the target categories with large aspect ratios and a small number of instances, such as pedestrians and people, optimal mAP values of 35% and 28.7% were achieved, respectively. From the table, we can see that the improved YOLOv5 algorithm has great advantages in


Table 1. Comparison of YOLOv5 and Improved YOLOv5 algorithms on the VisDrone test set

Class | YOLOv5 P | YOLOv5 R | YOLOv5 mAP | Improved P | Improved R | Improved mAP
all | 0.425 | 0.325 | 0.304 | 0.459 | 0.318 | 0.326
pedestrian | 0.442 | 0.264 | 0.264 | 0.419 | 0.351 | 0.35
people | 0.414 | 0.176 | 0.168 | 0.495 | 0.278 | 0.287
bicycle | 0.303 | 0.0906 | 0.0916 | 0.275 | 0.105 | 0.103
car | 0.593 | 0.73 | 0.708 | 0.485 | 0.729 | 0.714
van | 0.395 | 0.388 | 0.344 | 0.493 | 0.342 | 0.36
truck | 0.404 | 0.393 | 0.336 | 0.517 | 0.268 | 0.279
tricycle | 0.258 | 0.213 | 0.156 | 0.457 | 0.211 | 0.226
awning-tricycle | 0.361 | 0.179 | 0.168 | 0.303 | 0.113 | 0.121
bus | 0.681 | 0.501 | 0.541 | 0.414 | 0.462 | 0.282
motor | 0.402 | 0.314 | 0.252 | 0.365 | 0.36 | 0.139

dealing with the task of UAV image target detection, and its detection effect is impressive. In addition, comparative experiments were conducted on two further indicators, recognition speed and model size, and the experimental results are shown in Table 2.

Table 2. Comparison of algorithm recognition speed and model size

Method | Recognition speed / (frame·s⁻¹) | Model size / MB
YOLOv4 | 65 | 256
YOLOv5 | 86 | 41.2
YOLOx | 96 | 45.8
Improved-YOLOv5 | 103 | 41.6

As can be seen from Table 2, compared with YOLOv4, YOLOv5 improves the recognition speed by 21 frames/s, to 86 frames/s. The recognition speed of YOLOx reaches 103 frames/s, another step higher than YOLOv5. Although the algorithm is improved by using the improved SPPF network, the super-resolution method, and the convolutional attention mechanism module, the parameters in the constructed model are reduced so that the algorithm executes faster; it reaches 96 frames/s, higher than the YOLOv4 and YOLOv5 algorithms. In terms of model size, compared with YOLOv4, the model size of YOLOv5 has been greatly reduced, from 256 MB to 41.2 MB, a reduction of 83.9%. This is because YOLOv5 uses the PyTorch framework and the trained model is smaller and easier to deploy; the size of YOLOv5-Dense is almost the same as that of YOLOv5, indicating that adding skip connections hardly increases the computation of the model while obtaining higher recognition accuracy. Improved-YOLOv5 increases the model size by 4.6 MB compared with YOLOv5, but it is still far smaller than YOLOv4.


model is smaller, It is easier to deploy, and YOLOv5-Dense is the same as YOLOv5 is almost the same, indicating that adding jump connections can be done in While obtaining higher recognition accuracy, it hardly increases the calculation of the model. Improved-YOLOv5 increases compared with YOLOv5 4.6MB, but still far less than YOLOv4.

5 Conclusion

The detection performance for UAV targets is enhanced by improving the network structure of the YOLOv5 algorithm. By using probability pooling to improve the maximum pooling layers in the fast spatial pyramid pooling layer (SPPF), the feature extraction effect is optimized, and features with insignificant differences can be retained while strong features are still highlighted. In addition, the super-resolution method is used for upsampling at the neck end to dynamically adjust the weight of each input feature, fully emphasizing the importance of shallow fine-grained feature information in the feature fusion process and effectively improving the detector's ability to perceive small-target details. Finally, to enhance the feature extraction capability of the original neural network, this study introduces the convolutional attention mechanism module CBAM. The experimental results demonstrate that the improved algorithm achieves a higher mAP0.5 (a 2.2% improvement) compared to the original algorithm, while still maintaining real-time performance.

References 1. Zhu, X., Zhang, Xinwei, Gu, M., Zhao, Y., Chen, F.: Spruce counting method based on UAV visible images. J. Forest. Eng. 6(04), 140–146 (2021). https://doi.org/10.13360/j.issn.20961359.202008020 2. Zheheng, L., Peng, D., Fuquan, J., Sen, S., Rulan, W., Gangsheng, X.: The application of illegal building detection from VHR UAV remote sensing images based on the convolutional neural network. Bull. Surv. Mapp. (04), 111–115 (2021) .https://doi.org/10.13474/j.cnki.112246.2021.0120 3. Liu, W., Anguelov, D., Erhan, D., et al.: SSD: single shot multiBox detector. In: European Conference on Computer Vision, pp. 21–37 (2020) 4. Lin, T.Y., Goyal, P., Girshick, R., et al.: Focal loss for dense object detection. In: IEEE Transactions on Pattern Analysis & Machine Intelligence, pp. 2999–3007 (2019) 5. Redmon, J., Divvala, S., Girshick, R., et al.: You only look once: unified, real-time object detection. IEEE Conf. Comput. Vis. Pattern Recogn. 2020, 779–788 (2016) 6. Redmon, J., Farhadi, A.: YOLO9000: better, faster, stronger. In: 2017 IEEE Conference on Computer Vision and Pattern Recognition, pp. 6517–6525 (2019) 7. Redmon, J., Farhadi, A.: YOLOv3: an incremental improvement. arXiv preprint arXiv:1804. 02767v1 (2019) 8. Bochkovskiy, A., Wang, C.Y., Liao, H.Y.M.: YOLOv4: optimal speed and accuracy of object detection. arXiv preprint arXiv:2019.10934v1 (2020) 9. He, K., Zhang, X., Ren, S., et al.: Spatial pyramid pooling in deep convolutional networks for visual recognition. IEEE Trans. Pattern Anal. Mach. Intell. 37(9), 1904–1916 (2019)


10. Wang, C.Y., Liao, H.Y.M., Wu, Y.H., et al.: CSPNet: a new backbone that can enhance learning capability of CNN. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Seattle, WA, USA, pp. 1571–1580. IEEE (2020) 11. Liu, S., Qi, L., Qin, H., et al.: Path aggregation network for instance segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8759–8768 (2019) 12. Lin, T.Y., Dollár, P., Girshick, R., et al.: Feature pyramid networks for object detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 936–944 (2020) 13. Zheng, Z., et al.: Distance-IoU loss: faster and better learning for bounding box regression. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, no. 07 (2020) 14. Zheng, Z., et al.: Enhancing geometric factors in model learning and inference for object detection and instance segmentation. IEEE Trans. Cybern. 52(8), 8574–8586 (2021) 15. Shi, W., et al.: Is the deconvolution layer the same as a convolutional layer?. arXiv preprint arXiv:1609.07009 (2020) 16. Dong, C., Loy, C.C., He, K., Tang, X.: Image super-resolution using deep convolutional networks. IEEE Trans. Pattern Anal. Mach. Intell. (TPAMI). Preprint (2020) 17. Zhu, P.F., Wen, L.Y., Du, D.W., et al.: Vision meets drones: past, present and future (2020). arXiv preprint: https://arxiv.org/abs/2001.06303

Evaluation Indicators of Power Grid Planning Considering Large-Scale New Energy Access Zhirui Tang and Guihua Qiu(B) Foshan Power Supply Bureau of Guangdong Power Grid Co., Ltd, Foshan 528000, Guangdong, China [email protected]

Abstract. Due to the continuous maturing of new energy (NE) power generation, conversion, and storage technologies, the application cost is gradually decreasing, so the installed capacity of NE such as photovoltaic and wind power (WP) is increasing year by year. On the other hand, with the support of NE policies, new energy power generation plans and power grid (PG) plans have been promulgated successively, opening up important channels for these environmentally friendly energy sources to enter thousands of households. Therefore, driven by science and technology and national policies, large-scale NE sources will be widely connected to the power grid, providing strong support for solving the problems of energy shortage and environmental pollution in my country. This paper takes the reliability index and the economic index as its starting point; the connected NE sources are wind power and photovoltaic power generation. In terms of the reliability index, the influence of the two new energy sources connected to the power system on capacity confidence and the BVDI is analyzed. When a wind farm is connected to the system, the Weibull distribution parameters are used to reflect the change of the BVDI with the parameters. The results show that changes in the Weibull distribution parameters have little effect on the capacity confidence, while the BVDI increases as the Weibull distribution parameter (DP) increases. When a photovoltaic power plant is connected to the system, the Beta DP is used to reflect the change of the BVDI with the parameter. The results show that changes in the Beta DP have little influence on the capacity confidence; when the α of the Beta DP is constant, the BVDI becomes smaller as β increases, and when the β of the Beta DP is constant, the BVDI becomes larger as α increases. In terms of economic indicators, four new energy connection schemes are constructed and compared in terms of curtailed NE electricity, saved fuel costs, and pollution emission costs. The results verify that the different schemes for connecting NE to the power grid can play a role in energy saving and emission reduction, and can to a certain extent save costs. Keywords: Large-Scale New Energy · Power Grid Planning · Evaluation Index · Capacity Confidence

1 Introduction

The evaluation indicators of PG planning are analyzed so that the rationality of a PG planning scheme can be reflected through these indicators and the scheme can be further optimized and adjusted,


so as to achieve an overall balance of the new energy access capacity at the power grid planning stage and strive for coordinated development, i.e., an optimal PG planning scheme that meets both the needs of NE access and the needs of PG development [1, 2]. So far, research on the evaluation indicators of PG planning considering large-scale (LS) NE access has achieved good results. In China, wind power grid planning is based on models of clustered wind power output. At this stage, wind power models that consider uncertainty are better suited to analyzing the output uncertainty of a single wind farm while ignoring the wake effect inside the farm; they cannot give a comprehensive and accurate description of the impact of LS clustered WP integration on grid capacity planning [3, 4]. The problem to be addressed in current power grid planning is therefore how to establish a more comprehensive and detailed large-scale clustered wind power output model, one that accurately reflects wind speed variation and the effect of wind speed differences at different locations in the farm on unit output, while also fully accounting for the correlation and complementarity among wind farms [5]. Considering that LS integration of NE into the PG greatly affects PG planning, construction, and operation, some scholars have focused on how to link NE access in the western region with PG planning schemes to achieve coordinated development of NE and the PG. Taking full account of the distribution and abundance of NE resources in the western region, and after carefully analyzing the characteristics of NE generation there as well as the PG's absorption capacity and external transmission capacity, an overall scheme for transmitting NE to the PG has been proposed; this overall transmission planning scheme is combined with the overall planning of the regional PG, and the rationality of the PG planning scheme is analyzed in terms of environmental and economic indicators [6, 7]. Although there is not much literature on grid planning evaluation indicators for LS new energy access, research on the impact of intermittent NE sources such as wind energy on grid planning has received extensive attention. This paper analyzes the influence of grid-connected NE wind power on grid frequency and stability, and then puts forward calculation formulas for reliability coefficients such as capacity reliability and the bus voltage distribution index. Finally, the parameters of the Weibull distribution and the Beta distribution are discussed separately, their influence on the reliability of different new energy systems is analyzed, the reliability and economic indicators of wind farms and photovoltaic power plants connected to the grid are evaluated, and the best grid planning scheme is obtained.


2 Reliability Evaluation Index of New Energy Power Generation and Grid Planning

2.1 The Impact of Grid-Connected New Energy Generation on the Power System

(1) Influence on grid frequency
Large-scale power grids have sufficient backup power and adaptability, so frequency stability is not a major concern when new energy is used for generation [8]. However, unlike a large PG, a small PG has weak regulation capability during operation, and the frequency offset caused by NE access will affect the stability of PG operation [9]. Small grids therefore need spinning reserve capacity. NE wind power is random and intermittent, so to keep the power supply continuously stable, the PG needs to adjust the spinning reserve capacity at any time [10].

(2) Impact on grid stability
At present, the distribution network still uses a passive radial structure. In such a network, power information collection, switching operations, and power supply are relatively simple, and power monitoring, control, and transmission are all handled by the power supply department [11]. The access of new energy sources complicates these tasks; in particular, the "islanding" phenomenon that may occur after NE access must be controlled and avoided, that is, a new energy source must be prevented from continuing to supply the local distribution network it is in after that network has been disconnected from the main distribution network [12].

2.2 Reliability Index Coefficient

(1) Reliability of new energy capacity
The capacity credibility after adding new energy to the power system is calculated as:

C_1 = \frac{C_R}{C_{new}}   (1)

where C_1 is the capacity reliability, C_R is the rated capacity of the conventional units equivalent to the grid-connected new energy, and C_{new} is the rated capacity of the grid-connected new energy [13].

(2) Bus voltage distribution index (BVDI)

\mathrm{BVDI} = \sqrt{\frac{1}{m-1}\sum_{i=1}^{m}\left(U_i - \bar{U}\right)^2}   (2)


U_i is the i-th voltage observation, \bar{U} is the average voltage, and m is the sample size.

The probability distribution of wind speed is mainly positively skewed rather than normal. Its calculation formula is:

h(m) = \frac{k}{b}\left(\frac{m}{b}\right)^{k-1}\exp\left[-\left(\frac{m}{b}\right)^{k}\right]   (3)

The probability model of photovoltaic power generation consists of two parts: a light intensity model and a photovoltaic cell model. The calculation formula is:

h(t) = \frac{\mu(\gamma+\beta)}{\mu(\gamma)\,\mu(\beta)}\left(\frac{t}{t_{max}}\right)^{\gamma-1}\left(1-\frac{t}{t_{max}}\right)^{\beta-1}   (4)

The simplified photovoltaic cell model is:

H = \sum_{i=1}^{M} H_i   (5)

\bar{X} = \frac{\sum_{i=1}^{M} H_i X_i}{H}   (6)
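As a rough illustration of how Eqs. (1)–(4) can be evaluated numerically, the following Python sketch computes the capacity credibility and BVDI from sampled bus voltages and draws Weibull-distributed wind speeds and Beta-distributed light intensity ratios. All parameter values and sample sizes are illustrative assumptions, not data from this paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Eq. (1): capacity credibility C1 = C_R / C_new (capacities in MW, illustrative values).
C_R, C_new = 30.0, 110.0
C1 = C_R / C_new

# Eq. (2): bus voltage distribution index from m per-unit voltage observations U_i.
U = rng.normal(loc=1.0, scale=0.02, size=200)
BVDI = np.sqrt(np.sum((U - U.mean()) ** 2) / (len(U) - 1))

# Eq. (3): Weibull-distributed wind speeds with shape k and scale b.
k, b = 2.5, 7.3
wind_speed = b * rng.weibull(k, size=8760)          # hourly samples for one year

# Eq. (4): Beta-distributed relative light intensity t / t_max with parameters (gamma, beta).
gamma_p, beta_p = 0.6, 3.8
light_ratio = rng.beta(gamma_p, beta_p, size=8760)

print(f"C1 = {C1:.2%}, BVDI = {BVDI:.4f}")
print(f"mean wind speed = {wind_speed.mean():.2f} m/s, mean light ratio = {light_ratio.mean():.3f}")
```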

2.3 Construction of the Evaluation Index System for Power Grid Planning

Power grid planning is a crucial part of building a power grid, and good planning is the cornerstone of a grid that operates securely, stably, and economically. It is also an essential prerequisite for ensuring that funds for power construction are used reasonably, efficiently, and completely. Research on evaluation indicators from a single perspective refers to the in-depth decomposition and quantification of indicators in one specific area, such as power supply reliability, operating economy, or safety, with evaluation results given for that aspect. There are evaluations of the grid's power quality indicators, but network loss is the main focus of the economic analysis of regional power grid operation. The evaluation process of the distribution network is shown in Fig. 1.


Fig. 1. Evaluation process of the power distribution network (obtain the original data of the distribution network indicators, score each indicator based on the original data, then calculate the scores and the comprehensive evaluation)

Since the indicators are interrelated, a change in any one indicator of the system will cause changes in the other indicators. Ignoring this issue could make further technical calculations inaccurate, which would harm the economy and the smooth operation of the power system. It is therefore essential, when building the evaluation index system, to examine the laws governing the many indicators and their correlations, so that the effects of uncertain factors on the power grid can be quantified during planning and the power system can operate safely and efficiently. The analysis process of the reference-layer indicators of the power distribution network is shown in Fig. 2.


Fig. 2. Analysis process of the reference-layer indicators of the power distribution network (the reference layer covers grid safety, customer satisfaction, corporate social responsibility, and corporate profit maximization, which are further decomposed into human safety and system reliability, power product quality and fulfillment of power service commitments, power supply and green environmental responsibility, and asset and sales revenue management)

3 Experimental Research

3.1 Research Purpose

Growing electricity demand introduces various uncertain factors into the evaluation of PG planning, which may affect the normal operation of the power system; this problem has attracted the attention of both power companies and consumers. The goal of PG planning is to let the PG develop well, adapt to the economic requirements of the power supply area, and play its leading role. PG planning is an important part of the economy and development of the power supply area and precedes any power grid project. It has extremely complex characteristics, mainly reflected in the uncertainty, correlation, wide range, and scale of the indicators. Therefore, to formulate a plan that meets both the power demand and the environmental protection requirements of electricity consumption, research on planning schemes for LS NE access to the power grid must start from the evaluation indicators.


3.2 Research Content

The research content of this paper consists of two parts. First, for the reliability indicators (RI), the new energy capacity confidence and the BVDI are calculated for a wind power access system as the Weibull distribution parameters change, and for a photovoltaic power station access system as the Beta distribution parameters change; these two reliability indicators reflect the reliability of LS NE access to the grid. Second, the economic indicators (EI) are generally reflected by cost. Four NE access schemes are therefore formulated, comparing the economic cost when a wind farm, a photovoltaic power station, or another NE source is connected to the power system alone and when two NE sources are connected at the same time. A reduced cost indicates that connecting NE to the PG can improve economic benefits.

4 Analysis of the Results of Power Grid Planning Evaluation Indicators

4.1 Reliability Index

Table 1. Calculation results of system reliability index under different Weibull distribution parameters

k   | c   | C1     | BVDI Bus 1 | BVDI Bus 2 | BVDI Bus 3
1   | 4.5 | 26.74% | 0.0016     | 0.000171   | 0.000438
1.3 | 4.6 | 27.13% | 0.0022     | 0.000234   | 0.000462
1.2 | 3.8 | 28.45% | 0.0013     | 0.000153   | 0.000379
2.5 | 7.3 | 28.57% | 0.0020     | 0.00246    | 0.000586
2.8 | 6.9 | 29.33% | 0.0017     | 0.000213   | 0.000507
3   | 8.1 | 27.28% | 0.0024     | 0.000268   | 0.000615

Table 1 shows the calculation results (CS) of the RI of the WP access system under different Weibull distribution parameters (k, c). It is assumed that the new energy access points are busbars 1, 2 and 3, so the bus voltage distribution index is computed for busbars 1, 2 and 3 respectively. From the perspective of capacity confidence, the Weibull distribution parameters have little effect, because the new energy capacity confidence is mainly related to the original system itself and to the location and capacity of the new energy access. Generally speaking, the larger the Weibull distribution parameters, the larger the bus voltage distribution index becomes when the wind-farm new energy is added to the power system. Comparing the bus voltage distribution indices, the BVDI of busbar 1 is the largest, indicating that the addition of the wind farm worsens the voltage quality of busbar 1.

Table 2. Calculation results of system reliability index under different Beta parameters

A   | β   | C1     | BVDI Bus 1 | BVDI Bus 2 | BVDI Bus 3
0.5 | 1.2 | 11.35% | 0.0024     | 0.000263   | 0.000548
0.5 | 2.4 | 10.21% | 0.0019     | 0.000241   | 0.000536
0.5 | 3.6 | 12.72% | 0.0017     | 0.000192   | 0.000483
0.6 | 3.8 | 12.58% | 0.0021     | 0.00194    | 0.000486
0.7 | 3.8 | 9.87%  | 0.0022     | 0.000257   | 0.000494
0.8 | 3.8 | 10.63% | 0.0025     | 0.000268   | 0.000517

Table 2 shows the CS of the RI of the photovoltaic power station access system under different Beta distribution parameters (α, β), where α is denoted A in Table 2. From the perspective of capacity confidence, the effect of the Beta distribution parameters is not obvious, for the same reason that the addition of the wind farm does not affect capacity confidence. From the bus voltage distribution index, when α stays unchanged and β increases, the BVDI gradually decreases; when α increases and β stays unchanged, the BVDI gradually increases. Among the three busbars, the voltage distribution index of busbar 1 is the largest, indicating that the addition of the photovoltaic power station worsens the voltage quality of busbar 1.

4.2 Economic Indicators

When the power system is connected to different new energy sources, the economic indicators under the four NE connection schemes are compared. The schemes are as follows:
Option 1: connect a photovoltaic power station with an installed capacity of 50 MW;
Option 2: connect a wind farm with an installed capacity of 110 MW;
Option 3: connect a photovoltaic power station with an installed capacity of 50 MW and a wind farm with an installed capacity of 110 MW;
Option 4: connect two photovoltaic power stations with an installed capacity of 50 MW and a wind farm with an installed capacity of 110 MW.


After the four NE schemes are connected to the power system, the amount and proportion of curtailed new energy are shown in Fig. 3, and the fuel cost and pollution emission cost are shown in Fig. 4. The data in the charts show that new energy access contributes to energy conservation and emission reduction to varying degrees. When the connected new energy is a photovoltaic power station, the probability of curtailing new energy is smaller than for a wind farm, because the time distribution of photovoltaic output roughly matches the load. Option 4 has the largest curtailment probability because, given the power system's ability to accept NE and the stability constraints that must be respected, the system's NE acceptance capability cannot be increased indefinitely. Among the four grid-connection planning schemes, scheme 4 achieves good economics.

Fig. 3. Abandoned new energy power (MW) and proportion (%) for plans 1–4


Fig. 4. Economic cost savings (yuan/day) for plans 1–4: saved fuel cost and saved pollution discharge cost

5 Conclusion

The simulation results of connecting different forms of new energy to the power system show that, for the reliability indicators, connecting wind farms, photovoltaic power plants, or other forms of new energy has no obvious impact on capacity confidence but does affect the bus voltage distribution index; among the three busbars at the new energy access points, connecting either the wind farm or the photovoltaic power station degrades the voltage quality of busbar 1. For the economic indicators, new energy access plays a positive role in energy conservation and emission reduction and can save economic costs. Because wind farm output tends to run counter to load changes (anti-peak-shaving), the curtailment probability of photovoltaic generation is slightly lower, and combined wind-solar operation adapts better to changes in system load. Therefore, from the perspective of economic benefits, the access mode of new energy should be planned as a whole. In this paper, connecting wind farms and photovoltaic power plants to the power system at the same time saves more economic cost than connecting them separately, although the probability of curtailing new energy is also higher, indicating that grid connection of large-scale wind farms, photovoltaic power plants, and other new energy sources can improve economic benefits.


References

1. Gür, T.M.: Review of electrical energy storage technologies, materials and systems: challenges and prospects for large-scale grid storage. Energy Environ. Sci. 11(10), 2696–2767 (2018)
2. Alshammari, B.M.: Assessment of reliability and quality performance using the impact of shortfall generation capacity index on power systems. Eng. Technol. Appl. Sci. Res. 9(6), 4937–4941 (2019)
3. Lonnqvist, T., Sandberg, T., Birbuet, J.C., et al.: Large-scale biogas generation in Bolivia – a stepwise reconfiguration. J. Clean. Prod. 180(APR.10), 494–504 (2018)
4. Badi, A., Mahgoub, I.: ReapIoT: reliable, energy-aware network protocol for large-scale internet of things (IoT) applications. IEEE Internet Things J. 8(17), 13582–13592 (2021)
5. Schumacher, K.: Approval procedures for large-scale renewable energy installations: comparison of national legal frameworks in Japan, New Zealand, the EU and the US. Energy Policy 129(JUN.), 139–152 (2019)
6. Elbasuony, G.S., Aleem, S., Ibrahim, A.M., et al.: A unified index for power quality evaluation in distributed generation systems. Energy 149(APR.15), 607–622 (2018)
7. Milanović, J.V., Abdelrahman, S., Liao, H.: Compound index for power quality evaluation and benchmarking. IET Gener. Transm. Distrib. 12(19), 4269–4275 (2018)
8. Bakhshi, R., Sadeh, J.: Economic evaluation of grid-connected photovoltaic systems viability under a new dynamic feed-in tariff scheme: a case study in Iran. Renew. Energy 119(APR.), 354–364 (2018)
9. Safaee, S., Ketabi, A., Farshadnia, M., et al.: A multi-port MMC topology with reduced capacitor size for use in grid-connected PV systems. Energy Sci. Eng. 9(11), 2019–2035 (2021)
10. Mishra, N., Singh, B.: Fifteen-level converter with MPC control for grid-connected systems. Int. J. Power Energy Syst. 40(1), 36–45 (2020)
11. Rasool, S., Islam, M.R., Muttaqi, K.M., et al.: Coupled modeling and advanced control for smooth operation of a grid-connected linear electric generator based wave-to-wire system. IEEE Trans. Ind. Appl. 56, 5575–5584 (2020)
12. Messo, T., Luhtala, R., Aapro, A., et al.: Accurate impedance model of a grid-connected inverter for small-signal stability assessment in high-impedance grids. IEEE J. Ind. Appl. 8(3), 488–496 (2019)
13. Isen, E., Bakan, A.F., et al.: Highly efficient three-phase grid-connected parallel inverter system. J. Modern Power Syst. Clean Energy 6(05), 239–249 (2018)

Certificateless Blind Proxy Signature Algorithm Based on PSS Standard Model

Li Liu(B) and You You

Anhui Technical College of Mechanical and Electrical Engineering, Wuhu, Anhui, China
[email protected]

Abstract. In recent years, the theory and technology of certificateless public key systems have been continuously enriched and developed, and their security and efficiency have been further improved. Research on proxy blind signatures in certificateless public key cryptosystems can therefore meet the higher security and efficiency requirements that arise when proxy blind signatures are applied. This paper studies and analyzes the certificateless blind proxy signature (BPS) algorithm based on the PSS standard model (SM): it analyzes the formal definition and security model of the certificateless standard signature protocol of the PSS SM, discusses the specific process of a certificateless proxy blind signature scheme, and proposes a certificateless BPS algorithm. Security analysis and efficiency comparison show that the proposed scheme not only satisfies the security requirements of a proxy blind signature scheme but is also highly efficient.

Keywords: PSS Standard Model · Certificateless Signature · Blind Proxy Signature · Signature Algorithm

1 Introduction

With the rapid development of information networks, network security has been seriously challenged, and ensuring a secure network environment has become everyone's concern. The key measure for secure network communication is cryptography. The traditional approach is to sign before encrypting, while the current approach integrates signing and encryption, which both reduces the complexity of certificate management and avoids the security defect of key escrow. Based on a certificateless scheme, this paper studies and analyzes the certificateless BPS algorithm under the PSS SM [1].

Certificateless BPS algorithms based on the PSS SM have been studied by many scholars at home and abroad. Tiliwalidi et al. proposed a multi-bank electronic payment protocol based on a quantum multi-proxy blind signature; compared with classical electronic payment protocols, quantum payment protocols can protect users' anonymity while also providing different payment options, and the work uses quantum multi-agent blind signatures, the quantum key distribution (QKD) protocol, and one-time pads. He et al. proposed a designated verifier proxy blind signature (DVPBS) scheme for UAV networks and proved that the scheme is unforgeable


under adaptive chosen-message attack in the random oracle model. The efficiency of the DVPBS scheme was compared with other signature schemes, and the experimental results show that it is effective [2]. A blind signature is a special kind of digital signature, and its design ideas and methods can serve as references for the design of blind proxy signatures. This paper therefore proposes a certificateless BPS algorithm and analyzes its security based on the PSS SM. The certificateless BPS scheme in this paper not only realizes a multi-receiver, multi-message mechanism in which each receiver obtains its own unique ciphertext, making it more convenient for the sender to send messages, but, compared with existing signcryption schemes, it also satisfies confidentiality, unforgeability, verifiability, and multi-message communication under the two types of adversaries [3, 4].

2 Certificateless BPS Algorithm Based on PSS SM

2.1 Certificateless Standard Signature Protocol of the PSS SM

2.1.1 Formal Definition and Security Model

Interaction phase: according to the actual execution environment, the attacker adaptively issues the following requests or queries to the challenger.
Entity public key query: given the queried entity, the challenger calls the SetUKey algorithm to generate the entity's public key and sends it to the attacker.
Update public key request: given the new public key chosen by the attacker, the challenger finds the corresponding public key entry of the entity in its record table and updates it.
Entity partial private key query: given the queried entity, the challenger calls the ExtPPK algorithm to generate the entity's partial private key and sends it to the attacker.
Message signature query: given a signing entity and the corresponding message, the challenger first calls the ExtPPK algorithm to generate the partial private key, further calls the SetUKey algorithm to obtain the entity key, then executes the Sign algorithm to generate the signature of the message under that entity, and finally sends it to the attacker.
Signature forgery: after the adaptive interaction phase, the attacker outputs a signature of a message under the target entity. The attacker wins the interaction with the challenger if all of the following conditions hold at the same time: the signature is a valid signature of the message under the target entity; the attacker has not queried the partial private key or the secret value of the target entity; and the attacker has not queried the signature of that message under the target entity [5].
A certificateless standard digital signature protocol is considered secure if it satisfies existential unforgeability; in other words, it prevents attackers from forging signatures on new messages. In practice, some applications (such as authenticated key exchange protocols) have higher security requirements: the protocol should prevent attackers from forging a new signature even for messages that have already been signed. To capture this attack behavior, in the above two simulation games the attacker is allowed to query the signature of the target message under the identity of the target entity [6]. A certificateless standard signature protocol satisfies strong existential unforgeability if, in addition to satisfying existential unforgeability, given some known valid signature


information of the target message, the attacker still cannot forge a new valid signature of that message.

2.2 Basic Concepts of Blind Signature

In a blind signature, unblinding corresponds to taking the signed document out of an envelope, while the document cannot be read by anyone as long as it remains inside the envelope. In an e-cash system, to ensure that the electronic money used by consumers is authentic and usable, the bank blind-signs it before it can be used, which guarantees the authenticity of the e-money while preserving the anonymity of consumer information [7, 8]. Many different blind signature schemes have been proposed, which provide a very good reference for the design of BPS schemes.

2.3 General Algorithm for Certificateless Proxy Blind Signature

A certificateless proxy blind signature scheme generally has five legal participants: the key generation center KGC, user X (the original signer), user Y (the proxy signer), the message owner T, and the signature verifier. KGC generates the system parameters and the users' partial private keys and publishes the public system parameters. The original signer X completes the delegation by generating an authorization certificate that grants the signing right to the proxy signer Y. The message owner T blinds the message n. The proxy signer Y generates the proxy signing private key and then signs the blinded message. The signature verifier verifies the correctness of the signature using the public system parameters and the users' public key information. The construction process of the certificateless proxy blind signature scheme is shown in Fig. 1.

2.3.1 Specific Process of the Certificateless Proxy Blind Signature Scheme

Delegation: the original signer X generates the authorization information nY, computes the authorization signature Sig(nY), and sends it to Y; Y verifies whether the authorization signature Sig(nY) is valid and, if so, generates a proxy key; otherwise Y refuses to accept it or requests retransmission.
Proxy blind signature: the proxy signer Y signs the blinded message and sends the signature Sig(T(n)) to T; T unblinds Sig(T(n)) to obtain the proxy blind signature Sig(n) [9].
Signature verification: the verifier runs the verification algorithm on the message n, params, the users' identity information, the users' public keys, and the signature Sig(n); it returns T if the signature is valid and F otherwise. The flow of the certificateless blind proxy signature algorithm based on the PSS standard model is shown in Fig. 2.


Fig. 1. Proxy blind signature process

As can be seen from Fig. 2, the certificateless blind proxy signature process solves the problems of certificate storage and management and of key escrow. Proxy signature technology is adopted to realize the delegation of signing. This paper proposes a new certificateless proxy signature method by combining certificateless signatures with proxy re-signatures, and then uses secret sharing and threshold cryptography to effectively solve the problem of excessive proxy authority in proxy re-signature protocols. On this basis, a concrete certificateless threshold proxy re-signature scheme is established and verified in the standard model.


Fig. 2. Flow chart of the certificateless blind proxy signature algorithm based on the PSS standard model (the blocks cover the signing process with the signer's information and identity summary producing the electronic signature and issued certificate, and the verification process comparing the information summary and the signer's identity for consistency and checking the certificate information with the algorithm)

2.3.2 Specific Description of the Scheme

(1) Generation of system parameters
Let f be a large prime and g a large prime factor of f - 1; choose q ∈ Z_f^* with q^g ≡ 1 (mod f). The two one-way hash functions are:

H_1: \{0,1\}^* \times Z_f^* \to Z_g^*,\qquad H_2: \{0,1\}^* \times Z_f^* \to Z_g^*   (1)

b = q^{a} \bmod f   (2)

The public parameters of the system are params = {f, g, q, b, H_1, H_2}.

(2) Partial private key generation
Let the user's identity information be ID_i, where ID_i ∈ {0,1}*. KGC chooses r_{li} ∈ Z_g^*; user i randomly selects z_i ∈ Z_g^*, generates the public key component γ_i for himself and sends it to KGC, which generates a temporary partial private key DK_i for user i and sends it to user i over a public channel:

U_i = H_1(ID_i, \rho_{li}, \gamma_i)   (3)

DK_i = (r_{li} + aU_i + \gamma_i a) \bmod g   (4)

The DK_i generated by KGC can be transmitted to user i over the public channel, because nobody who knows only γ_i ≡ q^{z_i} mod f and b ≡ q^{a} mod f can compute q^{a z_i}; this is a CDH problem [10].


(3) Private key generation. The private key of user i is:

R_i = (D_i + Z_i) \bmod g   (5)

(4) Public key generation. User i generates the public key (ρ_{li}, γ_i) and discloses it:

\rho_{li} = q^{r_{li}} \bmod f,\qquad \gamma_i = q^{z_i} \bmod f   (6)
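As a rough numeric illustration of the key generation in Eqs. (2)–(6), the following Python sketch uses deliberately tiny toy parameters (f = 23, g = 11, q = 2) and a simplified partial private key DK_i = (r_li + a·U_i) mod g, i.e., it omits the γ_i term of Eq. (4); it then checks the relation q^{DK_i} ≡ ρ_li · b^{U_i} (mod f) in the spirit of the later correctness proof. This is a sketch under those stated assumptions, not the paper's scheme itself.

```python
import hashlib
import random

# Toy parameters (illustrative only): f is prime, g divides f - 1, q has order g mod f.
f, g, q = 23, 11, 2          # 2**11 % 23 == 1, so q = 2 generates the order-11 subgroup

def H1(*parts):
    """Hash arbitrary parts to an exponent in Z_g (stand-in for H1 of Eq. (1))."""
    data = "|".join(str(p) for p in parts).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % g

# KGC master key a and public value b = q^a mod f (Eq. (2)).
a = random.randrange(1, g)
b = pow(q, a, f)

# User i: KGC randomness r_li and user secret z_i, with public key pieces as in Eq. (6).
ID_i = "user-i"
r_li = random.randrange(1, g)
z_i = random.randrange(1, g)
rho_li = pow(q, r_li, f)
gamma_i = pow(q, z_i, f)

# Simplified partial private key (Eqs. (3)-(4) without the gamma term) and full key (Eq. (5)).
U_i = H1(ID_i, rho_li, gamma_i)
DK_i = (r_li + a * U_i) % g
R_i = (DK_i + z_i) % g

# Validity check: q^DK_i == rho_li * b^U_i (mod f), since q has order g modulo f.
assert pow(q, DK_i, f) == (rho_li * pow(b, U_i, f)) % f
print("partial private key verified; full private key R_i =", R_i)
```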

In the certificateless signature protocol, the single key generation center is responsible for extracting the signer's partial private key and distributing it to multiple signing servers with the help of secret sharing. The schematic diagram of the single key generation center is shown in Fig. 3.

Fig. 3. Schematic diagram of single key generation center

(5) Generation of the proxy key
The original signer X establishes authorization information n_Y describing the identities of the original signer X and the proxy Y [11]. X randomly selects K_X ∈ Z_g^*, then generates an authorization signature (n_Y, R_v, r_X) and sends it to Y. The proxy signer Y first computes Eqs. (7) and (8) and then verifies whether Eq. (9) holds:

U_X = H_1(ID_X, \rho_{lX}, \gamma_X)   (7)

e = H_2(n_Y, s_X)   (8)

q^{R_v} \equiv s_X\, \rho_{lX}^{\,e}\, \gamma_X^{\,e}\, b^{\,eU_X} \bmod f   (9)

Anyone can use X's public key (ρ_{lX}, γ_X), Y's public key (ρ_{lY}, γ_Y), and the proxy information R_v to verify the validity of the proxy public key [12]. The verifier first computes U_X = H_1(ID_X, ρ_{lX}, γ_X) and U_Y = H_1(ID_Y, ρ_{lY}, γ_Y), and then verifies whether Eq. (10) holds to ensure the validity of the proxy public key:

b_f = \left(\rho_{lX}\, b^{U_X}\, \gamma_X\, \rho_{lY}\, b^{U_Y}\, \gamma_Y\right)^{e} s_Y \bmod f   (10)

(6) Generation of the proxy blind signature
The proxy signer Y randomly selects K_Y ∈ Z_g^* and sends the result to the message owner T. T chooses α, β ∈ Z_g^*, blinds the value, and sends it back to Y:

\bar{w} \equiv \alpha^{-1} w \bmod g   (11)

After the proxy signer Y receives the value sent by T, it computes S using Eq. (12) and sends it to T:

R = (\alpha S + \beta) \bmod g   (12)

(7) Signature verification stage: the signature verifier first computes Eq. (13) and then verifies whether Eq. (14) holds:

\dot{S} \equiv q^{R}\, b_f^{-w} \bmod f   (13)

w = H_2(n, \dot{S})   (14)

If the verification equation holds, return T; otherwise, return F.

3 Security Performance Analysis of the Certificateless BPS Algorithm Based on PSS SM

3.1 Correctness Analysis

Proof of the partial private key validity verification formula (5):

q^{D_i} \equiv q^{R_{Li}+aU_i} \bmod f \equiv \rho_{Li}\, b^{U_i} \bmod f   (15)

Proof of the authorized signature verification formula (9):

q^{R_v} \bmod f \equiv q^{eR_X+K_X} \bmod f \equiv q^{eR_X} q^{K_X} \bmod f \equiv s_X\, q^{e(D_X+z_X)} \bmod f \equiv s_X\, q^{eD_X} q^{ez_X} \bmod f \equiv s_X\, \rho_{LX}^{\,e}\, b^{\,eU_X}\, \gamma_X^{\,e} \bmod f   (16)

Proof of the proxy public key validity verification formula (10):

R_f \equiv \left(e(D_X + z_X + D_Y + z_Y) + K_X\right) \bmod f   (17)

b_f \equiv \left(\rho_{LX}\, b^{U_X}\, \gamma_X\, \rho_{LY}\, b^{U_Y}\, \gamma_Y\right)^{e} s_X \bmod f   (18)

Proof of the signature verification formula (14): it suffices to verify that S' = S holds, since

S' \equiv s_Y^{\alpha}\, q^{\beta} \bmod f \equiv S   (19)


3.2 Unforgeability

(1) Unforgeability of the proxy signature
Suppose a malicious KGC attempts to impersonate the original signer X to generate an authorization signature. Because the authorization signature R_v ≡ (eR_X + K_X) mod g contains X's private key R_X and the value K_X randomly selected by X, KGC cannot obtain K_X; moreover, even though KGC knows X's partial private key D_X, it cannot recover X's private key without X's secret value Z_X, and Z_X cannot be obtained unless the DLP problem can be solved. Therefore a malicious KGC cannot forge the authorization signature. An ordinary attacker has even less information than KGC: it knows neither X's partial private key D_X nor X's secret value Z_X, so it cannot forge an authorization signature unless it can break the underlying DLP problem.

(2) Identifiability
Both the signing algorithm and the verification process involve the public keys of the original signer X and the proxy signer Y, and the authorization information established by X contains the identity information of both parties. Therefore the identities of X and Y can be effectively distinguished, which meets the authentication requirement.

(3) Abuse resistance
The authorization information n_Y established by X specifies the scope and duration of the delegation, and n_Y is present throughout the signing process. Whether the proxy signer Y has abused the signing right can be determined simply by checking whether Y's signature exceeds the scope specified in the authorization certificate.

(4) Blindness
Before the signature is generated, the message owner T hashes the message n to be signed and blinds it with random values α and β known only to T, using Eqs. (12), (13), and (14), and then sends the blinded information to Y, so Y does not learn the content of the message. Even if Y stores the intermediate values used during signing, it cannot recover α and β from Eq. (12); furthermore, unless the DLP problem can be solved, Y cannot obtain α from the blinding relation. Therefore Y can never learn anything related to the signed message, i.e., the scheme is blind.


4 Efficiency Analysis of Certificateless BPS Algorithm

Table 1. Calculation amount comparison

Scheme      | Authorization phase | Authorization verification | Signature phase | Signature verification | Total
Option 1    | 1H + 2S + 1A        | 3P + 1H                    | 2S + 1A         | 4P + 1E + 1H + 1S      | 7P + 1E + 2H + 5S + 2A
Option 2    | 1H + 1S + 1A        | 3P                         | 1H + 3S + 3A    | 4P + 1E + 3H + 1S      | 7P + 1E + 5H + 4S + 3A
Option 3    | 2H + 1S             | 2P + 2H                    | 1S + 2A         | 3P + 2H + 1S           | 5P + 6H + 4S + 2A
This scheme | 1Eq + 1H + 1M       | 4Eq + 2H + 3M              | 2M              | 2Eq + 1M + 1H          | 6Eq + 4H + 7M

To measure the efficiency of a scheme in terms of computation, the operations with high computational cost are the main concern. For certificateless proxy blind signature methods, we focus on the computation in the authorization signing and authorization verification stages, as well as in the proxy blind signing and signature verification stages. For simplicity, a bilinear pairing operation, an exponentiation, and a hash operation in the cyclic group G2 are counted as P, E, and H respectively; a scalar multiplication and a point addition in the cyclic group G1 are counted as S and A respectively; and a multiplication and an exponentiation in Zq* are counted as M and Eq respectively. The computation of this method and of other certificateless proxy blind signature methods is shown in Table 1. The certificateless proxy blind scheme in this paper is built on discrete-logarithm operations rather than bilinear pairings, in the authorization and authorization verification stages as well as in the proxy blind signing and signature verification stages. Because bilinear pairing operations are far more expensive than operations in the integer group Zq*, and all calculations in this scheme are carried out in Zg* while the other certificateless proxy blind signature schemes (Options 1–3) all use bilinear pairings, Table 1 shows that the method given in this paper has an obvious efficiency advantage and meets the high-efficiency requirement.

5 Conclusions

This paper discusses and analyzes the certificateless BPS algorithm based on the PSS SM. Although the scheme constructed here is highly efficient while meeting the security requirements of a proxy blind signature scheme, research on certificateless signature schemes based on the PSS SM can be carried forward in the following directions. First, this paper constructs a relatively comprehensive security model to prove the security of the new scheme under the SM and make its security more convincing; constructing a certificateless signature scheme without bilinear pairings based on the PSS SM is therefore the


focus of further research. Second, by taking full advantage of the efficiency of the certificateless signature scheme without bilinear pairings and combining it with the various special signatures widely used in practice, the ultimate goal is to construct digital signature schemes that can be applied in real applications. Third, various uncertain attacks still need to be addressed; therefore, further improving the certificateless signature scheme without bilinear pairings under the PSS SM so that it can be applied in practice is one of the next tasks to be studied.

Acknowledgments. This work is supported by the Key Projects of Natural Science Research in Colleges and Universities of Anhui Province (No. KJ2020A1107, No. KJ2021A1523).

References

1. Tiliwalidi, K., Zhang, J.-Z., Xie, S.-C.: A multi-bank e-payment protocol based on quantum proxy blind signature. Int. J. Theor. Phys. 58(10), 3510–3520 (2019). https://doi.org/10.1007/s10773-019-04217-1
2. He, L., Ma, J., Mo, R., et al.: Designated verifier proxy blind signature scheme for unmanned aerial vehicle network based on mobile edge computing. Secur. Commun. Netw. 2019(3–4), 1–12 (2019)
3. Rawal, S., Padhye, S.: Cryptanalysis of ID based proxy-blind signature scheme over lattice - ScienceDirect. ICT Express 6(1), 20–22 (2020)
4. Rao, R., Gayathri, N.B., Reddy, P.V.: Efficient and secure certificateless directed proxy signature scheme without pairings. J. Comput. Math. Sci. 10(5), 1091–1118 (2019)
5. Kumar, M.: Cloning attack on a proxy blind signature scheme over braid groups. Int. J. Comput. Sci. Eng. 7(6), 152–155 (2019)
6. Loganathan, E., Dinakaran, K., Valarmathi, P., Shanmugam, G., Suryadevara, N.: PSSRDB-Model: protein 3D structure prediction server based on the secondary structure informations. Mater. Today: Proc. 16, 1596–1602 (2019)
7. Ming, Y., Wang, Y.: Certificateless proxy signature scheme in the standard model. Fund. Inform. 160(4), 409–445 (2018)
8. Heo, J., Jeong, H., Lee, E.: Proxy signature, ID-based partially blind signature and proxy partially blind signature using bilinear-pairing. KIISE Trans. Comput. Pract. 26(12), 556–560 (2020)
9. Isaiyarasi, T.: Authenticated tripartite key agreement protocol using digital signature algorithm. JP J. Algebra Number Theor. Appl. 46(1), 1–19 (2020)
10. Somayehee, F., Nikkhah, A.A., Roshanian, J., Salahshoor, S.: Blind star identification algorithm. IEEE Trans. Aerosp. Electron. Syst. 56(1), 547–557 (2019)
11. Zhou, B.M., Lin, L.D., et al.: Security analysis of particular quantum proxy blind signature against the forgery attack. Int. J. Theor. Phys. 59(2), 465–473 (2020)
12. Elliott, R.T., Arabshahi, P., Kirschen, D.S.: A generalized PSS architecture for balancing transient and small-signal response. IEEE Trans. Power Syst. 35(2), 1446–1456 (2019)

Mine Emergency Rescue Simulation Decision System on Account of Computer Technology

Hongtao Ma1,2(B)

1 China Coal Technology and Engineering Group Shenyang Research Institute, Fushun 113000, Liaoning, China
[email protected]
2 State Key Laboratory of Coal Mine Safety Technology, Fushun 113122, Liaoning, China

Abstract. This paper presents a mine flood emergency decision aid system that provides automatic 3D modeling of extracted survey data and 3D simulation analysis of any mine flood and rescue drilling operation. The analysis method considers the influence of the mine's location and atmospheric pressure environment on water flow, and the simulation results are consistent with the results of physical experiments. This intuitive application can show the movement of water and the flooded areas of the mine and, combined with the positioning of underground workings and personnel, identify areas that are not yet flooded, analyze and predict the survival of trapped underground personnel, and identify hazards for rescue drilling, thus providing basic information to support emergency response decisions. This paper studies the design and implementation of a mine emergency rescue simulation decision-making system based on computer technology and expounds the relevant content and theory of mine emergency rescue simulation decision-making. The data tests show that the system performs well in emergency rescue.

Keywords: Computer Technology · Mine Emergency Rescue · Simulation Decision · System Design and Implementation

1 Introduction

During the actual operation of mine emergency command systems in China, many problems such as information asymmetry and lack of information need to be dealt with. In this paper, a mine emergency rescue simulation decision-making system is built with computer technology to improve how the system responds to the various matters that arise during a mine emergency [1]. Using computer technology to complete the mine emergency rescue simulation decision-making system is beneficial to its efficient operation [2].

Many scholars at home and abroad have studied computer technology. In foreign studies, Alanazy M. M. investigated pre-service special education teachers' experience of using computers and their perceived knowledge, as well as


their preparation for incorporating computer technology into teaching, the barriers to incorporating computer technology into teaching practice, and the level of confidence teachers gain from using technology applications in teaching; fifty-eight female pre-service special education teachers from an education institute in Saudi Arabia responded to a needs assessment survey, and most participants had more than eight years of experience using computers [3]. To address the rising incidence of visual impairment, especially in rural areas, Agarwala R. developed a low-cost infrared refractometer based on a minicomputer using off-the-shelf hardware components; clinical verification shows that the infrared refractor works in the range of +4.0 ~ 6.0 D at 50 cm, and astigmatism measurement shows a 0.3 D absolute error and a high correlation in axis evaluation [4]. Sultanova D. studied the application of computer technology in English teaching: computer technology provides a special communication environment that includes people of different countries, ages, and occupations regardless of their position, and these studies are of great significance to English teaching [5].

The system in this paper overcomes the technical bottleneck of this research: the simulation analysis results are consistent with the actual physical experiment results, the influence of the mine's geography and atmospheric pressure environment on water flow is considered, and 3D simulation analysis of rescue drilling is carried out. The design and implementation of the mine emergency rescue simulation decision-making system based on computer technology can effectively improve the effect of mine emergency rescue simulation decision-making.

2 Design and Exploration of Mine Emergency Rescue Simulation Decision System on Account of Computer Technology

2.1 Computer Technology

Computer technology refers to the technical means and methods, or the equipment, software, and application technology, used in the computer field. It is strongly integrated with electrical systems, theoretical physics, engineering science, information technology, and geometry, and is developing rapidly [6, 7]. Computer technology is a complete system whose main parts include architecture, overall application, software maintenance, and theoretical and applied technology.
First, system structure technology. Advances in electronic devices, microprogramming and solid-state engineering, virtual memory technology, operating systems, and programming languages all greatly influence computer system structure technology [8, 9].
Second, system management technology. System management automation means realizing automated management of the computer system through the operating system. The main purpose of an operating system is to use computer software and resources as efficiently as possible, so as to increase machine capacity, remove time constraints, facilitate operation, improve system reliability, and reduce the cost of solving computational problems [10, 11].
Third, system maintenance technology.


The computer system can process information intelligently and analyze network virus data. Much of this software can perform high-precision virus inspection and use advanced techniques to improve the efficiency of virus searching; the system contains many functions, among which data can be shared [12, 13].
Several trends in the future of computing are as follows. 1. Giant-scale systems: computers are developing toward huge data volumes, huge storage, and comprehensive functional capability [14]. 2. Multimedia: traditional information processed by computers consists largely of text and data, while in many applications users can handle file information in various forms such as pictures, video, and other materials [15]. 3. Networking: the Internet connects multiple service nodes through cables to enable high-speed transmission of network data [16]. 4. Artificial intelligence: computer artificial intelligence conforms to the law of world economic development; in the future, many economic affairs will require intelligent technology, so applying intelligent technology to life and work is very necessary. As shown in Fig. 1, there are two modules, module 1 and module N, which communicate data through interfaces to realize data sharing; the system allocates all data uniformly and uses an algorithmic mechanism to realize the intelligence of the target object [17]. 5. Miniaturization: the transactions processed by the computer will become more refined and at the same time more miniaturized, for example with smaller volume, which improves the efficiency and speed of computer processing; software is also becoming more lightweight, with many programs redesigned to be smaller but more powerful [18].

2.2 Design and Implementation of Mine Emergency Rescue Simulation Decision System on Account of Computer Technology

The structure of the mine emergency decision comprehensive command information base is composed of the following elements.
(i)

Scene subsystem: includes information collection stations, communication terminals, and wireless/cable connections; it is mainly responsible for collecting information on mine conditions, working personnel, and the various system components, and for exchanging it with the processing systems [19]. (ii) Emergency control subsystem: collects sound, light, personnel, and other information and data at the underground command base; it uniformly processes the received mine emergency information and data, and the analysts then handle the various matters according to the data [20]. (iii) Land-based command system (daytime and rescue field command): covers information acquisition, development of communication tools, and network innovation and processing, whose data need to be simulated and analyzed [21].


Fig. 1. Computer artificial intelligence performance

(iv) Remote command centre: the investigation base is divided into the main ground command, the accident rescue sub-station, the investigation leader on the surface, the small mine site, and the linkage of the plot site, covering data exchange and step-by-step assistance [22]. (v) Terrestrial/private network: communication systems that embody "combat integration" usually combine traditional communication systems with improvised emergency communication systems to enable emergency information and data transmission within the site.

3 Design and Implementation Effect of Mine Emergency Rescue Simulation Decision System on Account of Computer Technology

This paper uses computer technology to realize the mine emergency rescue simulation decision system; the main technique used is a neural network. The optimal parameters of a neural network must be solved iteratively, and the commonly used optimization algorithm is gradient descent. Given an objective function, the gradient with respect to the parameters is the direction in which the objective value rises fastest, so the parameters that minimize the objective can be obtained by searching in the opposite direction of the gradient. The gradient descent update used in the mine emergency rescue simulation decision system is:

\theta = \theta - \mu \cdot \nabla_{\theta} J(\theta)   (1)

where ∇_θ J(θ) is the gradient of the objective function and μ is the learning rate, i.e., the step length moved in each iteration.
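As a small illustration of the update rule in Eq. (1), the following Python sketch runs plain gradient descent on a simple quadratic objective; the objective function, learning rate, and iteration count are illustrative assumptions rather than the system's actual training setup.

```python
import numpy as np

def J(theta):
    """Toy objective: a convex quadratic with minimum at (3, -1)."""
    return (theta[0] - 3.0) ** 2 + (theta[1] + 1.0) ** 2

def grad_J(theta):
    """Analytic gradient of the toy objective."""
    return np.array([2.0 * (theta[0] - 3.0), 2.0 * (theta[1] + 1.0)])

theta = np.zeros(2)      # initial parameters
mu = 0.1                 # learning rate (step length), the mu of Eq. (1)

for _ in range(100):
    theta = theta - mu * grad_J(theta)   # Eq. (1): move against the gradient

print("theta ->", theta, " J(theta) ->", J(theta))
```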


Assuming that the current moment is t, the output h_t of the hidden layer at moment t is calculated from the input data and the output of the hidden layer at the previous moment, and the new hidden layer output is in turn passed on to the next moment, forming a recurrent structure. The calculation formula of the recurrent neural network is:

\begin{cases} h_t = \sigma(W_x x_t + W_h h_{t-1} + b_h) \\ O_{t+1} = W_0 h_t + b_0 \\ y_t = \mathrm{softmax}(O_t) \end{cases}   (2)

W_x is the weight matrix of the input data, W_h is the weight matrix of the hidden layer, and W_0 is the weight matrix of the output layer; b_h and b_0 are bias vectors, and O_t is the output. Average Precision (AP) and mean Average Precision (mAP) are used in the mine emergency rescue simulation decision system of this paper. In target detection, the average precision AP is the area under the P-R curve drawn from Precision and Recall, which are calculated as:

\mathrm{Precision} = \frac{TP}{TP + FP}   (3)

\mathrm{Recall} = \frac{TP}{TP + FN}   (4)

where TP (true positive) is the number of positive samples predicted correctly, FP (false positive) is the number of negative samples predicted as positive, and FN (false negative) is the number of positive samples predicted as negative. mAP represents the overall accuracy of the model, with N(class) the total number of categories:

\mathrm{mAP} = \frac{\sum AP}{N(\mathrm{class})}   (5)

Mine Emergency Rescue Simulation Decision System

305

the working face to automatically generate a THREE-DIMENSIONAL working face, automatically calculate the reserves, inclination Angle, length of the working face and strike length of the working face. Input the working face advance distance, can automatically move the mining line, automatically generate goaf. The importance of its function lies in that the position of working face directly affects the calculation result of water inflow in permeable mine. Starting from the national conditions and the current situation of emergency rescue in coal mines, this paper aims to establish a complete mine emergency rescue system and build a mine emergency rescue system with advanced technology and equipment to achieve a unified command, coordination and mobilization of emergency rescue system. According to the mining engineering drawing, the 3d mining stereogram contains a large number of mined-out areas and abandoned tunnels. These abandoned works have been isolated by airtight walls to achieve wind and water isolation. Similarly, in the model of permeable analysis, a drawing for setting the closed wall is provided to represent the closed wall in the actual project, so as to prevent the water flow from simulation analysis from entering the abandoned roadway.The specific structure of the mine emergency rescue simulation decision-making system is shown in Fig. 2:

Performance module

Client display

Network service

Browser

Access services

Application program

Networking services

Service module Geographic information service

Data module

Data hierarchy

Content management

Cluster management

Database

Fig. 2. Structure diagram of mine emergency rescue simulation decision-making system

306

H. Ma

It can be seen from Fig. 2 that the system display module mainly implements hierarchical authorization for different types of personnel. Its main function is to receive and return data, complete HTML pages generated by JavaScript, and provide users with a friendly interactive interface to receive relevant information entered by users in the browser, handle accidents, and return corresponding data. The service module of the system provides basic services for applications. This layer is the core of the whole system, including Web server and GIS application server. It includes basic application modules and software platforms such as safety hazards, emergency plans, statistical analysis, etc. The third part of the system is data module, which includes attribute database and data database. It is based on MySQL database, and carries out data query, analysis and processing through SQL statements. It mainly constructs spatial information database through mine rescue organization data, rescue equipment and materials data, emergency plan data and hypergraph software. The main purpose of the system is to make the emergency rescue work of the mine information and process, and provide reference and guidance for accident rescue. The system includes GIS graphic analysis, plan management, hidden danger management, emergency drill training, rescue information management, system management and other modules.

4 Investigation and Analysis of Mine Emergency Rescue Simulation Decision System on Account of Computer Technology

For the feature extraction part, this paper uses SP-Res2Net (an algorithm model used mainly for feature extraction); the feature extraction module has a specific structure and parameter settings. The scale in SP-Res2Net is set to 4, the length and width of the output feature map are 1/8 of the original image, and the number of channels is 512. The encoder uses a row encoder with a bidirectional LSTM recurrent neural network; the hidden layer size is 256 and the dropout rate is 0.2. In the decoder, a double-layer LSTM is used: the hidden size is 512, the length of the word embedding vector is 80, and the dimension of the context vector calculated by the attention model is 512. Computer technology is adopted to test the mine emergency rescue simulation decision system, and the results are shown in Table 1. BLEU, Exact Match (recognition accuracy) and MER Score are the evaluation criteria; Computer Technology (Ours), Double Attention and DenseNet-CSAttn are the tested models.

Table 1. Mine emergency rescue simulation decision system test data

Model                        BLEU    Exact Match    MER Score
Computer technology (Ours)   87.88   77.89          86.12
Double Attention             82.65   76.45          81.23
DenseNet-CSAttn              80.12   75.12          80.32
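To make the decoder configuration described above concrete, the sketch below builds a two-layer LSTM decoder with an 80-dimensional word embedding, a 512-dimensional hidden state and a 512-dimensional attention context, assuming PyTorch. The vocabulary size and the way the context vector is fused with the token embedding are illustrative choices, not details taken from the paper.

```python
import torch
import torch.nn as nn


class FormulaDecoder(nn.Module):
    """Two-layer LSTM decoder matching the stated sizes (embed=80, hidden=512, context=512)."""

    def __init__(self, vocab_size: int = 500, embed_dim: int = 80,
                 hidden_dim: int = 512, context_dim: int = 512, dropout: float = 0.2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # the attention context vector is concatenated to the token embedding at every step
        self.lstm = nn.LSTM(embed_dim + context_dim, hidden_dim,
                            num_layers=2, dropout=dropout, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens, context):
        # tokens: (batch, steps) token ids; context: (batch, steps, context_dim)
        x = torch.cat([self.embed(tokens), context], dim=-1)
        h, _ = self.lstm(x)
        return self.out(h)  # (batch, steps, vocab_size) logits


# quick shape check with random inputs
decoder = FormulaDecoder()
logits = decoder(torch.randint(0, 500, (2, 7)), torch.randn(2, 7, 512))
print(logits.shape)  # torch.Size([2, 7, 500])
```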


On the IM2LATEX-100K test set, the proposed model is the best on every index: relative to DenseNet-CSAttn, the BLEU score is improved by 9.69%, Exact Match by 3.69%, and the evaluation score of the mathematical formula recognition model (MER Score) by 7.22% (Fig. 3).
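These percentages can be reproduced directly from Table 1 as relative gains over DenseNet-CSAttn; the short check below is only a worked illustration of that arithmetic.

```python
# relative improvement of the proposed model over DenseNet-CSAttn (values from Table 1)
ours     = {"BLEU": 87.88, "Exact Match": 77.89, "MER Score": 86.12}
densenet = {"BLEU": 80.12, "Exact Match": 75.12, "MER Score": 80.32}

for metric in ours:
    gain = (ours[metric] - densenet[metric]) / densenet[metric] * 100
    print(f"{metric}: +{gain:.2f}%")  # BLEU: +9.69%, Exact Match: +3.69%, MER Score: +7.22%
```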

Fig. 3. Diagram of mine emergency rescue simulation decision system

In Fig. 3, the red diamonds represent Computer Technology (Ours), green represents Double Attention, and blue represents DenseNet-CSAttn. The model adopted in this paper performs best. The data analysis shows that the mine emergency rescue simulation decision-making system designed and implemented on account of computer technology performs well in mine emergency rescue simulation decision-making.

5 Conclusions

The system covers the development of a comprehensive emergency decision information platform, servers, software design and other data processing components, and meets the actual functional requirements of the mine with complete functions and a short network life. The system is serialized, systematic, organized and practical, which improves the quality and speed of information processing and accelerates the basic construction of mine informatization. The design and implementation of the mine emergency rescue simulation decision-making system on account of computer technology achieves a good effect in mine emergency rescue simulation decision-making.


Smart Phone User Experience for the Elderly Based on Public Platform Dan Ji(B) Bohai Shipbuilding Vocational College, Jinzhou 125000, Liaoning, China [email protected]

Abstract. The elderly are a vulnerable group in the age of intelligence: the wide application and complex operation of smartphones and their functions make it difficult for them to adapt to social trends and enjoy intelligent services. The purpose of this paper is to study the optimization of smartphone user experience design based on the public platform. The paper discusses the arrival of the aging society and, on this basis, analyzes the current state of mobile phone interface design for elderly users, examines the WeChat public platform and the cognitive function of the elderly, and reflects on the existing problems. Combining visual and interaction design theory, a reasonable visual and interactive design scheme of the mobile phone interface for elderly users is obtained. The WeChat public platform is taken as the carrier for the optimized design, which is carried out according to the design goals and then tested to obtain feedback. User satisfaction exceeds 4.3, and the platform design can be optimized further to give users a better experience. Keywords: Public Platform · Smart Phone · Elderly Users · Experience Optimization

1 Introduction

The contradiction between the elderly and digital intelligence has developed into a social problem. The acceptance and use of intelligent products by the elderly group is an important factor affecting the effective and comprehensive implementation of policies [1, 2]. At present, the maintenance of life security for the elderly relies mainly on the government, and society and the market have not yet fully played their roles [3]. Therefore, against the combined backgrounds of normalized epidemic prevention and control, digital intelligence and population aging, it is worth considering how to give full play to the roles of government, society and the market for the elderly, and how to use smartphones as an important medium to help the elderly cross the digital divide, both physiologically and psychologically, so as to improve their quality of life and promote the harmonious and stable development of society [4, 5]. Today, older people are exposed to information technology in order to stay connected with younger generations [6]. Among various technologies, social networking sites (SNS)


are rarely used by most older adults. To bridge the digital divide, Kostrzewska M sought to determine the role of social media in the over-55 age group, digging deep into the minority of older SNS users; the research method was a systematic and comparative analysis of concepts and conclusions published in the scientific literature, inferring the characteristics of older adults and their needs as social media users [7]. Wilaiwan W investigated the use of communication devices and apps in the elderly population and identified factors associated with the health impacts of mobile communication devices among the elderly in Thailand. A descriptive cross-sectional study was conducted in four major regions of Thailand with 448 older adults who regularly use smartphones or tablets; participants had a mean age of 65.11 ± 5.26 years, and the average time spent using the device was 2.70 h/day. The reported positive health effects of smartphone or tablet use were increased self-worth and confidence (90.6%) and increased intimacy with others (82.6%) [8]. As the main medium of current social development, new media should be used effectively to help cope with population aging, enhance the social participation of the elderly, and promote their better integration into society [9, 10]. The innovations of this paper mainly include the following points: a detailed investigation of how the elderly currently use smartphones, an analysis of the reasons and manifestations that affect the user experience of middle-aged and elderly smartphone users, and the use of scientific theories to prepare the follow-up experiments. The gestures, colors and fonts of the WeChat public platform are optimized for the user experience of the elderly, and on the basis of the subjective design tasks, an objective user satisfaction evaluation for the elderly is summarized.

2 Research on the Optimization Design of Smart Phone User Experience for the Elderly Based on the Public Platform

2.1 WeChat Public Platform

(1) Service methods based on big data. The database formed by the WeChat public platform grows very powerful over time. The managers of public accounts can publish information according to the nature of their accounts and their main service objects and keep accumulating it, thus forming a huge database whose content grows as the account publishes more information [11].
(2) Service methods based on geographic location. The public platform can also use location information to serve users in need; for example, some service accounts and enterprise accounts can add their own geographic location information, so that when users need services or help nearby they can find a suitable public account according to their location. Mini Programs are likewise recommended to users based on their location, allowing users to quickly discover available services [12, 13].


(3) Service method based on information push. Information push is one of the main functions of most official accounts: an account can actively push information to users, or push relevant information according to the keywords users query [14].

2.2 Cognitive Function of the Elderly

The human brain accepts information from the outside world, processes it, converts it into internal psychological activity and then governs human behavior. To a certain extent, cognition is a "mirror" that reflects human memory, attention and information-processing capability, as well as intelligence, learning ability and understanding. Cognitive impairment refers to impaired cognitive function that affects an individual's ability to take care of themselves; it is a psychological disorder manifested as cognitive deficit or abnormality [17, 18]. Cognitive aging is one form of cognitive impairment, specifically age-related cognitive decline, which is a natural part of human growth and aging. Cognitive aging has a significant negative impact on the elderly's ability to live daily life independently, and it is also a main cause of Alzheimer's disease. As the elderly grow older, cognitive aging becomes more and more serious and their information processing speed slows down. After the age of 65, most elderly people show signs of brain decline, and the rate of memory loss accelerates markedly.

2.3 User Experience of the Elderly Using Smartphones

There are significant usability barriers in the use of smartphones by the elderly, that is, difficulties and problems caused by the mismatch between individual characteristics and smartphone characteristics among the middle-aged and the elderly. They mainly come from two aspects: cognitive impairments arising from individual characteristics, such as deteriorating vision and memory loss, and tool impairments arising from smartphone features, such as icons that cannot be recognized and small fonts that hinder reading. Acceptance barriers also affect smartphone use among the middle-aged and elderly population. An acceptance barrier refers to non-use or refusal to use, manifested in the fact that middle-aged and elderly people do not accept certain functions or applications of smartphones; it is the main reason the secondary information gap widens, because the functions and services provided by smartphones cannot be used effectively and their utilization is limited. The factors that affect smartphone use among the middle-aged and elderly therefore come mainly from usability barriers and acceptance barriers: usability barriers exist before, during and after smartphone use, while acceptance barriers exist before and after use. The analysis based on the user experience research framework is shown in Fig. 1.


Fig. 1. Reasons and manifestations that affect the user experience of middle-aged and elderly people using smartphones

3 Investigation and Research on the Optimization Design of Smart Phone User Experience for the Elderly Based on the Public Platform

3.1 Test Equipment

The device used in the task test is a touch-screen mobile phone that runs smoothly: main screen size 5.7 inches, resolution 2560 × 1440 pixels, Android 7.0. WeChat version 6.6.1 is installed on the phone, and several WeChat public accounts are followed for the test. Screen-recording software is also installed so that the task process can be recorded and reviewed afterwards to make up for anything missed during testing. In addition, the device's network connection must be in good condition, the test environment must be quiet and free from external disturbance, and the comfort of the tester during the test should also be considered.

3.2 Test Method

Considering the complexity of the WeChat public platform's functions, the test tasks were formulated based on the results of user research and on the functions users pay most attention to, selecting as far as possible the points users expect and are interested in. Three tasks were finally determined, briefly described below. Task 1: collect (favorite) content in an official account. Task 2: share the favorited content with friends and then un-favorite it. Task 3: find and unfollow an official account.


The t-test formulas used in this paper are as follows:

t = \frac{\bar{X} - \mu}{\sigma_X / \sqrt{n}}  (1)

t = \frac{\bar{X}_1 - \bar{X}_2}{\sqrt{\dfrac{(n_1 - 1)S_1^2 + (n_2 - 1)S_2^2}{n_1 + n_2 - 2}\left(\dfrac{1}{n_1} + \dfrac{1}{n_2}\right)}}  (2)

where S is the sample standard deviation and n is the number of samples. The formula for calculating S is:

S = \sqrt{\frac{1}{n - 1}\sum_{i=1}^{n}(X_i - \bar{X})^2}  (3)

From this, the formula for the population standard deviation follows as:

S^{*} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}(X_i - \bar{X})^2}  (4)
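As a worked illustration of Eq. (2), the sketch below computes the two-sample pooled t statistic for two small groups of satisfaction scores. The numbers are made up for the example and scipy is used only to cross-check the manual formula.

```python
import numpy as np
from scipy import stats

# hypothetical satisfaction scores for two user groups (not data from the study)
group1 = np.array([4.5, 4.7, 4.4, 4.8, 4.6])
group2 = np.array([4.2, 4.4, 4.1, 4.3, 4.5])

n1, n2 = len(group1), len(group2)
s1, s2 = group1.std(ddof=1), group2.std(ddof=1)  # sample standard deviations, Eq. (3)
pooled = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)
t_manual = (group1.mean() - group2.mean()) / np.sqrt(pooled * (1 / n1 + 1 / n2))

t_scipy, p_value = stats.ttest_ind(group1, group2, equal_var=True)  # same pooled t-test
print(round(t_manual, 3), round(t_scipy, 3), round(p_value, 3))
```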

4 Analysis and Research on the Optimization Design of Smart Phone User Experience for the Elderly Based on the Public Platform

4.1 Optimal Design of Experience for the Elderly on the Public Platform

Gesture design: the public platform mainly adopts the two gestures easiest for the elderly to grasp and remember, tap and swipe. Functions such as photo browsing also use more advanced gestures such as pinch-to-zoom, with corresponding interface feedback. A right-swipe gesture has been added to some interfaces to return quickly to the previous screen, while the function buttons in the upper right are retained, so the interface can be controlled in several ways and elderly users can switch screens comfortably with either one hand or two.

Color design: in this optimized design, the large solid-color button on the account follow page is abandoned and the follow button is drawn as an unfilled wireframe, making the page more concise and in line with the plain, simple style the elderly prefer. When clicked, the "Follow" button becomes "Following". The newly added "Receive Article Push" switch is enabled by default, and users can turn it off as needed. At the same time, a "Select Group" dialog with a white background pops up, and the background behind it is covered by a black (#000000) semi-transparent (20% opacity) mask, making the dialog more prominent.


Font design: the main body text of a mobile phone interface is generally 32 px. When designing for elderly users the font needs to be enlarged, but not arbitrarily: sizes are adjusted on the principle of respecting reading habits and preserving visual aesthetics, and different font sizes are chosen for different interfaces according to users' needs. The standard Android fonts are used: Source Han Sans for Chinese and Roboto for English, with Bold applied to information that needs to be highlighted.

4.2 User Satisfaction Evaluation Results

The satisfaction survey results for the optimized design are shown in Table 1. Satisfaction on every dimension exceeds 4.3, between "satisfied" and "very satisfied"; users are close to satisfied with the optimized WeChat public platform experience, and there is still room for improvement. The satisfaction of the three tasks, from high to low, is task 2, task 1 and then task 3; the ordering is shown in Fig. 2.

Table 1. Satisfaction survey

User segmentation                                              Number of samples   Task one   Task two   Task three
Novice senior users                                            10                  4.53       4.73       4.38
Older users with experience in using other public platforms    10                  4.88       4.92       4.56

After each task, users were followed up in time and gave subjective feedback by comparing the platform experience before and after optimization. Comparing the two sets of feedback forms shows that some of the changes to functions or gestures resolved experiences users had previously been dissatisfied with, while some designs also brought new confusion and discomfort. The points of confusion mentioned by users were collated and are presented graphically. Because different users responded inconsistently, only the problems raised by most users are recorded; problems mentioned by only a few users are set aside for the time being. Figure 3 shows the doubts raised by 100 users about the optimized platform (multiple choices were allowed).


Fig. 2. Collation results

As the user-confusion diagram in Fig. 3 shows, 86 users said that although gestures on the optimized platform became simpler and more suitable for the elderly, they were not very flexible; 91 users felt the platform was still too intelligent for the elderly, whose limited capacity to accept new technology kept them from adapting to its intelligent operation; another 87 users reported problems with the limited page composition; and the confusion mentioned by 66 other users is not addressed here. The analysis above therefore shows that users are on the whole highly satisfied with the platform, but some problems still need to be improved. The users' confusion leaves the platform reasonable room for progress, and follow-up research can start by addressing it.

Fig. 3. User confusion collated records

5 Conclusions

Compared with rural areas, intelligent services are applied more widely in cities, and the basic living needs of urban residents (clothing, food, housing and transportation) are inseparable from smartphones. This affects the quality of life of the elderly, especially those living alone. This paper therefore takes the urban elderly as its research object and optimizes the design of their experience on the public platform. Studying how the elderly use smartphones and analyzing the problems that affect that use helps to broaden the research perspective on smartphone use by the elderly, to optimize the design of public platforms, and to provide a theoretical basis for establishing and improving intelligent service policies for the urban elderly.

References 1. Rajagopala, L., Ford, M., Jasim, M., et al.: OP0009-pare successful patient education on covid-19 vaccine safety in a large rheumatology cohort using interactive mobile-phone video technology: context, results, and next steps. Ann. Rheum. Dis. 80(1), 5–6 (2021) 2. Lesmono, A.D., Bachtiar, R.W., Maryani, M., et al.: The instructional-based andro-web comics on work and energy topic for senior high school students. Jurnal Pendidikan IPA Indonesia 7(2), 147–153 (2018) 3. Stott, R.: Under shadows, Samsung shows off its foldable phone. Dealerscope 60(12), 16 (2018)


4. Mugarura, N.: The use of “mobile phones” in changing the banking regulatory landscape in Africa. Afr. J. Int. Comp. Law 27(2), 308–330 (2019) 5. Gowri, S.N., Kesavan, N.: A review on challenges and opportunities of mobile phone service providers in INDIA. Xi’an Jianzhu Keji Daxue Xuebao/J. Xi’an Univ. Archit. Technol. XII(III):4218–4224 (2020) 6. Latunde, A.T., Papazafeiropoulos, A., Kourtessis, P., Senior, J.M.: Co-existence of OFDM and FBMC for resilient photonic millimeter-wave 5G mobile fronthaul. Photon Netw. Commun. 37(3), 335–348 (2019). https://doi.org/10.1007/s11107-019-00845-z 7. Kostrzewska, M., Wrukowska, D.E.: Senior in social media. Zeszyty Naukowe Wy˙zszej Szkoły Humanitas Zarz˛adzanie 21(2), 245–260 (2020) 8. Wilaiwan, W., Rohitrattana, J., Taneepanichskul, N., et al.: Health effects of using mobile communication devices: a case study in senior citizens. Thailand. Environ. Asia 11(2), 80–90 (2018) 9. Marrapu, S., Satyanarayana, S., Arunkumar, V., et al.: Smart home based security system for door access control using smart phone. Int. J. Eng. Technol. 7(1), 249–251 (2018) 10. Patra, P.: Distribution of profit in a smart phone supply chain under green sensitive consumer demand. J. Clean. Prod. 192(AUG.10), 608–620 (2018) 11. Nath, A.: Comprehensive study on negative effects of mobile phone/smart phone on human health. Int. J. Innov. Res. Comput. Commun. Eng. 6(1), 575–581 (2018) 12. Harkin, D., Molnar, A.: Operating-system design and its implications for victims of family violence: the comparative threat of smart phone spyware for Android versus iPhone users. Violence Against Women 27(6–7), 851–875 (2021) 13. Ali, A.H., Hassan, M.P.H., Naseer, B.A.: Vision based gait by using smart phone technology incidence of first time stroke post analysis. Int. J. Sci. Eng. Res. 12(4, April 2021 Edition), 229–237 (2021) 14. Yana, B., Koch, M., Kalita, A., et al.: To Study the effects of deep neck flexor strengthening exercises and mckenzie neck exercises on smart phone users suffering from neck pain: a comparative study. Int J Pharm. Bio. Sci 11(1), 261–267 (2021) 15. Atta, H., Nazeer, M.T., Mahfooz, M.: Examining effects of smart phone exercise apps usage on weight reduction in young university students. Sir Syed J. Educ. Soc. Res. (SJESR) 4(1), 456–460 (2021) 16. Sucharitha, S.T., Rangasamy, P., Vaishikaa, R., et al.: Smart phone apps for smoking cessation - a qualitative study among healthcare providers in Chennai. J. Evid. Based Med. Healthcare 8(21), 1630–1635 (2021) 17. Abbas, K., Ali, W., Sadaf, A., et al.: Advantages of smart phone hearing aids over traditional hearing aids. Asian J. Med. Health 85(1), 53–56 (2021) 18. Namgung, K.S., Kim, H.Y.: The structural relationships among mother’s smart phone use perception, parenting behavior, young child’s smartphone overdependence, emotional intelligence and playfulness. Korea Open Assoc. Early Childhood Educ. 25(5), 221–250 (2020)

Design and Research of IOT Based Logistics Warehousing and Distribution Management System Suqin Guo1(B) , Yu Zhang2 , and Xiaoxia Zhao3 1 Fujian Yili Electric Power Technology Co., Ltd., Beijing 100055, China

[email protected]

2 Beijing Normal University, Beijing 100161, China 3 North China Electric Power University, Beijing 100000, China

Abstract. Distribution and warehousing are the core links of material management in logistics enterprises. This paper designs a logistics warehousing and distribution system based on Internet of Things technology that aims to solve the material warehousing and distribution problems of logistics enterprises. When materials are stored, RFID technology automatically obtains their detailed information, and EPC and ONS middleware are used for further processing to form a storage list, so that staff can query material information at any time. During material allocation, the required material parameters are obtained, and the system marks the transportation route, transport environmental conditions and position details of the material, together with instructions for placing the material on a shelf or in an area and the corresponding sound and light status signals. During transportation an online monitoring function is implemented. The system can also build a network-based logistics management platform, share warehouse, transport vehicle, staff and material information across the logistics industry, optimize the overall management efficiency of logistics work, and achieve the management objectives of intelligence, digitization and networking in practice [1]. Keywords: Internet of things · Warehouse management · Distribution system · RFID technology · Temperature and humidity

1 Introduction

China's traditional logistics and transportation industry is currently in transition from a hardware-upgrade mode to a software-management-upgrade mode and is in the early stage of the "logistics cost management era", while the management mode of large warehousing and logistics enterprises has entered a new era of information-based management and control. With the integration of information technology and science, logistics companies and their personnel need to receive and process a large amount of data and information


every day. Relying only on the original information management and control technology, it is difficult to meet the current demands of rapidly developing information technology. This requires us to apply brand-new technology and use the speed and convenience of Internet data in daily work management. In this context, informatized management has become part of our daily work and life [2], and the traditional definition of informatization is gradually changing: it no longer refers only to the text, computing and instant messaging tools used at work, but has become an indispensable means of daily management and operation. Therefore, with the development of the emerging Internet of Things technology, it is of key significance for enterprises and logistics businesses to study how to manage and allocate stored goods more efficiently, compress the warehousing process, and improve the utilization efficiency of people, finance and materials.

2 Hardware Design of Warehousing and Distribution Management System based on RFID Technology of Internet of Things

2.1 RFID System Design

(1) Build the EPC middleware. To reduce the complexity of application development, the core processing function of the EPC middleware filters and groups the data read by the reading devices in time, so that disordered raw reads form meaningful events, while the differences between different types of readers and writers are shielded behind a unified interface. EPC middleware is a general service located between the hardware and operating-system platform and the applications; it has standard program interfaces and protocols to support these services. The functions of the middleware are as follows: control the read-write devices according to the predetermined working mode and ensure good coordination among them; filter data according to certain rules so that most of the redundant data is cleared and only the genuinely effective data is transmitted to the background information system [3]. Savant's layered and modular middleware components are adopted. The processing container module is its core, and the processing modules it contains are divided into standard and user-defined ones; the API defined by a special processing module interacts with the internal modules. The internal structure of the Savant middleware is shown in Fig. 1. The Savant system has the following characteristics. The Savant program is distributed and can summarize and integrate information flows level by level; it can collect, store and send data at different levels and connect with other Savant programs to form a distributed whole. Savant systems at the edge of the network exchange information directly with the readers and perform data proofreading, i.e. data cleaning. Reader coordination: when readers cover two overlapping areas, the same label may be read twice and redundant electronic product codes generated; analyzing the reads and deleting these redundant product codes is a task for Savant.


Fig. 1. Savant middleware internal structure
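As an illustration of the reader-coordination step described above, the sketch below drops duplicate EPC reads reported by overlapping readers within a short time window. The window length and the read-record format are assumptions for the example, not details of the Savant specification.

```python
from typing import Iterable, List, Tuple

# a read record is (epc_code, reader_id, timestamp_seconds); format assumed for illustration
Read = Tuple[str, str, float]


def deduplicate_reads(reads: Iterable[Read], window: float = 2.0) -> List[Read]:
    """Keep the first report of each EPC and drop repeats seen again within `window` seconds."""
    last_seen = {}   # epc -> timestamp of the most recent read
    accepted = []
    for epc, reader_id, ts in sorted(reads, key=lambda r: r[2]):
        if epc not in last_seen or ts - last_seen[epc] > window:
            accepted.append((epc, reader_id, ts))
        last_seen[epc] = ts
    return accepted


raw = [("urn:epc:id:sgtin:0614141.107346.2017", "reader-A", 10.0),
       ("urn:epc:id:sgtin:0614141.107346.2017", "reader-B", 10.4),  # overlap duplicate
       ("urn:epc:id:sgtin:0614141.107346.2018", "reader-A", 11.0)]
print(deduplicate_reads(raw))  # the reader-B duplicate is removed
```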

For data transmission, the Savant system must decide, at each level, what information needs to be transferred up or down the supply chain [4]. Data storage: the Savant system maintains a real-time event database. Management tasks: whatever level a Savant system sits at in the hierarchy, it has a unique task management system (TMS), so that user-defined tasks can be used for data management and data monitoring.

(2) Build the ONS middleware. The object name resolution service (ONS) is an automatic network service system. An EPC tag stores only the product's electronic code, not the product details; the static information is stored in the EPCIS. ONS provides the service that resolves an EPC code into one or a group of URLs, through which the details of the EPC-related product can be obtained. The records of the manufacturer's location are stored in the ONS, and DNS can reach the records of the EPCIS server location; DNS is used to build the ONS. The ONS program consists of ONS transmission, the ONS server, the local ONS, the cache program and the mapping information. The top level of the ONS hierarchy is the root ONS server, which owns the top-level domain name of the EPC namespace, so all ONS queries generally pass through it. Local ONS queries are answered by the local ONS, which returns the URL of a successful query. Frequently and recently queried URLs are saved in the local ONS cache to reduce external queries: the local cache is consulted first, which reduces the workload of the ONS servers and greatly improves query efficiency. The actual content of the service provided by the ONS system is the mapping information, which specifies the mapping between an EPC code and the relevant URLs, and the ONS servers at different levels store it in a distributed manner.


In this way, the ONS system makes the best use of the existing Internet and DNS architecture, saving a great deal of repeated investment.

(3) Activate label reading. This paper mainly studies the role of RFID technology based on the Internet of Things in the logistics warehousing and distribution management system. A characteristic function of RFID technology is the ability to read labels; it is often used in electronic readers, but it can also play a key role in logistics. Since logistics warehousing and distribution involve identifying and classifying addresses in various regions, RFID technology can realize this function well. To activate the tags in this system, the system must provide a large enough voltage Vre, and Vre must be greater than or equal to the activation voltage of the tag chip. To ensure normal operation of the label:

Vre ≥ Vla  (1)

The label induced voltage follows from the principle of electromagnetic induction:

Vla = 2π fr q k M μ0 H  (2)

where q is the quality factor of the label circuit, k is the number of turns of the label coil, M is the area of the coil, H is the magnetic field intensity, fr is the carrier frequency, and μ0 is the magnetic permeability of free space. The magnetic field that activates the label is therefore:

H = Vre / (2π fr q k M μ0) ≥ Vla / (2π fr q k M μ0)  (3)

and the minimum magnetic field required to activate the label is:

Hm = Vla / (2π fr q k M μ0)  (4)
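A small numerical illustration of Eq. (4) is given below; the tag parameters (activation voltage, quality factor, coil turns, coil area, carrier frequency) are hypothetical values chosen only to show the calculation.

```python
import math

MU0 = 4 * math.pi * 1e-7  # permeability of free space, H/m


def min_activation_field(v_la, f_r, q, k, m_area):
    """Minimum magnetic field strength H_m (A/m) needed to activate a tag, per Eq. (4)."""
    return v_la / (2 * math.pi * f_r * q * k * m_area * MU0)


# hypothetical HF tag parameters (not measured values from the system)
h_min = min_activation_field(v_la=3.0,       # chip activation voltage, V
                             f_r=13.56e6,    # carrier frequency, Hz
                             q=20,           # quality factor of the tag circuit
                             k=4,            # number of coil turns
                             m_area=0.0024)  # coil area, m^2 (about 6 cm x 4 cm)
print(f"minimum activating field: {h_min:.2f} A/m")
```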

2.2 Hardware Design of the Warehouse Management System

The warehouse management system provides a powerful configuration function that lets users flexibly adjust permissions, operation rules and many other business modes, adapting to business changes in a modular way. For the various data collected during warehouse management, each module transmits data to the EPC server through the middleware and can exchange information with the superior server through the ONS server to obtain task plans or report data. Internet users query the managed materials through the network query interface (their identity is verified at login) [5]. The user's entire warehousing operation is managed and controlled by the warehouse management system installed on the warehouse management workstation. The design of its functional modules has been basically completed, mainly including the warehousing (inbound) module, outbound module, system management module, report management module, system parameter management module, EPC code management module and inventory management module.


2.3 Hardware Design of the Distribution Management System

The distribution management system transmits the data collected during material distribution to the server through the middleware and can exchange information with the superior server through the ONS server to obtain task plans or report data. Each transit warehouse is equipped with a material distribution workstation, and each mobile warehouse with a vehicle-mounted terminal controller and a vehicle-mounted handset. Internet users query materials through the network query interface (their identity is verified at login). The entire material distribution operation is managed and controlled by the material distribution system installed on the workstation. The design of its functional modules has been basically completed, mainly including the material entry and exit management module, material data recording module, planned transportation module, radio-frequency barcode management module and material safety supervision module.

3 Software Design of Warehousing and Distribution Management System based on RFID Technology of Internet of Things

3.1 Software Design of Data Acquisition Module

3.1.1 Design Process of Collecting Goods Data
(1) Order processing design: the customer order is the beginning of every warehouse management process. The management process starts once an order is generated, and the related follow-up processes then enter their life cycle. After entering the system, the customer order is marked as status 1. The system then checks whether the order passes the audit and whether it meets the actual requirements; an unqualified order is returned to the customer and marked as status 0, while a qualified order moves to the next step. The relevant business operations are carried out according to the customer's order quantity and commodity category: when the order information is accurate, the planned shipment can be scheduled directly; otherwise the order category and quantity of goods must be changed and the status is recorded as 2, with the delivery plan remade immediately after the order information changes. Before the final shipment, the system checks once more whether the order has changed: if it is confirmed unchanged, the order is marked as status 3 and the goods are released and enter the to-be-shipped state; if there is a change, the shipment quantity, commodity type, arrival date and other plan items are modified according to the user's request and shipment waits [6].
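The status codes 0 to 3 described above form a small state machine; the sketch below captures those transitions in code. The function and field names are invented for illustration and are not taken from the system's actual implementation.

```python
from enum import IntEnum


class OrderStatus(IntEnum):
    RETURNED = 0       # failed audit, sent back to the customer
    ENTERED = 1        # order entered into the system
    MODIFIED = 2       # category / quantity changed, delivery plan remade
    READY_TO_SHIP = 3  # confirmed unchanged, waiting for shipment


def process_order(passes_audit: bool, info_accurate: bool, changed_before_shipment: bool) -> OrderStatus:
    """Walk one order through the status transitions described in Sect. 3.1.1."""
    if not passes_audit:
        return OrderStatus.RETURNED        # unqualified order goes back to the customer
    status = OrderStatus.ENTERED
    if not info_accurate:
        status = OrderStatus.MODIFIED      # order details corrected, delivery plan remade
    if changed_before_shipment:
        return OrderStatus.MODIFIED        # plan adjusted again, shipment waits
    return OrderStatus.READY_TO_SHIP       # final check found no change


print(process_order(passes_audit=True, info_accurate=True, changed_before_shipment=False))
# OrderStatus.READY_TO_SHIP
```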


Design process of data collection for inbound and outbound goods: after the basic information of the goods is checked before warehousing, the RFID electronic tag mounted on the freight pallet is written by the read-write device. When the goods enter the warehouse, the RFID reader/writer installed at the warehouse entrance scans the warehousing label and obtains the data, and the label information is checked; if the goods information does not match the information in the label, the goods information is re-entered. Once everything is verified, the management system center transfers the goods' attribute information to the database for backup. When goods leave the warehouse, the RFID reader at the warehouse gate scans the labels and checks them against the database: if a problem is found, the warehouse management staff are notified immediately to resolve it, and if everything is correct the goods information is cleared and the electronic labels are removed together with the goods.

3.1.2 Design Process of Environmental Storage Data Collection
The collection is designed on the ZigBee protocol stack, and the CodeBlocks development tool is used to write the data collection program for the JN5139 node module. After the node starts, it first initializes the hardware, then the software configuration, and then searches for a network to join. It then polls for pending events and handles any that are found promptly; if the node enters the sleep state, there are no pending events, and it waits for an interrupt before returning to the normal state. The node terminal program for collecting environmental storage data works as follows: a temperature and humidity reading function vReadTempHumidity() is created to check the status field of the temperature and humidity sensor structure (sTempHumiditySensor.eState); the initialization parameters are set and the reading is stored in sTempHumiditySensor.u16HumidReading. When the start conditions of the temperature and humidity sensor are met, the sensor is started, enters the standby state and reads the ambient temperature and humidity in real time. After the ambient temperature and humidity data are collected, they are sent to the control room for further processing.
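The node behaviour described in Sect. 3.1.2 (initialize, join the network, poll for events, sleep when idle) can be summarised as a small event loop. The sketch below is a simplified Python rendering of that control flow, not the actual JN5139 C firmware, and the sensor read is simulated.

```python
import random
import time


def read_temp_humidity():
    """Simulated sensor read standing in for the real temperature/humidity driver."""
    return round(random.uniform(18.0, 28.0), 1), round(random.uniform(40.0, 70.0), 1)


def node_main_loop(cycles: int = 3, joined: bool = True):
    # 1) initialize hardware, 2) initialize software configuration, 3) join the network
    if not joined:
        print("no network found, retrying later")
        return
    for _ in range(cycles):
        pending = random.random() < 0.5       # poll: is there an event to handle?
        if pending:
            temperature, humidity = read_temp_humidity()
            print(f"report to control room: {temperature} C, {humidity} %RH")
        else:
            time.sleep(0.1)                   # no pending events: sleep until the next wake-up

node_main_loop()
```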


3.2 Software Design of the Data Transmission Module

3.2.1 Design of the Node Coordinator Software
After the coordinator is powered on, it first initializes the hardware, then the software configuration, and then prepares the network. Channels are scanned sequentially by having the coordinator send data frames: if a channel is determined to be free, a data frame response is obtained; otherwise the next channel is scanned. Once the channel is fixed, the coordinator listens for data on the address assigned by the network, and if a non-coordinator node applies to join the network, the coordinator assigns it a specific network address. During collection, all detected data are aggregated by the coordinator node and delivered to the warehouse information system management center on schedule. If the coordinator cannot transmit and handle the information requests sent by the nodes in time, it discards that data; at the same time, if a node's request is not answered in time, the coordinator receives the request again and responds to it. In this working mode the coordinator is always active and never sleeps [8]. The indirect method is used for network data transmission, and the transmission process covers the terminal request, the data and the confirmation information. The main program flow, expressed as pseudo code, is: initialize the uint8 *pu8Payload data pointer, check the RFID location number, and when the conditions are met create a MAC MCPS_REQ_DATA transfer request; via sMcpsReqRsp.uParam.sReqData.u8Handle, set up the hardware, respond to the request, set the address of the source node and the short address of the destination node. If a request event occurs, confirm the request and use indirect data transmission; increment the data variable, move to the next source node address, and repeat the same operation. Finally, assemble the scanned data, set the transmission data and its length, and transmit.

3.2.2 Design of the Node Routing Software
A routing node in the network collects its own data and the data of nearby nodes, which come from the application's own sensor devices. The collected data are processed as required and the packets are forwarded, completing the data transfer between nodes. The procedure for data received by a routing node is as follows: first read the node's long and short addresses and judge whether a data frame has been lost. If no frame is lost and a new frame arrives, the frame counter is incremented and the data are displayed on the PC; if a frame is lost, the frame is corrected and the data are then displayed on the PC [9].

3.3 Test and Analysis of the Collected Data
During trial operation, further testing is required. The logistics storage center uses the key indicators the system can achieve to evaluate the performance of RFID-based logistics warehouse management. The key evaluation indicators and their results after the system was applied are shown in Table 1, and the comparison between the data collection results and the traditional management system is shown in Table 2. Similar methods were used to test data acquisition at different tag storage locations, and the corresponding detection results were recorded; they show that the storage location of the tags, together with other factors, is closely related to data acquisition. The system can accurately identify and obtain the label and location information of the pallets, which meets the company's functional requirements for collecting cargo storage information [10].


Table 1. Stage evaluation indicators and their effects

Primary index               Secondary index                           Effect
Reader/writer performance   Read/write distance                       1.5 m
                            Read/write speed                          35 pieces/second
                            Read/write accuracy                       99.8%
System module               Intelligent reading and writing           Excellent
                            System query, addition, deletion, etc.    Excellent
                            Information receiving speed               Excellent
                            System stability                          Excellent

Table 2. Data analysis and comparison of RFID warehouse management system

                             Traditional management system   RFID warehouse management system
Warehousing speed of goods   70 T/H                          100 T/H
Cargo throughput             800 T/H                         1000 T/H
Warehouse storage            7000 T                          10000 T
Error rate                   5.3%                            4.5%
Labor cost                   120,000 yuan/month              50,000 yuan/month
Warehouse management cost    50,000 yuan/month               30,000 yuan/month
Sales revenue                12 million yuan                 14 million yuan

Based on Table 2, it can be seen that the RFID warehouse management system has obvious advantages over the traditional warehouse management system in every respect. However, these advantages may not be obvious from the table data alone, so this paper also plots the warehousing speed and cargo throughput of the two systems, as shown in Fig. 2: Fig. 2A shows the traditional warehouse management system and Fig. 2B the RFID warehouse management system. The comparative data show that the warehousing speed of the RFID system is 100 T/H and its cargo throughput 1000 T/H, while the traditional system reaches 70 T/H and 800 T/H respectively.

Fig. 2. Comparison of warehousing speed and cargo throughput of two management systems

The warehousing speed of the RFID warehouse management system is thus 30 T/H higher than that of the traditional system, an obvious gap, and its cargo throughput is 200 T/H greater. It can be seen that the efficiency of the RFID warehouse management system is much higher than that of the traditional warehouse management system.

4 Conclusion

With the growing maturity of wireless sensing, digital mobile communication and automation technology, the intelligent terminal equipment of logistics warehousing systems is becoming ever more capable, providing the software and hardware foundation for building "smart logistics", and the application prospects of IoT technology are broad. In China, however, the application of Internet of Things technology in logistics warehousing started late and is still at a relatively low-end stage: it is not yet mature in data collection and transmission, system network architecture, and information encryption and security, and there is still a large gap to the requirements of modern "intelligent logistics" and "intelligent warehousing". These aspects leave ample room for further exploration and research.


References 1. Dimitriou, T.: Key evolving RFID systems: Forward/backward privacy and ownership transfer of RFID tags. Ad Hoc Netw. 37, 195–208 (2019) 2. Chen, M., Luo, W., Mo, Z., et al.: An efficient tag search protocol in large-scale RFID systems with noisy channel. IEEE/ACM Trans. Networking 24(2), 703–716 (2020) 3. Tewari, A., Gupta, B.B.: Cryptanalysis of a novel ultra-lightweight mutual authentication protocol for IoT devices using RFID tags. J. Supercomput. 73(3), 1085–1102 (2016). https:// doi.org/10.1007/s11227-016-1849-x 4. Jedda, A., Mouftah, H.: Decentralized RFID coverage algorithms with applications for the reader collision avoidance Problem. IEEE Trans. Emerging Topics Comput. 1 (2020) 5. Farris, I., Militano, L., Era, A., et al.: Assessing the performance of a novel tag-based readerto-reader communication paradigm under noisy channel conditions. IEEE Trans. Wirel. Commun. 15(7), 4813–4825 (2021) 6. Zhao, J.X.: Research and design on the SHT11 monitoring system by proteus. Electron. Des. Eng. 7 (2021) 7. Antolí, N.D., Medrano, N., Calvo, B.: Reliable lifespan evaluation of a remote environment monitoring system based on wireless sensor networks and global system for mobile communications. J. Sens. 2021, 1–12 (2021) 8. Panjaitan, S.D., Fratama, N., Hartoyo, A.: Telemonitoring temperature and humidity at bioenergy process using smart phones. 14(2), 762 (2020) 9. Zheng, X.X., Li, L.R., Shao, Y.J.: A GSM-based remote temperature and humidity monitoring system for granary. 44, 01060 (2019) 10. Zhao, J.X.: Research and design on the SHT11 monitoring system by proteus. Electron. Des. Eng. (2019)

Shipping RDF Model Construction and Semantic Information Retrieval Wei Guan1(B) and Yiduo Liang2 1 Research and Training Center, Dalian Neusoft University of Information, Dalian, Liaoning,

China [email protected] 2 School of Software, Dalian University of Foreign Languages, Dalian, Liaoning, China

Abstract. In the era of big data, China's shipping industry is developing rapidly, but shipping information management remains at a low level, which hinders the industry from advancing further. To address this, this paper combines semantic web technology with shipping information, constructs a shipping RDF semantic model and realizes shipping semantic information retrieval, with the aim of improving the efficiency of shipping information management. Keywords: Semantic Web · RDF Model · Shipping Information

1 Introduction

In the era of big data, the shipping industry has become an inseparable and important part of the country's economic development: it not only accounts for a large share of GDP but also greatly boosts related industries and provides a large number of jobs. With the acceleration of global economic integration, traditional Chinese shipping enterprises are being challenged by more and more foreign counterparts. Facing increasingly fierce competition in the international shipping market, traditional shipping enterprises are being converted into modern logistics enterprises, and "modernization of shipping management driven by informatization" will certainly become an important means of improving enterprises' capacity for scientific and technological innovation, promoting a virtuous cycle of development, and enabling Chinese shipping enterprises to compete internationally. However, the Chinese shipping industry is still at a low level of informatization: both business volume and work efficiency are low, which greatly slows the industry's development. Most information is still stored and managed in traditional ways, with problems such as narrow content, limited scope and lagging timeliness. For example, there is no restraint mechanism for the reporting of production data by shipping companies, shipping agents and freight forwarders, and the accuracy of the information is not guaranteed, which leads domestic shipping enterprises to rely mostly


on the information and judgment of institutions or investment banks, and such information usually carries bias and cannot give shipping enterprises objective guidance. On the business side, many Chinese ports such as Shanghai and Ningbo have successively established electronic data exchange centers, but effective information sharing between ports has not been achieved, the expected information aggregation effect is missing, and the accuracy of the information is not guaranteed, so its usefulness is greatly reduced. The emergence of the semantic web has changed this situation to a large extent. The Semantic Web is a vision of the next-generation Internet put forward by Tim Berners-Lee in 2000: by introducing explicit semantic and structured descriptions for all kinds of data on the Web, machines can understand web resources and computers can exchange information on a semantic basis. The goal of the semantic web is to describe information in a form that computers can understand and process; its ultimate goal is to improve the automation and intelligence of the Internet so that computers can find exactly and efficiently the information users need among web resources, turning the existing information islands of the World Wide Web into a huge knowledge base. Because this matches people's expectations of the Internet, the construction of the semantic web has become a hot research topic. At present, natural language processing research has not yet reached a level where it can be used practically on a large scale, so the most feasible way to enable machines to understand semantic information correctly is to provide a unified, formal description of the meaning of resources through standards. RDF is a common language for representing web information proposed by the W3C; it draws on the knowledge representation of the semantic web and describes web information with triples (s, p, o), where s is the subject, p the predicate and o the object. Kim S. et al. proposed a scene graph generation method based on the Resource Description Framework (RDF) model to elucidate semantic relationships [1]. Shaik M. H. used RDF and SPARQL to capture the user's query intent and a constructed career ontology, providing the required information in a semantic manner [2]. Mozahker Z. et al. used RDF and SPARQL to manage ship information [3] and discussed RDF data management and SPARQL query methods for patent information [4]. Lee S. et al. used SPARQL for graph mining in RDF triplestores to achieve in situ graph analysis [5]. Charalampos et al. investigated the use of SPARQL for querying incomplete information in RDF [6]. Ma R. et al. discussed techniques for SPARQL queries over RDF with fuzzy constraints and preferences [7]. Barbieri D. F. et al. investigated methods for querying RDF stream data using C-SPARQL [8]. Anelli V. W. et al. discussed combining RDF and SPARQL with CP-theory and reasoning about preferences in a Linked Data environment [9]. Arenas M. et al. analyzed an expression language for querying the semantic web [10]. It follows that combining the semantic web with shipping information will improve efficiency and accelerate the development of the shipping industry.


Therefore, taking information resources in the shipping field as an example, this paper first carries out semantic modeling of shipping RDF and serializes the RDF using XML storage. The Jena development environment is then used to implement the creation, reading, parsing and querying of shipping RDF data. Finally, the conclusions of the paper are given.

2 Shipping RDF Semantic Modeling

2.1 Ontology Concepts

The term "ontology" originates from philosophy, where it denotes the "science of being". It was later widely adopted in computer science to refer to a normative definition of all objects in a domain and the relationships between them, also known as a "formal ontology". The purpose of an ontology is to provide an explicit, normative description of a conceptual model shared within the domain. An ontology should be designed to be domain-centric and independent of particular application scenarios, so that it can be reused and extended to the greatest possible extent. Ontologies are used in two main ways:

(1) Performing logical reasoning. Ontologies based on description logic allow a reasoner to derive hidden new knowledge. Description logics formally represent the definitions of entities as logical axioms, and an ontology is a collection of such axioms.
(2) Connecting knowledge from different sources. Any custom ontology can refer to any other ontology, and ontologies also allow equivalence relationships to be defined between classes.

With the help of ontologies, it is possible to define highly expressive models using classes and objects in much the same way as they are used in programming languages such as Java. An ontology consists of a number of entities, which can be classes, properties and individuals. Ontologies and object-oriented programming share many common elements but use different vocabularies for the same or similar concepts. The mapping between the elements of object-oriented programming and those of formal ontologies is shown in Table 1.

Table 1. Correspondence between object-oriented programming and formal ontologies

Object-oriented programming      Formal ontologies
Module                           Ontology
Class                            Class
Class name                       IRI
Object                           Entity
Attribute                        Property
Datatype                         Datatype
Instance                         Individual
Value of an attribute            Relation

2.2 Building RDF Model

First, a typical business scenario in the shipping domain is described. The cargo owner contacts a cargo agent (freight forwarder) and provides the cargo information. Based on the specific situation of the cargo and the liner information at hand, the agent judges whether the goods can be shipped for the cargo owner. If shipment is possible, a contract of intent is signed and the shipping company is contacted to prepare the shipment. If the cargo and the desired schedule do not match the company's existing business, the cargo cannot be shipped, and the cargo owner has to find another agent. After the shipping company receives the order from the freight forwarder, it makes a shipping offer according to its existing ships, containers and routes. The forwarder adds its own margin as appropriate, makes a final offer and sends it to the shipper. If the cargo owner accepts the offer, the contract is signed; if not, further negotiation is required, or the shipper chooses another forwarder. Once the contract is signed, the detailed cargo information is sent to the shipping company, which draws up the final delivery plan based on its own ships, containers and routes.


According to the shipping management process introduced above, seven entities are abstracted: cargo owners, ship lenders, ships, containers, terminals, shipping routes and shipping companies, together with the interrelationships among them. Based on the actual shipping information, two-dimensional tables are established for these seven entities: the cargo owner table, ship lender table, shipping company table, ship table, container table, route table and terminal table. Among them, the container entity includes a voyage attribute, and the voyage attribute is the primary key of the route entity. The route entity includes ship code, shipping company code, origin, transit and destination attributes, which are in turn the primary keys of the ship entity, the shipping company entity and the terminal entity respectively. From the interconnections between these entities, the resources in the RDF model can be identified: container resources, route resources, terminal resources, ship resources and shipping company resources.

From the shipping information provided by Dalian port, one record is selected as the basis for constructing the RDF model. The primary key of the container entity is its ID number, which is also the resource of this RDF model; its attributes include the container code and remarks. In the container table, the voyage is the primary key of the route table. In the route RDF model, the voyage is the resource, and the ship code, shipping company code, voyage period, starting time, starting point, transit location, transit time, ending point and ending time are its attributes. Among these attributes, the ship code, shipping company code, starting point, transit point and ending point are themselves resources, and therefore contain their own attributes in turn.

2.3 RDF Serialization

Graphs are useful tools for people to understand things, but the semantic web needs machine-readable and processable representations. Therefore, an XML-based representation of RDF documents can be used. An RDF document is represented as an XML element with the tag rdf:RDF, and its content is a series of descriptions using the tag rdf:Description. Each description makes a statement about a resource. Following the XML syntax and the RDF encoding specification, the shipping RDF graph is expressed in XML form; the result is as follows.
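A minimal illustrative sketch of such a serialization is shown below. It assumes the ship-rdf namespace and the cargocode/linecode property names that appear in the queries of Sect. 3.2 and the container resource http://someplace/0001 described in the next paragraph; the remarks property name, the route URI and all literal values are placeholders for illustration, not the authors' data.

<?xml version="1.0"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:ship="http://www.w3.org/2001/ship-rdf/3.0#">
  <rdf:Description rdf:about="http://someplace/0001">
    <ship:cargocode>CARGO-PLACEHOLDER</ship:cargocode>
    <ship:remarks>remark text</ship:remarks>
    <!-- the voyage/route is itself a resource, so it is referenced rather than written as a literal -->
    <ship:linecode rdf:resource="http://someplace/route-placeholder"/>
  </rdf:Description>
</rdf:RDF>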


The first line declares the use of XML. This line is omitted in later examples, but keep in mind that it is required in any RDF document based on XML syntax. The rdf:Description element describes the resource http://someplace/0001. In this description, each property is a tag and its content is the property value. The resource http://someplace/0001 has three properties, and the linecode property refers to another resource, so a resource reference rather than a literal is used there. The resource referenced through linecode likewise has multiple properties, some of which are again resources, and the properties of these resources are parsed in sequence in the subsequent XML statements. Multiple descriptions must appear in a certain order; that is, XML syntax forces a serialization of the descriptions. According to the abstract data model of RDF, however, the order of descriptions (or resources) is irrelevant. This again shows that the graph model is the real RDF data model, while XML is only one possible serialized representation of it.

3 Shipping Semantic Information Retrieval

To represent the established RDF data in the computer, the RDF model must first be created in the computer.

3.1 Parsing RDF Data

An RDF model consists of many fact statements, and each triple record is a description of a resource. A statement has three parts: the subject is the resource described by the statement, the predicate expresses a property of the subject, and the object is another resource or a literal. We can either build the RDF model in Java or create the RDF file ourselves according to the XML syntax; an RDF file created by hand should be readable by the computer, and reading it should yield the same model as the one created in Java. First, the file vc-db-1.rdf is created in the local path, containing the three RDF records defined in advance, and these three records are then read in Java. Although the Java program can display the contents of the RDF file, it does not by itself explain what those contents mean: an ordinary user cannot understand the XML representation, so the same file needs to be parsed and shown as triples. In this way, the user can learn the meaning of the RDF content. The specific parsing steps are as follows.
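The snippet below is a minimal sketch of the described workflow using the Apache Jena API (an illustration under the assumption of a current Jena 3.x/4.x distribution, not the authors' original listing): an empty model is created, vc-db-1.rdf is read from the local path, and the statements are iterated over and printed as triples.

import org.apache.jena.rdf.model.*;
import org.apache.jena.riot.RDFDataMgr;

public class ReadShippingRdf {
    public static void main(String[] args) {
        // create the empty model "m" and load the hand-written RDF/XML file into it
        Model m = ModelFactory.createDefaultModel();
        RDFDataMgr.read(m, "vc-db-1.rdf");

        // iterate over all statements and print each one as a (subject, predicate, object) triple
        StmtIterator iter = m.listStatements();
        while (iter.hasNext()) {
            Statement stmt = iter.nextStatement();
            Resource subject = stmt.getSubject();
            Property predicate = stmt.getPredicate();
            RDFNode object = stmt.getObject();          // may be a resource or a literal

            String objectText = (object instanceof Resource)
                    ? object.toString()                  // another resource (URI)
                    : "\"" + object.toString() + "\"";   // a literal value
            System.out.println(subject + "  " + predicate + "  " + objectText);
        }
    }
}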


The "m" variable is the RDF model established previously. The Jena Model interface contains a listStatements() method, which returns a StmtIterator. StmtIterator is an iterator over all statements and has a nextStatement() method, which returns the next statement in the iteration. The Statement interface provides access to the subject, predicate and object of a statement. The object of a triple can be a resource or a literal; the getObject() method returns an RDFNode, a common superclass of resources and literals. The concrete object is of a more specific type, and the instanceof operator can be used to test which type it is and process it as needed.

3.2 Querying RDF Data

In this part, the established shipping information RDF file is queried. The specific query process is as follows (a sketch of executing such queries with Jena is given after the four example queries).

(1) Query all RDF instances
The query statement is as follows:
SELECT ?s ?a ?NOTE1 WHERE {?s ?a ?NOTE1};
The query results are shown in Fig. 1.


Fig. 1. Query all RDF instances

(2) Query the value of CARGOCODE for container number 0001
The query statement is as follows (the subject is the container resource http://someplace/0001 described above):
SELECT ?cargocode WHERE { <http://someplace/0001> <http://www.w3.org/2001/ship-rdf/3.0#cargocode> ?cargocode};
The query results are shown in Fig. 2.

Fig. 2. Query the value of CARGOCODE for container number 0001

(3) Query all LINECODE data
The query statement is as follows:
SELECT ?NOTE1 WHERE {?s <http://www.w3.org/2001/ship-rdf/3.0#linecode> ?NOTE1};
The query results are shown in Fig. 3.


Fig. 3. Query all LINECODE data

(4) Query all information of route 061E
The query statement is as follows (the subject is the route resource for voyage 061E):
SELECT ?PROPERTY ?NOTE1 WHERE { ?PROPERTY ?NOTE1};
The query results are shown in Fig. 4.

Fig. 4. Query all information of route 061E
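These SELECT statements are executed against the model through Jena's ARQ query engine. The listing below is a minimal illustrative sketch, not the authors' code, showing how the first query can be run against the model m loaded earlier; the tabular output corresponds to the result tables in Figs. 1 to 4.

import org.apache.jena.query.*;
import org.apache.jena.rdf.model.Model;

public class QueryShippingRdf {
    // run a SPARQL SELECT query against the shipping RDF model and print the result table
    public static void runSelect(Model m, String sparql) {
        Query query = QueryFactory.create(sparql);
        QueryExecution qexec = QueryExecutionFactory.create(query, m);
        try {
            ResultSet results = qexec.execSelect();
            ResultSetFormatter.out(System.out, results, query);   // tabular output, as in Fig. 1
        } finally {
            qexec.close();
        }
    }

    public static void demo(Model m) {
        // query (1): list every statement in the model as subject / predicate / object
        runSelect(m, "SELECT ?s ?a ?NOTE1 WHERE { ?s ?a ?NOTE1 }");
    }
}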

3.3 Evaluation Indicators and Discussion of Results

Different retrieval tasks have different evaluation metrics, and the same task sometimes uses different metrics depending on the focus. The evaluation metrics for information retrieval generally include Precision, Recall and the F1 score.

(1) Precision
Precision concerns the correctly predicted positive samples, not all correctly predicted samples. It is the number of correctly predicted positive cases divided by the total number of positive predictions. It is calculated as follows:

Precision = TP / (TP + FP)    (1)

In the above formula, TP refers to True Positive, while FP refers to False Positive.


(2) Recall
Recall is an indicator that measures coverage. It is the number of correctly predicted positive cases divided by the total number of actual positive cases. It is calculated as follows:

Recall = TP / (TP + FN)    (2)

In the above formula, FN refers to False Negative.

(3) F1 Score
The F1 score is used to measure the accuracy of a binary classification model; it balances precision and recall and varies from 0 to 1. It is calculated as follows:

F1 = 2 · Precision · Recall / (Precision + Recall) = 2TP / (2TP + FN + FP)    (3)
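A minimal sketch, not taken from the paper, of how these three metrics are computed from the raw true-positive, false-positive and false-negative counts obtained from the retrieval results:

public final class RetrievalMetrics {
    // compute Precision, Recall and F1 from confusion-matrix counts, per Eqs. (1)-(3);
    // assumes the denominators are nonzero
    public static double[] compute(int tp, int fp, int fn) {
        double precision = tp / (double) (tp + fp);
        double recall    = tp / (double) (tp + fn);
        double f1        = 2 * precision * recall / (precision + recall);
        return new double[] { precision, recall, f1 };
    }
}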

Based on the analysis of the results of RDF semantic retrieval, the following rules can be summarized. First, semantic retrieval is smarter than ordinary text retrieval in that it can obtain more semantically related query results than those containing simple keywords. Second, recall rate and precision rate are contradictory measures. The recall rate reflects the ability of the model to identify positive samples, while the precision rate reflects the ability of the model to distinguish negative samples. When the precision rate is high, the recall rate is often low and vice versa. Third, when both recall rate and precision rate are high, the value of F1 will also be high. In the case where both are required to be high, F1 can be used to measure the semantic retrieval results.

4 Conclusions

As a platform supported by next-generation web technology, the semantic web is gradually turning the Internet into a huge global knowledge base, supporting semantic information, intelligent search engines and intelligent reasoning through standard semantic specifications. Combining semantic web technology with shipping information management, as shown above, largely promotes the integration of shipping information, improves the efficiency of shipping management and reduces much unnecessary and redundant data. More importantly, data that machines previously could not recognize become machine-readable and can be reasoned over and processed, so that shipping information management can be made real-time, agile, efficient and visual, which is an objective requirement for improving the core competitiveness of shipping information services. The shipping information chain can use this platform to integrate all participants of the shipping supply chain and realize a knowledge-collaborative shipping information service system based
on the integration of Agent technology. According to the current shipping information and the existing semantic web technology, the future combination of semantic web and shipping information will develop in a highly intelligent direction, and then realize the shipping knowledge collaborative system structure. Acknowledgements. This work was supported by the Social Science Planning Fund Project of Liaoning Province “Research on the Realization Path and Guarantee Mechanism of Government Data Opening in the Context of Big Data” (Grant No. L19CTQ002).

References 1. Kim, S., Jeon, T.H., Rhiu, I., et al.: Semantic scene graph generation using RDF model and deep learning. Appl. Sci. 11(2), 8–26 (2021) 2. Shaik, M.H.: Semantic information retrieval: an ontology and RDF based model. Int. J. Comput. Appl. 156(9), 34–38 (2016) 3. Mozahker, Z., Shin, O.K., Park, H.C.: Management of vessel information by using RDF and SPARQL. J. Digital Contents Soc. 21(11), 1939–1946 (2020) 4. Mozahker, Z., Kim, J.R., Shin, O.K., et al.: RDF data management and SPARQL query for patent information. J. Korean Inst. Inf. Technol. 18(8), 31–39 (2020) 5. Lee, S., Sukumar, S.R., Hong, S., et al.: Enabling graph mining in RDF triplestores using SPARQL for holistic in-situ graph analysis. Expert Syst. Appl. 48(4), 9–25 (2016) 6. Nikolaou, C., Koubarakis, M.: Querying incomplete information in RDF with SPARQL. Artif. Intell. 237, 138–171 (2016) 7. Ma, R., Jia, X., Cheng, J., et al.: SPARQL queries on RDF with fuzzy constraints and preferences. J. Intell. Fuzzy Syst. 30(1), 183–195 (2015) 8. Barbieri, D.F., Braga, R., Ceri, R., et al.: Querying RDF streams with C-SPARQL. SIGMOD Record 39(1), 20–26 (2010) 9. Anelli, V.W., Leone, R.D., Noia, T.D., et al.: Combining RDF and SPARQL with CP-theories to reason about preferences in a linked data setting. Semantic Web 11(5), 1–29 (2018) 10. Arenas, M., Gottlob, G., Pieris, A.: Expressive languages for querying the semantic web. ACM Trans. Database Syst. 43(3), 1–45 (2018)

Application of 3D Virtual Reality Technology in Film and Television Production Under Internet Mode

Zhenping Gao(B)

The Film and Television Department, Wuxi City College of Vocational Technology, Wuxi, Jiangsu, China
[email protected]

Abstract. Connecting television art with science and technology and applying virtual reality technology to film stunt production will completely change the presentation form of films and the characteristics of film images, and will directly affect how television results are expressed. Virtual reality technology is used in the scene design of television art works: the designer presents film stunt scenes through three-dimensional technical means. Television science and technology should be deeply integrated with modern film and television art, because traditional television design methods can no longer meet modern aesthetic requirements, and new technology should be applied to the creation of artistic works. From early picture creation to the post-production of live photography, although there are differences between traditional graphic expression techniques and the technologies used in modern virtual reality scenes, the latter can still provide useful reference for the post-production processes and methods of domestic films in the future.

Keywords: Internet mode · 3D virtual reality · Film and television production

1 Introduction

Virtual reality technology is applied to the scenes, characters, special effects and other aspects of television creation, and has become a key technical means for producing science-fiction television images at home and abroad. This paper focuses on the current status of the application of virtual reality technology and analyses its new characteristics in contemporary television images. The practical application of virtual reality technology in television stunts incorporates a large number of new scientific and technological means. Through the scene-model method in 3D stereoscopic painting and the three-dimensional mapping method in 3D space, various integrated design methods are gradually combined and formed. The use of virtual imaging technology will change the traditional way of film and television production and the past practice of shooting before production, thus forming a new direction for the development of the film and television industry. The overall production mode is shown in Fig. 1.


Fig. 1. Film and Television Production Mode

Fig. 2. Create three views


The application of virtual reality technology forms the images used in film and television shooting. At the same time, it can greatly improve the recording effect of television works, thereby reducing production cost, saving production resources and improving the choreographer's, director's and cameraman's command of the entire picture, allowing performers to add their performance to the virtual reality picture and thus enhancing the viewing effect, as shown in Fig. 2.

2 Introduction to 3D Virtual Reality Technology

3D virtual reality technology is a comprehensive new technology that emerged at the end of the 20th century. It integrates digital image processing, computer graphics, multimedia technology, sensor technology and many other kinds of information technology [1].

2.1 3D Virtual Reality System Features

The visual environment generated by a 3D virtual reality system is usually three-dimensional, with a harmonious and friendly human-computer interface, breaking the rigid and passive relationship between people and computers of the past. Human-computer interaction in a virtual reality system is close to natural interaction: users can interact not only with a keyboard and mouse but also with sensor devices such as special helmets and data gloves. The computer can, in turn, adjust the images and sound presented by the system according to the movement of the user's head, hands, eyes, language and body. Users can employ their own natural means, such as language, body movements or actions, to inspect or operate objects in the virtual environment. The three-dimensional stereoscopic images provided by the computer place the user inside the virtual environment and make the user part of it. Users receive visual, auditory, tactile and other sensations from the virtual environment and gain an immersive experience; interaction with objects in the virtual environment is just like interaction in the real world. In summary, 3D virtual reality technology is characterized by friendly interaction, strong immersion and multi-sensory stimulation, as shown in Fig. 3 [2–4].


Fig. 3. Schematic diagram of binocular overlap

2.2 3D Virtual Reality System Composition

3D virtual reality technology mainly involves an environment system and the corresponding software and hardware. In the environment system, the most critical part is the environment model. The typical polygon-based simulation method is one of the key development directions in the field of virtual reality simulation. In addition, image-based modeling has made rapid progress in recent years: a group of sampled images can be used to build the model of the virtual environment, so that 3D virtual reality technology can be used and popularized on ordinary computers [5].

2.3 Application of Virtual Reality Technology in Role Building

Virtual reality technology can be combined with scenes of everyday human life and displayed virtually in a painting software system. Actors can use the software system to display human facial movements and expressions, as well as actions that cannot be performed directly. Virtual reality technology has been used by many directors in the production of various creative film scenes. For example, when "game story" was filmed, more than 100 editors and directors worked on it; most of the characters in the film were animated, and many images were produced using virtual reality technology. Owing to the rapid development of virtual reality technology, such high technology is now widely used in the production of television art works, and technical personnel also use these emerging means to create animation plots. In television drama production, they can explore the performance of animal art images in a virtual society, as shown in Fig. 4 [6–8].


Fig. 4. Roaming HUD layout

3 Application of Virtual Reality Technology in Scene Atmosphere

3.1 Domestic Film Application

Many domestic films and animations use virtual reality technology, such as "the return of the great sage" and "the story of catching demons", and also use it to express 3D animation content. "The return of the great sage", for example, is an excellent 3D film and television art work of the domestic animation industry. Because this animation also greatly expanded the field of Chinese adult animation, virtual reality technology was introduced to turn it into a 3D stereoscopic animated film. However, owing to problems of resources and experience, its creativity is still a long way from the overseas level [9]. Virtual character setting in such animation usually relies on virtual reality technology, which has a strong motion-capture ability, together with orientation technology, including dynamic sequences of the character's body, video monitoring and analysis, motion information, and the human model of the fictional character; the currently popular montage technique is also used. In this work, however, the character setting is not perfect: because of the production schedule, the design of the hair and facial expressions of the Great Sage is not detailed enough [10]. Since the virtual natural environment was created without applying the relevant materials and surface textures of optically sensed objects, the design is not very clear and recognizability suffers. The film is stronger in showing the chaotic fights between the Great Sage and the villain, with a fairly comprehensive application of real-time collision detection, so that viewers can fully immerse themselves in the work. Although the character setting has defects and the detailed treatment of some scenes is not appropriate, the film is sufficient in carving the characters' physiques and in the climactic fight scenes, and it does not deviate from the essence of the original story, combining
innovative technologies such as virtual reality with the creation of traditional art works [11–13].

3.2 Hollywood Film Application

Hollywood has the most complete film-production technology system in the world, and its production technology for television works is also second to none. It has created a large number of outstanding science-fiction film and art works, such as the Transformers and Marvel series. Among them, Transformers won an Oscar for its excellent robot-building ability. The production process of such video works is quite complicated, with a large number of scenes of collision and battle between various pieces of metal machinery. All the actions of the robots during battle and the motion of each part during transformation were carefully designed; otherwise the audience would not find them believable [14].

3.3 Scene Design Features and Changes

The construction of a film scene must first be based on the design characteristics of the film's story writers: the writing styles of the writers and the director must be understood so that a realistic background consistent with the current level of market-economy development can be built. Secondly, when setting the scene, the writers' styles must be unified, the scene must be balanced against the personalities of the characters, and the writers' thinking and means must be connected. Creative design requires a great deal of imagination: in order to integrate the information of virtual resources, strong imagination should be used to modify existing materials scientifically and produce new creative art. This scene-design method is a product-design method based on comparison, connection and combination. At present, some science-fiction films also use a great deal of imagination to combine a variety of things in product design, thereby creating new film design methods and enriching the designer's thinking [15].

4 The Change of Virtual Reality Technology to Scene Design

When virtual reality technology is used in the special-effects shooting of film creation, a virtual situation must be created first, and the scene mode is then built according to the situational factors. Creating illusory scenes that are close to the actual situation is a key skill in film shooting: the actual information can be displayed by setting up the scene model and rendering the atmosphere. First of all, creating a model is the main step in building a situation; data and information are then injected into the scene mode to draw a virtual actual scene. Since virtual-scene modeling focuses on three- or multi-dimensional information, the three-dimensional spatial information must be constructed according to actual needs, together with the corresponding information-based modeling techniques. To build the physical model and the action model, the physical model, action mode and auditory mode must all be used. It is therefore necessary to calculate the pixel difference on the retina in scene visualization. Suppose two objects A1 and A2 appear in front of both eyes at the same time; then the distance e between the two objects and the retinal pixel difference (disparity) β are related as follows:

β = eH / (E² + eE)    (1)

where H is the distance between the left and right eyes, and E is the vertical distance between the eye and object A1. If object A1 is taken as the reference, a person's perceived value of the distance e is:

e = βE² / (H − βE)    (2)
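A tiny numerical sketch of Eqs. (1) and (2), an assumption-laden illustration rather than part of the paper: given the interocular distance H, the viewing distance E and the depth separation e between A1 and A2, the retinal disparity β is computed, and the perceived depth is then recovered from β (the two formulas are exact inverses of each other).

public final class BinocularDisparity {
    // Eq. (1): disparity produced by a depth separation e at viewing distance E, eye separation H
    static double disparity(double e, double H, double E) {
        return e * H / (E * E + e * E);
    }

    // Eq. (2): perceived depth separation recovered from a disparity beta
    static double perceivedDepth(double beta, double H, double E) {
        return beta * E * E / (H - beta * E);
    }

    public static void main(String[] args) {
        double H = 0.065, E = 2.0, e = 0.5;           // metres; illustrative values only
        double beta = disparity(e, H, E);
        System.out.printf("beta = %.6f, recovered e = %.3f m%n",
                beta, perceivedDepth(beta, H, E));    // the recovered value equals the original e
    }
}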

The virtual reality environment for scene-system design is generally constructed with Vega. The specific steps are as follows:

1) Use the camera/shooting equipment on the Vega platform to obtain the environmental material pictures to be designed, then use the drawing-box program and the immersive visual modeling method, based on realistic-model technology, to build the visualized scene image; this scene contains a large number of scene visualization pictures.
2) Edit and convert the obtained background material images. This process is generally simulated with a stereo-vision model. To obtain 3D data object files describing the visualized environment, 3D objects can be created with CAD/3ds Max tools to complete the visualization of the surrounding environment.
3) Because the amount of design-project information is large, the information used in actual engineering design must be optimized, since it otherwise interferes with design efficiency. Therefore, when designing the engineering scene on the C++ application library model and the Vega platform, immersive visualization based on realistic modeling must be adopted. Through visual modeling, a highly interactive, immersive visualization environment can be formed, so that the designer obtains an immersive engineering-design experience, which greatly improves design efficiency.

In the past, geometric icons such as points and lines could not accurately describe the scene vector field in architectural design; with the emergence of virtual reality and human-computer interaction technology, they now can. The visual modeling methods used in this paper fall into five types: target, source, filter, mapping and rendering. Modern digital scene-painting technology has achieved deep integration with 3D art. Common digital scene-drawing tools include the following:

1. Replacement. Substitution is a technique in digital scene painting. In on-site production, the realism of the picture often cannot be ensured; to create a realistic state, digital scene painting can be used to achieve substitution in post-production. The scene painting can be made from hand-painted materials, spliced directly from real materials, or a combination of both. In actual scene-drawing work, the image must be observed carefully and the distance of the lens mastered. In
landscape painting, the foreground, middle ground and background must be drawn so as to satisfy the perspective conditions and produce a complete artistic conception.
2. Scenario extension. Scene extension expands the scene environment on the basis of real photography to create rich scene and picture effects. In real film and television shooting, only the actual close range is usually constructed; to create convincing long-range effects, interactive scene extension must be used.
3. Character matting. In the production of a stunt shot, compositing is usually the last step, and whether the result is satisfactory depends closely on this step. During compositing, the software functions must be used well so that every element is shown truthfully, so the whole picture and shot structure must be fully understood before production. For character rotoscoping, the original material is processed first; to obtain a clean channel, the true character image is restored through matting. The edge and corner details are handled first, the scene is then processed after matting, and layer techniques are used to create a three-dimensional scene effect.

5 Conclusion

The organic integration of film and television art with virtual reality technology has changed the form and nature of film, producing soul-stirring visual effects. With the large-scale adoption of virtual reality technology, film scenes become more fantastic and magnificent, greatly improving the audience's viewing experience.

Acknowledgments. This work was financially supported by Jiangsu Province Education Science "13th Five-Year plan" Key funding project (Grant No. B-a2018/03/12): "Empirical and Theoretical Research on achievement Sharing of Vocational Education Community in Mixed Ownership Mode -- A Case study of Characteristics of Taobao Film and Television College of Wuxi City Vocational and Technical College" fund.

References 1. Jun, L., et al.: Effect of 3D slicer preoperative planning and intraoperative guidance with mobile phone virtual reality technology on brain glioma surgery. Contrast Media Mol. Imaging 2022 (2022) 2. Hameed, B.M., et al.: Application of virtual reality, augmented reality, and mixed reality in Endourology and urolithiasis: an update by YAU Endourology and urolithiasis working group. Front. Surgery 9, 866946 (2022) 3. Huanmei, L., Xin, N.: 3D indoor scene reconstruction and layout based on virtual reality technology and few-shot learning. Comput. Intell. Neurosci. 2022 2022 4. Henry, R., et al.: The current and possible future role of 3D modelling within oesophagogastric surgery: a scoping review. Surg. Endosc. 36, 5907–5920 (2022) 5. Jarrett T.H., et al.: Exploring and interrogating astrophysical data in virtual reality. Astron. Comput. 37, 100502 (2021)

6. Gargi, J., Abraham, J.: Virtual reality and its transformation in forensic education and research practices. J. Vis. Commun. Med. 45(1), 18–25 (2021) 7. Mahrous, A., Elgreatly, A., Qian, F., Schneider, G.B.:. A comparison of pre-clinical instructional technologies: natural teeth, 3D models, 3D printing, and augmented reality. J. Dental Educ. 85(11), 1795–1801 (2021) 8. Kuberan, P., et al.: Virtual reality three-dimensional echocardiographic imaging for planning surgical atrioventricular valve repair. JTCVS Tech. 7, 269–277 (2021) 9. Carl, G., Chad, M., Bonnie, L.: 3D, virtual, augmented, extended, mixed reality, and extended content forms: the technology and the challenges. Inf. Serv. Use 40(3), 225–230 (2020) 10. Ruess, P., Wingartz, N.: Virtual Reality als Instrument zur Gewinnung von Nutzerfeedback zu Technologieszenarien am Beispiel urbaner Mobilität virtual reality as an instrument for obtaining user feedback on technology scenarios using the example of urban mobility. HMD Praxis der Wirtschaftsinformatik 57(1), 230–243 (2020) 11. Technology - remote sensing; new findings from university of hamburg in the area of remote sensing described (Preserving the knowledge of the past through virtual visits: from 3d laser scanning to virtual reality visualisation at the istanbul catalca incegiz caves). J. Technol. (2020) 12. Technology - transportation technology; data from technical University Munich (TU Munich) provide new insights into transportation technology (Virtually the Same? Analysing Pedestrian Behaviour By Means of Virtual Reality). Transp. Bus. J. (2020) 13. Downer, T., Gray, M., Andersen, P.: Three-dimensional technology: evaluating the use of visualisation in midwifery education. Clin. Simul. Nursing 39, 27–32 (2020) 14. Pence, H.E.: How should chemistry educators respond to the next generation of technology change?. Educ. Sci. 10(2), 34 (2020) 15. Million insights; virtual reality in gaming market to grow based on increasing demand for novel gaming technologies and Platforms Till 2025 | Million Insights. J. Eng. (2020)

Simulation Research on Realizing Animation Color Gradient Effect Based on 3D Technology

Zhenping Gao(B)

The Film and Television Department, Wuxi City College of Vocational Technology, Wuxi, Jiangsu, China
[email protected]

Abstract. The rapid development of 3D technology has made it an important aid in Chinese animation works, saving a great deal of time and cost in animation creation and improving the visual experience of viewers. Owing to market conditions, however, 3D technology has not yet been widely used in 2D animation. By studying the advantages that 3D technology brings to animation creation, and drawing on existing research on its application in Chinese two-dimensional animation, this paper points out further development directions for Chinese animation in terms of animation forms, the optimization of creative teams and visual effects, so as to create animation works with more national characteristics and promote the development of the animation industry. Starting from the fact that 3D technology is currently used mainly in two-dimensional animation in China, the paper further explores the advantages it provides for animation creation and finally gives three prospects for its future application, so that animation art can develop better through this technological revolution.

Keywords: 3D technology · Animation color · Simulation study

1 Introduction Animation art is a type of comprehensive visual art, and its own development is closely related to science and technology. Modern technology has created more art forms for animation art, and animation art is also one of the promoting factors for the development of science and technology. The development of 3D technology first created a new art form of 3D animation for animation art. In recent years, it has been gradually used in animation, overcoming some drawbacks in animation and bringing more diversified development possibilities to animation.

2 Application of 3D Technology in Animation

In order to discuss the future development trend of 3D technology in two-dimensional animation, we must first clarify the actual use of this technology in two-dimensional animation at this stage [1].


2.1 Application in Scenarios

With the iterative updating of 3D software, 3D-style designs emerge in an endless stream. Powerful rendering plug-ins and rich 3D animation effectors have greatly improved designers' productivity and visual expression, and designers' creativity has developed further. Advances in communication and hardware technology allow designers to apply increasingly demanding 3D visuals and animation to practical projects. The 3D style can restore the materials, lighting effects and even the laws of physical movement of the real world to the greatest extent; under the designer's imagination, like the brush of the legendary painter Ma Liang, it combines the abstract and the real to present highly imaginative "surreal" effects. This surrealism is especially evident in motion design, which brings viewers a strong visual impact by presenting, in virtual space, dynamics that cannot be realized in reality. Particle beams are one of the material special effects that best convey a sense of science and technology, highlighting scientific, advanced and cool visual effects on screen. Multiple particles are emitted from a particle emitter; by calculating each particle's speed, lifetime and trajectory, and assigning luminous attributes to the trajectory, a beam effect is obtained. Common 3D software such as C4D and 3ds Max can produce simple particle effects, and more elaborate effects can be achieved with plug-ins such as Krakatoa and X-Particles. Against a dark background, the light effects touch the user's visual nerves, and the large number of particles and the diversity of their complex yet regular motion tracks reflect a cool, science-fiction visual style.

At the present stage, 3D technology in two-dimensional animation is mostly used in the construction of the animated environment. For example, in Disney's animated work "pony king" from the beginning of this century, the director used 3D technology to create the large pictures of war horses galloping on green grass, building a lifelike lawn space for the horses to move in. In "the son of a sea animal", released at the end of 2020, the scene of the heroine Anheliuhua running through the streets was also set with 3D technology. It can be seen that in animation scenes, 3D techniques are generally used to represent the changes of the scene caused by character movement. A similar example appears in "The Prince of Egypt" (see Fig. 1): some scenes in the film need to show large crowd movements, and 3D techniques are used. Unlike the other two films listed above, however, the scenes in "The Prince of Egypt" use 2D hand drawing while the crowds use the group-animation capability of 3D software. The key difference is the very wide shots used in the film: only the movement of the figures needs to be presented in the camera, while the scene itself does not change. When such large-scale crowd movement must produce multi-dimensional visual dynamics while keeping various cost factors under control, 3D technology is the best choice. The other two films focus instead on the scene changes brought about by character movement, contrasting the character's motion with the changing scene [2, 3].


Fig. 1. The distant people in the prince of Egypt are produced by 3D technology

2.2 Application in Special Effects

With the vigorous development of 3D technology, recent years have seen many cases in which 3D technology is used to reprocess the special effects of traditional two-dimensional animation. Animation differs from fantasy, magic or science-fiction live-action films: because of its own aesthetic characteristics, its pictures often contain more animated special effects. For example, in "the big adventure of digital baby", when the protagonist is in the barbecue shop, the rising smoke seen during real-life barbecuing is recreated to stay close to the real environment (see Fig. 2) [4]. Similarly, although a 3D rendering of the exhaust smoke can feel abrupt in a two-dimensional picture, the smoke and the baking pan were also produced with 3D technology, and the result looks more realistic and natural because of it [5, 6].

Colour gradients are handled in a similarly programmable way. As an example, when drawing a square of width and height 300 starting at the coordinates (100, 200), the canvas is not large enough, so a rectangle is shown. createLinearGradient is used to create a linear gradient colour; the term itself means a linear gradient, and its parameters run from the coordinates (100, 200) to (400, 500). addColorStop is then used to set the colour stops of the gradient; it takes two parameters, stop and color, where stop ranges from 0.0 to 1.0 and color is the colour at that stop. To obtain a dynamic gradient, a function is created that keeps a value in this range and increases it by 0.01 each time the condition is met, producing a gradual transition. Can such a function work on its own? No: it must be used together with requestAnimationFrame, an animation function dedicated to the canvas, which is what makes the dynamic effect visible (a sketch of this idea appears at the end of this subsection).

With the continuous improvement of people's quality of life, demands in real life are also increasing. To guarantee safety in unstable and uncertain events, virtual simulation of actions is needed to achieve the desired purpose. Unity3D, developed by Unity Technologies, is a
multi-platform development tool with comprehensive functions, which can realize many contents, such as 3D animation and motion virtual simulation. Motion virtual simulation technology is to realize real-time simulation of virtual characters and environment, and solve the problem at low cost while ensuring safety. In the context of the comprehensive and rapid development of today’s high-tech technology, human body virtual simulation technology has been rapidly expanded and widely used in various fields, with a very good development prospect. In the field of entertainment, with the rapid development of virtual simulation technology, many ecommerce malls have applied new technology games such as 3D animation, human motion capture and human motion recognition technology. For example, in 2019, Nintendo published “Fitness Ring Adventure”, which is a body-sensing game. It detects the pedal circle speed through the pressure sensor, as well as the degree of knee bending, and carries out various sports in the virtual scene, not only for leisure and entertainment, but also to help people exercise. In the field of sports, the body posture of athletes in the process of competition is a very important part of the results. It is very important whether they can reach various standard values of a posture. In the past, athletes were usually videotaped, and then experienced referees were asked to evaluate the results of the game through video, but in this way, everyone’s subjective thinking and experience were used to judge, which must be biased. In modern technology, athletes’ posture is captured, collected and processed by computers, and finally compared with an ideal posture data to obtain a data with small error to determine the result of the game, which greatly improves the fairness of the game.
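The gradient-animation idea described above is stated in terms of the HTML canvas API. Since the other code discussion in this document is Java-based, the following is a rough Java2D analogue, an illustrative assumption rather than the author's code: a LinearGradientPaint from (100, 200) to (400, 500) has a middle colour stop whose offset is advanced by 0.01 on every timer tick, with the Swing Timer playing the role of requestAnimationFrame.

import java.awt.*;
import javax.swing.*;

public class AnimatedGradientPanel extends JPanel {
    private float stop = 0.01f;   // moving colour-stop offset, kept strictly inside (0, 1)

    AnimatedGradientPanel() {
        // roughly 60 updates per second; each tick advances the stop and repaints
        new Timer(16, e -> {
            stop += 0.01f;
            if (stop >= 0.99f) stop = 0.01f;
            repaint();
        }).start();
    }

    @Override protected void paintComponent(Graphics g) {
        super.paintComponent(g);
        Graphics2D g2 = (Graphics2D) g;
        float[] fractions = { 0f, stop, 1f };                       // moving middle stop
        Color[] colors = { Color.YELLOW, Color.RED, Color.YELLOW }; // placeholder colours
        g2.setPaint(new LinearGradientPaint(100, 200, 400, 500, fractions, colors));
        g2.fillRect(100, 200, 300, 300);                            // the 300 x 300 square at (100, 200)
    }

    public static void main(String[] args) {
        JFrame f = new JFrame("gradient sketch");
        f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        f.add(new AnimatedGradientPanel());
        f.setSize(600, 600);
        f.setVisible(true);
    }
}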

Fig. 2. The barbecue device and gas in digital baby adventure are made by 3D technology


2.3 Application in Color

In the creation of animation and film and television works, colour is correlated with textual symbols and can also be organically integrated with pictures, music and so on [7]. Colour is a "code" in film and television animation that can give full play to a special expressive role in conveying meaning. Ideographic expression is an important characteristic of colour and can condense the film-and-television language; this ideographic and associative function is endowed by everyday life experience, which has a crucial influence on how colour is read. When appreciating film and animation works, people already understand colour expression through their daily experience and form various wishes and ideas in the process of colour transformation. Colour can also hint at the story [8]. Similar to exposition, the narrative effect of colour is usually expressed through large areas of colour; viewers can anticipate the turning points and changes of the story through the overall colour changes of the animation. A large amount of emotional information is conveyed through colour, and by changing the brightness and warmth of colour, designers can keep the changes of the story from feeling sudden, while enriching the story and helping viewers follow the plot, so that the logic of the whole story develops more coherently.

This is well reflected in "the return of the great sage". At the beginning of the film, the Great Sage squats on a rock talking with the heavenly soldiers and generals; behind him the turquoise background is mixed with a little haze, filled with a sense of gloom and depression (see Fig. 3) [9, 10]. Meanwhile, snowflakes drift past him, adding a touch of coldness. As the heavenly soldiers and generals enter the frame, a bright, warm orange comes into view within the grey tones. Although the green hue is cold, it is still more lively and hopeful than black, deep purple, deep blue or red; the use of these colours also conveys that although the Great Sage is not a truly evil figure, neither is he doomed to be an eternal calamity [11]. A similar scene occurs later in the film: after a relaxed journey with Jiang Liuer and the others, the Great Sage is about to confront the mountain demons' inn in the wilderness. The designer created the inn scene from the Great Sage's perspective. The most striking feature of the overall picture is Jiang Liuer turning back to greet the Great Sage with a smile, while behind him lie the inn in the centre and, in the distance, the Cangshan mountains and the sunset. The afterglow of the sunset colours Jiang Liuer's cheeks and divides the distant scene into two parts: the left half is mainly red and yellow, while the right half gradually turns blue and purple, cold and grey, implying that a crisis is coming and making the harmonious, warm atmosphere all the more tense [12, 13]. The shape detection relationship is shown in Table 1.


Fig. 3. Stills of the return of the great sage

Table 1. Shape Detection Relationship

Shape               Index F    Particle shape
Circular            1          1:2.2
Square              1.27       2.228
Regular hexagon     1.103      1.52

3 3D Scene Data Organization

3.1 Scene Model Selection

This section addresses how to organize 3D scene data efficiently, as well as the data-representation methods and related algorithms for 3D objects in indoor scenes. Maintaining a well-organized scene not only allows efficient storage and use of the data, but also, through fast traversal, facilitates the efficient implementation of real-time, highly realistic rendering algorithms [14]. Most game engines provide a concrete implementation of a scene graph. The scene graph organizes many nodes in
a graph or tree structure. A node can contain multiple child nodes. The nodes do not themselves store the geometric model data: the actual geometric models, cameras or lights are attached to each node by reference, and the node holds node status, node type and geometric-transformation information; the geometric transformation changes the display position, size and shape of the actual geometric model in the scene. As shown in Fig. 4, this data-organization mode favours the reuse of model data and saves storage space. If the same wall appears many times in the scene at different positions, only one copy of the geometric data is actually stored; it is referenced many times and linked to different nodes. Deleting a node does not delete the real model object: only a model whose reference count has dropped to zero is actually removed. The way scene data are organized determines not only storage efficiency but also memory management, visibility culling, level-of-detail calculation and collision detection [15].

Fig. 4. 3D scene data diagram
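A minimal sketch of the scene-graph organization described above, with hypothetical names not taken from any particular engine: nodes hold a transform and child references, while geometry lives in shared, reference-counted objects, so the same wall mesh can be linked to many nodes without being duplicated.

import java.util.ArrayList;
import java.util.List;

class Geometry {
    final String meshName;        // e.g. "wall": stored once, shared by reference
    int refCount = 0;
    Geometry(String meshName) { this.meshName = meshName; }
}

class SceneNode {
    final String name;
    final float[] transform;                      // position/size/shape of this instance
    Geometry geometry;                            // reference to shared geometry (may be null)
    final List<SceneNode> children = new ArrayList<>();

    SceneNode(String name, float[] transform) {
        this.name = name;
        this.transform = transform;
    }

    void attachGeometry(Geometry g) { geometry = g; g.refCount++; }

    void addChild(SceneNode child) { children.add(child); }

    // removing a node only decrements the reference count; the shared mesh becomes
    // a candidate for real deletion only when no node references it any more
    void detachGeometry() {
        if (geometry != null && --geometry.refCount == 0) {
            System.out.println(geometry.meshName + " can now be deleted");
        }
        geometry = null;
    }
}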

The fill routine takes the handle of the window (drawing surface) to be filled and an array of vertex structures that stores the vertex positions and colour information of the triangles or rectangles to be drawn. A vertex-count field gives the number of vertices after the figure has been divided into triangles or rectangles, and a mesh array records the vertex order of the triangles or rectangles that make up the figure: in triangle-fill mode it is filled with the divided triangles, and in rectangle-fill mode with the divided rectangles; a dedicated structure represents each triangle fill. Finally, a count field gives the number of rectangles or triangles, and a mode field selects one of three fill modes. After compiling and running, an oblique ellipse with a yellow background and a red background appears in the client area, as shown in Fig. 5.

Fig. 5. Compilation and running background

3.2 Color Model Selection

Colour modes and colour-band (ribbon) design are basic knowledge for terrain colour gradients and rendering. In computer graphics and image processing there are mainly the RGB and HSL colour modes. RGB is one of the main colour models supported by OpenGL. However, the three components of the RGB mode do not reflect well how people visually perceive colour, and cannot easily express the perceived variation of colour characteristics such as hue, saturation and
brightness. To describe colour bands whose colours change continuously in spectral order, such as red, orange, yellow, green, cyan, blue and purple, RGB mode requires the R, G and B components to change in turn according to a very complex law, whereas HSL mode only requires a linear change of the H component. Multi-colour gradients therefore have to use colour bands defined in the HSL model. Because OpenGL does not support the HSL colour mode, using HSL for multi-colour terrain gradients requires interactive conversion between HSL and RGB.

(1) RGB to HSL conversion. Let r, g and b be the red, green and blue components of a colour in RGB space, with r, g, b ∈ [0, 1]; let h, s and l be the hue, saturation and lightness of the colour in HSL space, with h ∈ [0, 360) and s, l ∈ [0, 1]. Taking max and min as the maximum and minimum of r, g and b, the conversion from (r, g, b) to (h, s, l) gives, for the hue,

h = 60 · (g − b)/(max − min) + 0,    if max = r and g ≥ b
h = 60 · (g − b)/(max − min) + 360,  if max = r and g < b    (1)

In the reverse direction, the computed tr, tg and tb represent the red, green and blue components of the colour in RGB space; note that tr, tg and tb all lie in the range 0 to 1. In colour-gradient rendering, eye-catching colour bands are usually chosen, so in HSL mode the saturation is usually 1.0 and the lightness 0.5. If the colour ring "red, orange, yellow, green, cyan, blue, purple" is divided into six sections, the proportion of each colour component within a section of the ring changes monotonically (increasing or decreasing) in RGB mode.
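A minimal sketch of the HSL-to-RGB switch this section calls for, an illustrative assumption rather than the paper's code: a colour defined in HSL (for example saturation 1.0, lightness 0.5, hue varying linearly along the band) is converted so that it can be handed to an RGB-only API such as OpenGL. Here h is in [0, 360), s and l are in [0, 1], and the returned components are in [0, 1].

public final class HslToRgb {
    public static float[] convert(float h, float s, float l) {
        float c = (1 - Math.abs(2 * l - 1)) * s;              // chroma
        float x = c * (1 - Math.abs((h / 60f) % 2 - 1));      // second-largest component
        float m = l - c / 2;                                  // lightness offset
        float r, g, b;
        if      (h <  60) { r = c; g = x; b = 0; }
        else if (h < 120) { r = x; g = c; b = 0; }
        else if (h < 180) { r = 0; g = c; b = x; }
        else if (h < 240) { r = 0; g = x; b = c; }
        else if (h < 300) { r = x; g = 0; b = c; }
        else              { r = c; g = 0; b = x; }
        return new float[] { r + m, g + m, b + m };           // tr, tg, tb in [0, 1]
    }

    public static void main(String[] args) {
        float[] rgb = convert(120f, 1.0f, 0.5f);              // pure green on the colour ring
        System.out.printf("tr=%.2f tg=%.2f tb=%.2f%n", rgb[0], rgb[1], rgb[2]);
    }
}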

4 Conclusion

The result of colour-gradient rendering depends mainly on the size of the animation grid cells, the ribbon (colour-band) settings, and the interpolation mapping between colour and distance. Based on OpenGL's smooth colour transitions and the characteristics of the RGB and HSL colour modes, two-colour gradients can use linear interpolation in RGB mode and multi-colour gradients can use linear interpolation in HSL mode, so that colour-gradient rendering maintains smooth transitions between colours at any resolution, achieving a smoother terrain colour-gradient effect and greatly strengthening the terrain colour rendering of the 3D system.

Acknowledgments. This work was financially supported by Jiangsu Province Education Science "13th Five-Year plan" Key funding project (Grant No. B-a2018/03/12): "Empirical and Theoretical Research on achievement Sharing of Vocational Education Community in Mixed Ownership Mode -- A Case study of Characteristics of Taobao Film and Television College of Wuxi City Vocational and Technical College" fund.



Cross-Modal Retrieval Based on Deep Hashing in the Context of Data Space

Xiwen Cui1,2(B), Dongxiao Niu1,2, and Jiaqi Feng1

1 School of Economics and Management, North China Electric Power University, Beijing 102206, China
[email protected]
2 Beijing Key Laboratory of New Energy and Low-Carbon Development, North China Electric Power University, Beijing 102206, China

Abstract. The advent of the Big Data era has led to the heterogeneity of data from multiple sources, and traditional database management techniques are overstretched in the face of the increasing complexity and variability of data. As a result, the concept of data spaces has been developed. The multiple and heterogeneous nature of data in the current context makes it necessary to provide a variety of query methods in the data space. Heterogeneous data contains various types of data structures, while traditional information retrieval mainly targets text documents to establish indexing relationships, so it cannot provide queries that meet the needs of heterogeneous data from multiple sources. Therefore, this paper compares the current advanced algorithms used in cross-modal retrieval based on the data space to further understand cross-modal retrieval.

Keywords: Data space · Cross-modal retrieval · Deep learning · Deep hashing

1 Introduction

Most of the networks that exist in the web today are heterogeneous information networks containing comprehensive structural information and rich semantic information [1, 2]. As the volume and types of data increase and become more complex, higher demands are placed on data management. On this basis, data space, a data management technology, has emerged. Data space is a new, subject-oriented data management concept that differs from traditional data management. Similar to traditional data management technology, data space management also involves research on technologies such as data models, data integration, querying and indexing, and it can provide users with more convenient data management services [3]. The heterogeneity of data in the data space requires it to differ from traditional data management techniques and to provide query services suitable for heterogeneous data from multiple sources. The diversity of queries also requires retrieval methods that are highly flexible and adaptable. The general query methods are information retrieval, structured data query and XML query [4–6], of which information retrieval methods are the most widely used in the field. However, traditional information retrieval only builds inverted indexes for individual text documents and cannot establish associations between heterogeneous data. Therefore, this paper investigates cross-modal retrieval, which can provide more flexible query methods based on the characteristics of heterogeneous data in the data space, in order to assist future work in the data space query stage.

2 Related Theoretical Concepts

(1) Data space
The concept of the data space was introduced by Franklin et al. in 2005 [7]. A data space is organised around the associations between data and users and between data and business processes. Its data set is the collection of all controllable data related to the subject, including both objects and the relationships between objects. The data in the data space changes dynamically according to the subject's needs, while data that no longer meets the requirements is eliminated. Data spaces are a relatively new area of research, and there is an increasing amount of research into their associated technologies [8, 9]. Subjects, datasets and services are the three elements of a data space. The subject is the owner of the data space, which can be an individual or a group. The dataset is the collection of all controllable data associated with the subject. The subject manages the data space through services, with functions such as classification, query and indexing. Data spaces can be studied in terms of search engines, infrastructure services, data security, data modelling and spatial integration. As an extension of the linked database, the use of the data space also includes functions such as data integration, storage, processing, analysis and presentation, but the large number of data sources involved and the variety of data formats make the management of the data space much more complex. Some of the more representative systems are iMeMex, SEMEX and MyLifeBits [10–12]. There are a number of fundamental differences between data spaces and traditional database systems [13]. A brief summary of the three characteristics of the data space is given in Table 1.

Table 1. Data space features

Feature      Content
Feature 1    Relevance
Feature 2    Monitoring
Feature 3    Securing


1) Subject relevance
Unlike traditional data storage methods, the data space actually refers to the subject data space. As the needs of the subject change, data items are continuously incorporated from the public data space into the subject data space. The data in it are all relevant to the subject, and there is a relevance threshold such that data are included in the subject data space only when their relevance exceeds a certain level. Once the relevance falls, the data are removed from the subject data space.

2) Real-time monitoring
Due to its dynamic nature, the data in the data space needs to be monitored in real time. There is a large amount of distributed, heterogeneous data in the data space, which changes over time and space. At the same time, the needs of the data space itself change, so the data space needs a very good monitoring system to track these changes in real time.

3) Security
The data organisation based on the underlying architecture of the data space uses fine-grained hierarchical access control technology and fine-grained hierarchical protection security algorithms. Compared with the coarse-grained storage approach in traditional databases, the security management techniques in the data space are more reliable.

(2) Cross-modal retrieval
Information Retrieval (IR) is one of the most important methods of querying databases. Its objective is to fulfil the needs of the target user and provide relevant content. The general process of cross-modal retrieval is shown in Table 2. The specific steps are as follows:
1. Process the data of the different modalities.
2. Perform feature extraction.
3. Fuse and match the acquired features.
4. Obtain the retrieval results through the representation model and related algorithms, and rank them.
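As a rough illustration of the matching and ranking steps for the hash-based methods discussed later in this paper, the following Python sketch ranks database items by Hamming distance to a query code; the codes here are random and purely illustrative.

```python
import numpy as np

def hamming_rank(query_code: np.ndarray, db_codes: np.ndarray) -> np.ndarray:
    """Rank database items by Hamming distance to the query hash code (codes are +/-1 vectors)."""
    bits = query_code.shape[0]
    # For +/-1 codes, Hamming distance = (bits - dot product) / 2
    dists = (bits - db_codes @ query_code) / 2
    return np.argsort(dists)                     # database indices, nearest first

query = np.sign(np.random.randn(64))             # illustrative 64-bit query code
database = np.sign(np.random.randn(1000, 64))    # illustrative database codes
print(hamming_rank(query, database)[:10])        # top-10 retrieval results
```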

3 Cross-Modal Retrieval Methods Based on Deep Hashing

Cross-modal retrieval algorithms based on deep learning consist of two main types. The first type is a deep learning optimisation algorithm. The second type combines hashing algorithms with deep learning algorithms, where the strengths of both complement each other to optimise learning. It is another form of development of deep learning in the field of cross-modal retrieval.


Table 2. Cross-modal retrieval process

Step     Content
Step 1   Processing of data with different modalities
Step 2   Extraction of individual modal features
Step 3   Feature fusion
Step 4   Matching features, calculating results
Step 5   Sorting the results
Step 6   Output of search results in sorted order

Semantic hashing methods were the first deep learning-based hashing algorithms [14]. However, in early deep hashing algorithms, feature learning and hash function learning were performed separately, until the advent of convolutional neural networks. Inspired by CNNs, a paper published by Pan Yan's research group in collaboration with Yan Shuicheng proposed a method called Convolutional Neural Network Hashing [15]. Since then, deep hashing algorithms have developed rapidly and in many directions. As there are many kinds of deep hashing algorithms, this paper classifies deep hashing methods into unsupervised, semi-supervised and supervised deep hashing according to whether label information is used, and introduces some related algorithms proposed in recent years, in order to help solve problems related to data management in data spaces. The classification diagram is shown in Fig. 1.

Fig. 1. Classification diagram

3.1 Unsupervised Deep Hashing

Hash code learning is a discrete optimisation problem with constraints. Most hashing methods therefore relax the problem into a continuous one; in early methods, the features used for hash code learning were hand-crafted [16–18].
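As a rough, generic illustration of this relaxation (not the specific method of any paper cited here), a hash layer can output continuous codes through tanh during training and binarise them with sign only when discrete codes are needed for retrieval; the dimensions in the following PyTorch sketch are illustrative.

```python
import torch
import torch.nn as nn

class HashHead(nn.Module):
    """Continuous relaxation of hash code learning: tanh codes for training, sign codes for retrieval."""
    def __init__(self, feat_dim: int = 512, code_bits: int = 64):
        super().__init__()
        self.fc = nn.Linear(feat_dim, code_bits)

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return torch.tanh(self.fc(features))       # relaxed, differentiable codes in (-1, 1)

    @torch.no_grad()
    def binary_codes(self, features: torch.Tensor) -> torch.Tensor:
        return torch.sign(self.forward(features))  # discrete +/-1 codes used at retrieval time

head = HashHead()
codes = head.binary_codes(torch.randn(4, 512))     # 4 samples -> 4 x 64 binary codes
```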


However, manually extracted features may not be well suited to hash code learning. Based on this, Jiang et al. proposed Deep Cross-Modal Hashing (DCMH) for cross-modal retrieval applications [19]. Since DCMH first combined hashing with deep learning and proved its feasibility, many cross-modal research efforts based on deep hashing have been carried out. Unsupervised hashing allows samples to be unlabelled and is applicable to large-scale searches over unlabelled images [20]. The domain adaptation problem refers to the fact that the training and test data have different distributions [21]. Venkateswara et al. proposed domain adaptive hashing (DAH) networks to learn information-rich hash codes to address the problem of unsupervised domain adaptation [22]. The model performs classification by learning the information hash codes of the source and target domains. The authors recommend using a deep network to output binary hash codes for classification, because the hash codes can be used to construct a dedicated loss function for data in the unlabelled target domain. Also, during prediction, the model can compare the hash of the test samples with the hashes of the training samples to obtain a more robust category prediction. The domain adaptive hashing (DAH) network is shown in Fig. 2.

Fig. 2. DAH structure diagram [22]

Matrix decomposition-based cross-modal hashing has been highly successful, but it does not explore the inter- and intra-modal neighbourhood structure present in the original data, and it discards the discrete constraints when learning the hash function, imposing a relaxation strategy that produces suboptimal solutions [23]. To address these issues, Wu et al. proposed unsupervised deep cross-modal hashing (UDCMH) [24], the first work to implement matrix decomposition-based cross-modal hashing in an unsupervised deep learning framework; it performs unsupervised deep hashing for large-scale cross-modal retrieval via a binary latent factor model. The original model structure is shown in Fig. 3.


Fig. 3. UDCMH structure diagram [24]

It can be seen that current unsupervised deep hashing methods rely heavily on the features extracted by deep learning. At the same time, unsupervised deep hash learning methods have difficulty obtaining high-quality hash codes due to the lack of similarity supervision information. How to improve the quality of hash encoding is an important area for future research. Each model has its own merits and is worthy of consideration by researchers.

3.2 Semi-supervised Deep Hashing

Unsupervised methods are clearly better placed than supervised learning to exploit large amounts of unlabelled data, as acquiring labelled data can be expensive and labour-intensive, while supervised methods usually achieve excellent results. The semi-supervised approach is therefore a compromise, drawing on the strengths of each side to obtain strong learning ability and good results, and the number of semi-supervised models is currently growing [25]. Semi-supervised training uses a small amount of labelled data together with a large amount of unlabelled data, which gives better results [26]. Based on this, Wang et al. proposed a semi-supervised discrete hashing method that combines labelled and unlabelled data, using the unlabelled data to facilitate learning [27]. The structure of the model is shown in Fig. 4. Semi-supervised cross-modal deep hashing is still a relatively under-researched area and will be a worthwhile direction for future research and discussion.


Fig. 4. Semi-supervised discrete hashing framework diagram [27]

3.3 Supervised Deep Hashing

Supervised models achieve better performance by learning the relationship between label information and semantics, but how to reduce the modal gap to improve accuracy remains a key bottleneck. A self-supervised adversarial hashing model was proposed by Li et al., and the results show that the use of adversarial networks yields good performance [28]. The model takes the multi-label problem into account and is able to use its own labels as supervisory information. Using two adversarial networks simultaneously to compare the data before and after conversion to hash codes reduces the loss of high-dimensional information.

There are differences between the features of different modalities, which makes it difficult for cross-modal hashing to match non-corresponding features when completing retrieval. Based on this, Peng et al. proposed a dual-supervised attention network for deep cross-modal hashing (DSADH) based on a semantic prediction loss [29]. The method applies cross-modal attention blocks to efficiently encode rich and relevant features and learn compact hash codes. The structure of the model is shown in Fig. 5. The model uses cross-modal attention blocks to map image and text features into the same embedding space; the attention block focuses on relevant features and ignores irrelevant ones. The cross-modal attention block of the model learns mask weights from the cross-modal features. Assume that the outputs of the network for image samples and text samples are pi and qi, respectively. The mask weights of the image data Mi(x) and the text data Mj(x), which have the same size, are formulated as:

Mi(x) = fmask(pi), Mj(x) = fmask(qi)   (1)


Fig. 5. DSADH structure diagram [29]

The model formulates the outputs of the image activation and text activation using the cross-modal attention blocks, as shown in the following equations:

Hi(x) = (1 + Mi(x)) · pi   (2)
Hj(x) = (1 + Mj(x)) · qi   (3)

where Hi(x) and Hj(x) are the activation outputs of the cross-modal attention layer for the image and text data, respectively. Finally, Hi(x) and Hj(x) are fed to a fully connected layer to generate the final representations of the image and text instances.
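The following PyTorch-style sketch mirrors Eqs. (1)–(3); the choice of a single shared linear-plus-sigmoid layer as fmask, and the feature dimension, are assumptions made for illustration rather than details taken from the DSADH paper.

```python
import torch
import torch.nn as nn

class CrossModalAttention(nn.Module):
    """Sketch of Eqs. (1)-(3): mask M(x) = f_mask(x), activation H(x) = (1 + M(x)) * x."""
    def __init__(self, dim: int):
        super().__init__()
        # Assumed form of f_mask: a learned layer producing mask weights in (0, 1)
        self.f_mask = nn.Sequential(nn.Linear(dim, dim), nn.Sigmoid())

    def forward(self, p_i: torch.Tensor, q_i: torch.Tensor):
        m_i = self.f_mask(p_i)       # Eq. (1): image mask weights
        m_j = self.f_mask(q_i)       # Eq. (1): text mask weights
        h_i = (1.0 + m_i) * p_i      # Eq. (2): image activation output
        h_j = (1.0 + m_j) * q_i      # Eq. (3): text activation output
        return h_i, h_j

block = CrossModalAttention(dim=512)
h_img, h_txt = block(torch.randn(8, 512), torch.randn(8, 512))
```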

4 Conclusion

Based on the retrieval functions required in the data space, this paper surveys the advanced algorithms currently used in cross-modal retrieval, grouped into unsupervised deep hashing, semi-supervised deep hashing and supervised deep hashing, and concludes that all three kinds of model have merits and that data space retrieval can draw on and integrate these merits according to specific needs.

Acknowledgements. This work was supported by the National Key R&D Program of China (grant number 2020YFB1707801).

References

1. Molaei, S., Farahbakhsh, R., Salehi, M., Crespi, N.: Identifying influential nodes in heterogeneous networks. Expert Syst. Appl. 160, 113580 (2020)


2. Chairatanakul, N., Liu, X., Murata, T.: Pgra: projected graph relation-feature attention network for heterogeneous information network embedding. Inf. Sci. 570, 769–794 (2021) 3. Möller, J., Jankowski., Hahn, A.: Towards an architecture to support data access in research data spaces. In: 2021 IEEE 22nd International Conference on Information Reuse and Integration for Data Science (IRI), pp. 310–317 (2021) 4. Taan, A.A., Khan, S., Raza, A., Hanif, A., Anwar, H.: Comparative analysis of information retrieval models on Quran dataset in cross-language information retrieval systems. IEEE Access 9, 169056–169067 (2021) 5. Afrati, F., Damigos, M.G., Stasinopoulos, N.: SQL-like query language and referential constraints on tree-structured data. In: 25th International Database Engineering & Applications Symposium (IDEAS 2021). Association for Computing Machinery, New York, NY, USA, pp. 1–10 (2021). 6. Subramaniam, S, Haw, S.C., Soon, L.K.: Improved centralized xml query processing using distributed query workloadt. IEEE Access 9, 29127–29142 (2021) 7. Franklin, M., Halevy, A., Maier, D.: From databases to dataspaces: a new abstraction for information management. Sigmod Record 34(4), 27–33 (2005) 8. Beverungen, D., Hess, T., Kster, A., Lehrer, C.: From private digital platforms to public data spaces: implications for the digital transformation. Electron. Mark. 32, 493-501 (2022) 9. Agostinetti, N.P., Kotsi, M., Malcolm, A.: Exploration of data space through trans-dimensional sampling: a case study of 4D seismics. J. Geophys. Res.-Solid Earth 126(12), e2021JB022343 (2022) 10. Dittrich, J.P., Salles, M., Kossmann, D., Blunschi, L.: iMeMex: escapes from the personal information jungle. In: International Conference on Very Large Data Bases. VLDB Endowment, pp. 1306–1309 (2005) 11. Dong, X., Halevy, A.: A platform for personal information management and integration. In: Ancient Greek philosophy, pp.119–130 (2005) 12. Gemmell, J., Bell, G., Lueder, R., Drucker, S., Wong, C.: Mylifebits: fulfilling the memex vision. In: Acm Multimedia System J. pp.235–238 (2002) 13. Curry, E.: Dataspaces: fundamentals principles and techniques. In: Real-time Linked Dataspaces: Enabling Data Ecosystems for Intelligent Systems Cham, pp. 45–62 (2020). https:// doi.org/10.1007/978-3-030-29665-0_3 14. Salakhutdinov, R., Hinton, G.: Semantic hashing. Int. J. Approximate Reasoning 50(7), 969– 978 (2009) 15. Xia, R., Pan, Y., Lai, H., Liu, C., Yan, S.: Supervised hashing for image retrieval via image representation learning. Proc. Natl. Conf. Artif. Intell. 3, 2156–2162 (2014) 16. Fayadh, A., Saban, ¸ Ö., Ammar, A., Polat, K.: An effective hashing method using W-Shaped contrastive loss for imbalanced datasets. Expert Syst. Appl. 204, 117612 (2022) 17. Kumar, S., Udupa, R.: Learning hash functions for cross-view similarity searchBronstein. In: Twenty-Second International Joint Conference on Artificial Intelligence, pp. 1360–1365 (2011) 18. Bronstein, M., Bronstein, A., Michel, F., Paragios N.: Data fusion through cross-modality metric learning using similarity-sensitive hashing. In: 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 3594–3601 (2010) 19. Jiang, Q.Y., Li, W.J.: Deep cross-modal hashing. In: IEEE Conference on Computer Vision & Pattern Recognition, pp. 3270–3278 (2017) 20. Gattupalli, V., Zhuo, Y., Li, B.: Weakly supervised deep image hashing through tag embeddings. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 10375–10384 (2019) 21. 
Ganin, Y., Ustinova, E., Ajakan, H., et al.: Domain-adversarial training of neural networks. J. Mach. Learn. Res. 17(1), 2096–2030 (2015)


22. Venkateswara, H., Eusebio, J., Chakraborty, S., Panchanathan, S.: Deep hashing network for unsupervised domain adaptation. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 5385–5394 (2017) 23. Wang, D., Wang, Q., Gao, X.: Robust and flexible discrete hashing for cross-modal similarity search. IEEE Trans. Circ. Syst. Video Technol. 28(10), 2703–2715 (2017) 24. Wu, G., et al.: Unsupervised deep hashing via binary latent factor models for large-scale crossmodal retrieval. In: Twenty-Seventh International Joint Conference on Artificial Intelligence, pp.2854–2860 (2018) 25. Bhunia, A. K., Chowdhury, P.N., Sain, A., Yang, Y., Xiang, T., Song, Y.Z.: More photos are all you need: semi-supervised learning for fine-grained sketch based image retrieval. In: 2021 IEEE/CVF Conference On Computer Vision and Pattern Recognition, CVPR 2021, pp. 4245–4254 (2021) 26. Vishnu, B., David, C., Ewan, B., Michael, K., Rahul, S., Matthew, B.: Pulsar candidate identification using semi-supervised generative adversarial networks. In: Monthly Notices of the Royal Astronomical Society, vol. 505, no. 1, pp. 1180–1194 (2021) 27. Wang, X., Liu, X., Hu, Z., Wang, N., Du, J.X.: Semi-supervised discrete hashing for efficient cross-modal retrieval. Multimedia Tools Appl. 79, 25335–25356 (2020) 28. Li, C., Deng, C., Li, N., Liu, W., Gao, X., Tao, D.: Self-supervised adversarial hashing networks for cross-modal retrieval. In: 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, p. 18311749 (2018) 29. Peng, H., He, J., Chen, S., Wang, Y., Qiao, Y.: Dual-supervised attention network for deep cross-modal hashing. Pattern Recogn. Lett. 128, 333–339 (2019)

Monitoring Algorithm of Digital Power Grid Field Operation Target Based on Mixed Reality Technology

Peng Yu1, Shaohua Song1, Xiangdong Jiang1, Hailin Gong1, Ying Su2(B), and Bin Wu2

1 Benxi Power Supply Company, Benxi 117200, Liaoning, China
2 College of General Education, Shenyang City University, Shenyang 110112, Liaoning, China

[email protected]

Abstract. With the progress of society and the transformation of social contradictions, people's desire for a better life is growing, and in social construction and daily life the demand for electricity is also increasing. The digitization of the power grid connects the national grid closely and makes the power supply more stable and reliable. The purpose of this paper is to study the target monitoring algorithm for digital power grid field operation based on mixed reality technology. Firstly, the composition of virtual reality technology and the combination of three-dimensional GIS technology with the smart grid are studied. Secondly, the design of the interaction technology in the digital power grid field operation target monitoring system is introduced, including speech recognition and gesture recognition. Finally, the key algorithm of this paper is introduced. A target monitoring experiment in three-dimensional space is carried out based on visual target monitoring technology, which shows that the algorithm can be applied to real experimental scenes with a good recognition rate and real-time performance.

Keywords: Mixed Reality Technology · Digital Power Grid · Field Operation · Target Monitoring

1 Introduction

Once an accident occurs in the power grid, it has a major impact on production and daily life at all levels of society, and can even lead to social paralysis or chaos. From the second industrial revolution to today, the world has become more and more dependent on electricity, which is closely related to production and human life [1, 2]. Electricity is highly valued by governments and people of all countries, and how to manage power supply network field operations well has become a focus of the power industry [3]. Mixed reality technology provides an important path for digital grid field operation target monitoring [4, 5].

There is a large amount of algorithmic research on target detection. Some scholars draw inspiration from work on moving target defense (MTD) and consider a dynamic monitoring strategy that makes it difficult for attackers to suppress the behavioral signals that indicate HVT status, so that they can be identified. They formulate the problem of finding differential immune configuration sets for MTD in the context of power grids and then propose an algorithm for computing them. To find the optimal movement strategy, they model the MTD as a two-player game and consider the Stackelberg strategy; the effectiveness and scalability of the proposed method are demonstrated with the help of IEEE test cases [6]. Other scholars have proposed a DC sub-grid for electrical regulation in a digital grid system. The sub-grid consists of positive and negative DC voltage lines, which are respectively connected to a pair of inverters in the digital grid router, and an adaptive hysteresis current control technique is applied to keep the sub-grid line voltage constant while adapting to the given power [7]. Therefore, it is of practical significance to study the target monitoring algorithm for digital power grid field operation based on mixed reality technology.

In this paper, VRGIS is used to build a 3D panoramic digital model of stations and lines, and a digital scene with a unified time scale, including people, equipment and the environment with measured 3D positions, is constructed. It can automatically locate, track and monitor inspection personnel in real time and transmit event messages promptly, so that the current status of on-site operations can be understood digitally and comprehensively, the working status of on-duty personnel can be monitored accurately, the operation of equipment can be monitored, and real-time lean safety management can be truly realized.

2 Research on Monitoring Algorithm of Digital Power Grid Field Operation Target Based on Mixed Reality Technology

2.1 Hybrid Reality Technology

(1) Virtual and real fusion technology
The core task of virtual-real fusion technology is to load virtual digital information into the real scene in a way that is consistent and realistic, and to adjust the illumination of the fused world so that both achieve the same realistic effect and give the user a genuine feeling of reality. HoloLens 2 is a head-worn mixed reality device. Using the holographic technology developed by Microsoft, combined with the multiple sensors on the headset, it projects the mixed reality world in which virtual objects are superimposed on real scenes into the user's eyes and completes the mixed reality experience. The flow chart of virtual-real fusion technology is shown in Fig. 1.

(2) Real-time interaction technology
Mixed reality systems not only need to integrate virtual objects with the real world, but also need to realize direct interaction between users and virtual objects in order to provide a better user experience. In the past, people generally used the keyboard, mouse and other common handheld tools to interact with computers. With the development of hardware devices and the improvement of mixed reality technology, related MR devices have begun to offer more direct interaction methods, such as human posture, voice and gestures. By defining human postures or gestures, the mixed reality system triggers the corresponding events when it recognizes the corresponding action instructions, thus realizing natural human-computer interaction. Next, we introduce gesture interaction, the most important interaction technology in human-computer interaction, as an example.

Fig. 1. Virtual reality fusion technology (components: virtual scene, scene collection, virtual model rendering, real-world information, virtual-real integration module, displaying devices)

Before gesture interaction can take place, a gesture recognition database is established in advance. When the computer recognizes a hand gesture through the camera, it retrieves and compares it against the database; if it matches a preset gesture, it is converted into commands that the computer can recognize and triggers the corresponding recognition event to complete the human-computer interaction. Gesture capture methods are mainly divided into two kinds. The first is sensor-based gesture acquisition, which obtains the gesture direction, position and other information recorded by the sensors in a data glove worn by the user; however, gloves and the supporting hardware make the whole system bulky and inconvenient to use in natural settings. The second is based on machine vision. With the development of computer vision, gesture detection and tracking can be realized well using image processing methods and deep learning, and vision-based gesture recognition requires no additional hardware, making it low-cost and natural.

2.2 The Composition of Virtual Reality Technology

The power supply operator provides specific input signals to the computer through input systems such as power codes. After the VRGIS software receives the input signal, it interprets it, updates the VRGIS database accordingly, adjusts the VRGIS view stream and converts the current VRGIS view, so that the user can see the actual results in time. The system consists of an input part, an output part, a virtual environment database and the virtual reality software [1, 8].

2.3 Combining 3D GIS Technology with Smart Grid

The application of virtual reality technology in the graphic management system of the smart grid GIS platform, as a high-tech achievement, has high technical content and strong application value in meeting market demand. The development of VRGIS will therefore promote 3D GIS visualization of the power grid and is an indispensable direction for the development of future smart grid systems [9, 10]. The core technology of the GIS platform graphics management system is the Skyline software, an excellent 3D digital construction package. Skyline not only has powerful 3D digital imaging technology, but can also load 2D and 3D power equipment data such as remote sensing images, scanned ground data and digital elevation data to create a realistic 3D power grid scene, and it is currently the software of choice for creating large-scale realistic 3D digital scenes [11]. The unique feature of Skyline is the rapid integration of distributed and disparate real-time data sources, without preprocessing, to create real-time 3D interactive scenes [12].

2.4 On-Site Operation Target Monitoring of Power Grid

In this paper, the background difference method is used to detect the power grid field operation target. After the background difference image is obtained, the maximum inter-class variance method is used to obtain a threshold with which to binarize it, and the opening operation is then used to filter out small noise points in the binarized image [13]. For the noise points that cannot be filtered out by the opening operation, the connected area marking method is used, and finally the area and size of the moving target are obtained. The background difference formula is:

d(x, y) = |f(x, y) − b(x, y)|   (1)

In the formula, d(x, y) is the pixel gray value at point (x, y) of the image after the difference, f(x, y) is the pixel gray value at point (x, y) of the current frame, and b(x, y) is the pixel gray value at point (x, y) of the background image. In order to distinguish the foreground area from the background area in the difference image, an appropriate threshold must be selected to binarize it. In this paper, the maximum inter-class variance method is used to solve for the threshold of the difference image [14, 15]. The maximum inter-class variance method is a method for obtaining thresholds in image segmentation, and it is introduced here to obtain the binarization threshold of the background difference image [16]. In this algorithm, a row of continuous foreground pixels in the binary image is called a target segment. The image is scanned from top to bottom and from left to right, and connected areas are judged using the 8-neighborhood connectivity criterion. The rule for deciding whether a target segment overlaps a target segment in the line above or below is defined as follows: the current target segment is represented as (X1s, X1e) and the target segment in the adjacent line as (X2s, X2e), where Xs is the column coordinate of the starting pixel of a target segment and Xe is the column coordinate of its end pixel. The overlap criterion for the two target segments is then:

X1s − 1 ≤ X2e, X1e + 1 ≥ X2s   (2)

According to the size of the moving target area in the experiment, an area threshold is set, in this paper 3000. If the number of pixels in a connected area is less than the threshold, the connected area is considered a false target formed by noise and is filtered out; otherwise it is considered a moving target [17]. For samples in which the abnormal target differs greatly from the background, the Poisson fusion algorithm is used for image fusion during synthesis. Poisson fusion is a well-known image editing algorithm that achieves a more natural fusion effect and is widely used in image fusion and image restoration. The image synthesis process is shown in Fig. 2.
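A minimal OpenCV sketch of the detection pipeline described in this subsection (background difference as in Eq. (1), Otsu's maximum inter-class variance threshold, morphological opening, and connected-component filtering with the 3000-pixel area threshold); the kernel size and the function name are illustrative choices, not taken from the paper.

```python
import cv2
import numpy as np

def detect_moving_targets(frame_gray: np.ndarray, background_gray: np.ndarray,
                          min_area: int = 3000) -> list:
    """Grayscale uint8 inputs; returns bounding boxes and areas of detected moving targets."""
    diff = cv2.absdiff(frame_gray, background_gray)                 # Eq. (1): d = |f - b|
    _, binary = cv2.threshold(diff, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # maximum inter-class variance
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    opened = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)       # opening removes small noise points
    n, labels, stats, _ = cv2.connectedComponentsWithStats(opened, connectivity=8)
    targets = []
    for i in range(1, n):                                           # label 0 is the background
        x, y, w, h, area = stats[i]
        if area >= min_area:                                        # discard false targets formed by noise
            targets.append((x, y, w, h, area))
    return targets
```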

Fig. 2. Image synthesis process (steps: enter target and label, mark target boundary and type, use Poisson fusion for specific samples, composite sample, location label of target for training the target detection network)

The Poisson fusion algorithm regards the image fusion problem as solving the minimization problem of Eq. (3):

min_f ∬_Ω |∇f − v|²  with  f|_∂Ω = f*|_∂Ω   (3)


where f represents the fused image and v is the guide field, for which the gradient field of the original foreground image is taken.
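OpenCV's seamlessClone provides a Poisson-based blending of this kind; the following minimal sketch is only illustrative (the file names and the choice of blending the whole foreground patch are assumptions, not details from the paper).

```python
import cv2
import numpy as np

# Minimal Poisson-blending sketch with OpenCV's seamlessClone (file names are illustrative).
foreground = cv2.imread("abnormal_target.png")       # sample patch of the abnormal target
background = cv2.imread("field_scene.png")           # power grid field background image
mask = 255 * np.ones(foreground.shape[:2], dtype=np.uint8)   # blend the whole foreground patch
center = (background.shape[1] // 2, background.shape[0] // 2)
composite = cv2.seamlessClone(foreground, background, mask, center, cv2.NORMAL_CLONE)
cv2.imwrite("composite_sample.png", composite)
```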

3 Investigation and Research on the Monitoring Algorithm of Digital Power Grid Field Operation Target Based on Mixed Reality Technology

3.1 Digital Power Grid Field Operation Target Monitoring System

The structure of the three-dimensional simulated substation is divided into different levels, and the maintenance and operation requirements within the jurisdiction of the different areas are incorporated into the design steps; a three-dimensional real-time wireless positioning system for the major production areas of power supply, transmission, distribution and inspection is constructed according to the actual application on site. Specifically, the system is divided into a presentation layer, an application layer, a network layer and a perception layer. In the presentation layer, information such as the three-dimensional stations and the three-dimensional distribution of personnel can be displayed on the large-screen system, allowing dispatching and operation and maintenance personnel to monitor security in real time in the smart substation. The application layer transmits and stores real-time data such as personnel identity and location information and vehicle management and control information for the lower-level applications. The function of the network layer is relatively simple: it uses the wireless local area network to transmit the relevant information. The perception layer collects video images of personnel movement at the production site and the inspection information of the equipment.

3.2 3D Space Target Monitoring

The line connecting a 3D world point and the camera origin has one and only one intersection with the image plane, but the camera origin and a point on the image plane cannot by themselves locate the specific 3D world position; only a ray can be obtained. With binocular vision, the three-dimensional world coordinates can be recovered from the intersection of two such rays. The field of view of a binocular camera pair is limited, but multiple cameras can be arranged so that any position in the space is captured by at least two cameras; then, according to the calibration results of the camera pairs, the moving target detection algorithm and the binocular vision algorithm can obtain the spatial coordinates in the three-dimensional world.
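A minimal NumPy sketch of the binocular idea described in Sect. 3.2: given the projection matrices of two calibrated cameras, the 3D world coordinates are recovered from a pair of image observations by linear (DLT) triangulation; the matrices and image coordinates below are illustrative, not values from the system.

```python
import numpy as np

def triangulate(P1: np.ndarray, P2: np.ndarray, x1: np.ndarray, x2: np.ndarray) -> np.ndarray:
    """Linear (DLT) triangulation of one point seen by two calibrated cameras (3x4 matrices P1, P2)."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)       # null vector of A gives the homogeneous 3D point
    X = vt[-1]
    return X[:3] / X[3]               # inhomogeneous 3D world coordinates

P1 = np.hstack([np.eye(3), np.zeros((3, 1))])                  # camera 1 at the origin
P2 = np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])  # camera 2 shifted along x
print(triangulate(P1, P2, np.array([0.7, 0.1]), np.array([0.2, 0.1])))  # -> approx. (0.7, 0.1, 1.0)
```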


4 Investigation and Analysis of Monitoring Algorithm for Digital Power Grid Field Operation Target Based on Mixed Reality Technology

4.1 Analysis of Augmented Reality Interaction Sub-module

The interaction sub-module first performs gesture recognition, voice recognition and control recognition on the received data according to the data type, and finally obtains the corresponding gesture command ID, voice command ID and control command ID. The obtained command IDs are then transmitted, according to the command type, to the AI module, the graphics computing sub-module, the user management sub-module and so on through the message transmission sub-module. Within the interaction sub-module, speech recognition and gesture recognition are the key technologies, and their natural interaction characteristics are an important means of realizing wireless control.

(1) Augmented reality voice interaction
The system needs to support interactive control by the voice commands of multiple users, so it must achieve universal, speaker-independent voice interaction. The speech recognition used here is based on Microsoft Speech SDK 5.1, a software development kit provided for speech applications and speech engines on the Windows platform. It includes the Win32-compatible Speech Application Programming Interface (SAPI), Microsoft's continuous speech recognition engine (CSR) and Microsoft's speech synthesis engine (TTS). The structure is shown in Fig. 3.

(2) Augmented reality gesture interaction
The system uses vision-based gesture recognition technology. Current gesture recognition technologies are mainly divided into sensor-based systems (data gloves and motion sensors) and vision-based gesture recognition systems. The advantage of the former is that its speed and matching accuracy are much higher than those of the latter, but wearing data gloves or motion sensors still brings some inconvenience to users. Vision-based gesture recognition provides users with a more natural and direct interaction method.

Fig. 3. Microsoft Speech SDK 5.1 structure diagram (the speech recognition and speech synthesis applications call the SAPI runtime through its API, and the runtime drives the speech recognition and speech synthesis engines through the DDI)

4.2 System Application Analysis

We divide the images into three categories; each category extracts 100 consecutive frames from the video of the digital power grid field operation target monitoring system, a total of 400 images for the experiment. During the experiment, the original images and the identified images are saved for analysis and re-experimentation. In the first category, each target squats and stands freely; in the second, the targets freely intersperse and move; in the third, each target moves freely. We calculate and analyze the recognition rate of the algorithm and its time consumption. The experimental results are shown in Table 1.

Table 1. Experimental results

Experimental group                  Average recognition rate (%)   Average time per frame (ms)
Free squat group                    88.65                          10.29
Freely interspersed moving group    90.34                          11.88
Free movement group                 91.39                          11.27

The experimental results show that the recognition rate of the target tracking algorithm in the three-dimensional space is 88.65% and 90.34% in the vertical layout and the horizontal layout, respectively, and the algorithm efficiency is shown in Fig. 4. The average time spent per frame is between 10ms and 13ms, which can achieve real-time effects.

Fig. 4. Algorithmic efficiency (average recognition rate (%) and average time per frame (ms) for the free squat, freely interspersed moving and free movement groups: 88.65%/10.29 ms, 90.34%/11.88 ms and 91.39%/11.27 ms respectively)

5 Conclusions

Video image monitoring first emerged in substations and has developed rapidly within the jurisdictions of regional power grids and centralized control stations. The digital power grid field operation target monitoring system presented here comprehensively applies graphics and image processing and virtual-real interconnection technology, and develops applications according to the needs of electric power enterprises, which can improve a municipal company's level of lean and intelligent operation and inspection. Workers can share video of the inspection site according to their needs, and operators can check the operating status of equipment in multiple ways; the real-time capability of the three-dimensional space target monitoring algorithm can also be used to warn of dangerous behaviors such as misoperation in advance. The saved video data is well documented, avoiding the unreliability of paper records.

References

1. Prinsloo, G., Dobson, R., Mammoli, A.: Smart village load planning simulations in support of digital energy management for off-grid rural community microgrids. Curr. Altern. Energy 2(1), 2–18 (2018)
2. Moretti, G., Orlandini, S.: Hydrography-driven coarsening of grid digital elevation models. Water Resour. Res. 54(5), 3654–3672 (2018)
3. Kerur, P., Chakrasali, R.L., Kleit, et al.: Integration of block chain and digital grid routers for load balancing and bidirectional power flow in microgrid. Smart Grid 4(1), 11–17 (2019)


4. Al-Rubaye, S., Rodriguez, J., Al-Dulaimi, A., et al.: Enabling digital grid for industrial revolution: self-healing cyber resilient platform. IEEE Network 33(5), 219–225 (2019) 5. Rojas, J., Reyes-Archundia, E., Gnecchi, J., et al.: Towards cybersecurity of the smart grid using digital twins. IEEE Internet Comput. 2021(2), 1–6 (2021) 6. Saleh, S.A., Richard, C., Onge, X.S., et al.: Comparing the performance of protection coordination and digital modular protection for grid-connected battery storage systems. IEEE Trans. Ind. Appl. 55(3), 2440–2454 (2019) 7. Magdy, G., Shabib, G., Elbaset, A.A., et al.: A novel coordination scheme of virtual inertia control and digital protection for microgrid dynamic security considering high renewable energy penetration. IET Renew. Power Gener. 13(3), 462–474 (2019) 8. Meske, C., Osmundsen, K., Junglas, I.A.: Designing and implementing digital twins in the energy grid sector. MIS Q. Exec. 20(3), 183–198 (2021) 9. Reddy, M.S., Rao, A.S., Prakash, C.S.: MLR institute of technology campus energy monitoring and controlling system with interconnection of grid and solar power. Int. J. Eng. Technol. 9(2), 424–428 (2020) 10. Khodamoradi, A., Liu, G., Mattavelli, P., et al.: Analysis of an on-line stability monitoring approach for DC Microgrid power converters. IEEE Trans. Power Electron. 34(5), 4794-4806 (2018) 11. Ferreira, E.F., Barros, J.D.: Faults monitoring system in the electric power grid with scalability to detect natural/environmental catastrophes. Int. J. Therm. Environ. Eng. 16(1–2), 37–45 (2018) 12. Kulkarni, N., Lalitha, S., Deokar, S.A.: Real time control and monitoring of grid power systems using cloud computing. Int. J. Electr. Comput. Eng. 9(2), 941–949 (2019) 13. Madhan, A., Shunmughalatha, A.: Power flow monitoring in smart grid using smart energy meter. Int. J. Pure Appl. Math. 119(12), 1829–1837 (2018) 14. Jack, K.E., Dike, D.O., Obichere, J., et al.: The development of an android applications model for the smart micro-grid power pool system monitoring and control. Int. J. Electr. Power Eng. 13(2), 12–18 (2020) 15. Menke, J.H., Hegemann, J., Gehler, S., et al.: heuristic monitoring method for sparsely measured distribution grids. Int. J. Electr. Power Energy Syst. 95, 146–155 (2018) 16. Perryman, T., Sandefur, C., Morris, C.T.: Developing interpersonal and counseling skills through mixed-reality simulation in communication sciences and disorders. Perspect. ASHA Special Interest Groups 6(2), 1–13 (2021) 17. Aiello, S., Aguayo, C., Wilkinson, N., et al.: Developing culturally responsive practice using mixed reality (XR) simulation in paramedicine education. Pacific J. Technol. Enhanced Learn. 3(1), 15–16 (2021)

Design and Analysis of Geotechnical Engineering Survey Integrated System Based on GIS

Yan Wang(B)

Yunnan Technology and Business University, Kunming, Yunnan, China
[email protected]

Abstract. Based on the development and application of geographic information technology and on geological exploration data, a geotechnical engineering investigation information system is developed using Microsoft SQL Server and Microsoft Visual development tools. The system can realize functions such as management, analysis and calculation of geotechnical survey data, three-dimensional geological modeling and visualization. With the rapid development and further advancement of computer technology, geographic information system technology is being developed worldwide and applied in many fields. The purpose of this paper is to study an integrated design system for GIS-based geotechnical investigation. Through analysis of survey data, and according to the requirements of data management and the growing amount of data, a general data system for geotechnical investigation is established and applied to case studies.

Keywords: GIS Technology · Geotechnical Engineering · Engineering Investigation · Survey Integration

1 Introduction

With the deepening of reform and opening up, the rapid development of urban construction and the continuous change of the urban landscape, the large number of construction projects has brought great depth and breadth to geotechnical investigation work [1, 2]. These survey results are a valuable source of information: they not only play an important role in urban planning and construction, but also have high research value, and turning such "dead data" into "real-time data" has become an important task [3, 4]. In order to give full play to the value of existing survey data, some cities have developed corresponding information systems [5, 6].

In recent years, the continuous development of computer technology has gradually promoted the application of geographic information systems in geotechnical engineering. Geotechnical engineering is a comprehensive applied discipline that takes rock and soil masses as the construction environment, building materials and building components, studies their rational utilization, improvement and transformation, and penetrates into many fields of national economic construction. Various types of geotechnical engineering are built on or in the rock-soil medium. Compared with other projects, geotechnical engineering is concealed, and the complexity and variability of rock and soil and the physical and mechanical characteristics of their formation process bring inconvenience to designers and constructors. The development of GIS technology and the interweaving of emerging disciplines have laid a solid foundation for the informatization and visualization of geotechnical engineering. In applying GIS to geotechnical engineering information, the difficulties of modification, query, comprehensive analysis and resource sharing caused by storing traditional engineering data as paper drawings, charts and text are overcome. With the continuous expansion of the fields and the underground space involved in modern geotechnical engineering, GIS technology plays an irreplaceable role. Judging from the technologies and processes used, geotechnical investigation is at an important stage of developing and responding to new challenges and opportunities [7, 8].

Tsugawa et al. introduce the basic concepts of rheology to geotechnical engineers, with special emphasis on viscoelasticity under simple shear stress, which explains with reasonable accuracy well-known time-dependent phenomena in soils; geotechnical testing procedures are discussed on the basis of rheological concepts, terminology is clarified, application examples of rheology in geotechnical engineering are given, and soil rheological parameters determined by traditional geotechnical and concrete testing are reviewed [9]. Brito discusses the definition and development of judgment in geotechnical engineering, analyzes the basis of judgment in detail, identifies the heuristics and biases that lead to judgment failure, emphasizes the importance of expert judgment and codification, and describes methods for improving judgment [10]. The establishment of an integrated geotechnical engineering survey system is of great significance for the study of urban communities and community planning, land evaluation, disaster mitigation and prevention, and the feasibility of construction projects [12].

This paper first discusses the theory and technology of geographic information systems (GIS). Then, by analyzing the management requirements and data usage of engineering survey results, the overall design of the database system is carried out, and a general geotechnical engineering survey database is designed and created on this basis. Finally, based on the established database, secondary development technology is used to implement the system.

2 Research on Integrated System Design of Geotechnical Engineering Exploration Based on GIS

2.1 Geographic Information System

A geographic information system is a computer information system mainly used to manage spatially distributed data. It uses intelligent graphics to input, manage, present and analyze large amounts of data tied to geographic areas, and plays a very important role. It is an interdisciplinary science based on geography and information science, produced under modern society's strong demand for information on population, resources and the environment [13, 14]. A geographic information system is a computer technology system that is based on geospatial data, uses geographic model analysis methods, provides a wide range of spatial information in a timely manner, and serves geographic research and geographic planning. It has the following three characteristics: the ability to collect, manage, analyze and output large amounts of geographic information; the ability, for geographic research and geographic decision-making, to use geographic models as a medium for comprehensive analysis and dynamic prediction; and support from computer software for managing geographic data, performing spatial analysis, manipulating field data and producing useful information [15, 16]. GIS can be divided into the following five parts, as shown in Fig. 1:

Fig. 1. Process characteristics of GIS (five components: personnel, data, hardware, software, process)

Viewed from the outside, GIS is a computer application and software system; what really matters is the geospatial information model built by the computer system together with the spatial data on which the whole system relies. A Geographic Information System (GIS) is an information system with the computer at its core, which carries out geographic information management, geospatial analysis, and process simulation and prediction of the geographic environment. The geographic environment and the geological environment are basically similar in their three-dimensional spatial character, and in terms of the recognizability of geological disaster development and spatial distribution characteristics, GIS points urban engineering geological environment analysts towards new technical means for the management, processing and analysis of geological environment information, especially geological hazard information, and indicates an excellent development direction.

Design and Analysis of Geotechnical Engineering Survey

383

and to edit, improve, analyze and calculate the relevant engineering information, so as to automatically archive and update engineering survey data and improve work efficiency and resource utilization. The main functions of the system include: project management, data input and editing, data query, digital drawing, 3D operation, digital reporting, analysis and calculation, user management and assistance. To realize the major premise of the system design, the construction of the whole system follows established software engineering design principles, because it combines survey technology, GIS technology, network and communication technology, and database technology. The system should be built on the basis of being "useful, effective, advanced and reliable", and on a foundation of advanced and rigorous information technology it should establish a "standardized, safe and open" enterprise information management and control system, thereby improving the production and performance levels of scientific research centers [17, 18]. Generally speaking, the design of such projects should follow these principles: advancement is the premise of future practice, and advanced technologies, methods and materials should be used as much as possible to improve the technical level of the system. Therefore, the overall design refers to the successful experience of other survey information systems so that the design is more intelligent and advanced. In terms of the software development concept, design, management and development are carried out according to software engineering and object-oriented methods, which ensures a high starting point for system development; only an advanced program will be accepted by users and retain its real appeal.

2.3 Geological Model of Geotechnical Parameters

Geological boreholes carry many parameters, and to classify and group them reasonably a certain geological model (geological division) must be established; the available approaches include the direct scoring method and the fuzzy comprehensive evaluation method. In practical applications, one often needs to evaluate an object that is affected by several factors: for example, the design quality of a project, including appearance, planning, cost and ingenuity, or the education quality of a school, involving family background, class discipline, grades and so on. To solve such problems, a comprehensive evaluation method is usually adopted. In practice there is always considerable uncertainty in the issues concerned, so it is natural to combine fuzzy techniques with the classical comprehensive evaluation process. Unlike classic comprehensive scales, complex judgements can be expressed with very simple numerical values and then aggregated by total score (average) or weighted average. For the evaluated object, set the factor set F = {f1, ..., fn}, create the comment set C = {c1, ..., cm}, and build a criterion (membership) matrix using expert scores or other methods:

R = (r_{ij})_{n \times m}    (1)

The comprehensive evaluation and the subsequent weighting steps are then carried out with formula (1).
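As a hedged illustration of the weighted-average variant described above, the following Python sketch composes a factor weight vector with the criterion matrix R; the weights, membership values and grade scores are invented for demonstration and are not taken from the paper.

```python
import numpy as np

# Hypothetical factor weights A (one weight per evaluation factor f1..f4).
A = np.array([0.3, 0.3, 0.2, 0.2])

# Hypothetical membership matrix R = (r_ij)_{n x m}: row i gives how strongly
# factor f_i supports each of the m comment grades in C.
R = np.array([
    [0.5, 0.3, 0.2],
    [0.4, 0.4, 0.2],
    [0.2, 0.5, 0.3],
    [0.1, 0.4, 0.5],
])

# Weighted-average composition B = A . R, normalized so the grades sum to 1.
B = A @ R
B = B / B.sum()

# Final score by weighted average of grade values (e.g. 90 / 70 / 50 for the grades in C).
grade_values = np.array([90.0, 70.0, 50.0])
score = float(B @ grade_values)
print("membership over grades:", B, "overall score:", score)
```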


3 Investigation and Research on Integrated System Design of Geotechnical Engineering Exploration Based on GIS

3.1 Software Components of the System

The system uses Microsoft SQL Server to develop the database, and reasonable, orderly data management creates the conditions for the effective query and use of the engineering information in the future. When designing the database of geotechnical engineering investigation data, the characteristics of the data and the needs of the users should be fully considered: the system data has spatial characteristics, attribute characteristics, temporal characteristics and so on. The database includes a standard soil layer information table, routine test table, engineering information table, measuring point information table, measuring point attribute table, field test table and load test table. These data are sorted and managed by computer according to a defined logical relationship and method, which lays a good foundation for the later system implementation. The development software is determined by the professional GIS software: the system performs secondary development with the development tools that the GIS software provides. Operating system: Microsoft Windows 2000; professional software: MapInfo Professional 6.0 and Microsoft Excel 2000; development software: MapBasic 6.0 and Visual Basic 6.0. MapInfo Professional is a powerful, comprehensive and intuitive tabular information management system that provides a new approach to solving problems in client/server computing environments. It offers deep visual analysis capabilities that help users create groupings across different databases, place them in the same area, and quickly identify easily overlooked relationships between data sets.

3.2 Database Design

The customer/user data of the terrain enterprise is generated, the database is designed by collecting daily practice data, and the data is organized into a system that conforms to the requirements of the data management system model. The data system codes are established as follows: items with a national standard code use the national standard code; items with a ministerial code use the ministerial code; if there is no unified code, the user can define one. The database adopts a three-tier structure.
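As a hedged sketch only: the paper builds its database in Microsoft SQL Server, but the flavour of the table design can be illustrated with SQLite in Python. The column names below are invented for demonstration and are not the paper's schema.

```python
import sqlite3

conn = sqlite3.connect("survey_demo.db")
cur = conn.cursor()

# Engineering information table (illustrative columns only).
cur.execute("""
CREATE TABLE IF NOT EXISTS engineering_info (
    project_id   TEXT PRIMARY KEY,
    project_name TEXT NOT NULL,
    survey_date  TEXT,
    x_coord      REAL,
    y_coord      REAL
)""")

# Measuring point (borehole) information table, linked to its project.
cur.execute("""
CREATE TABLE IF NOT EXISTS measuring_point_info (
    point_id   TEXT PRIMARY KEY,
    project_id TEXT REFERENCES engineering_info(project_id),
    depth_m    REAL,
    stratum    TEXT
)""")

cur.execute("INSERT OR REPLACE INTO engineering_info VALUES (?,?,?,?,?)",
            ("P001", "Demo site", "2022-05-01", 121.5, 38.9))
conn.commit()
conn.close()
```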

4 Analysis and Research on Integrated System Design of Geotechnical Engineering Survey Based on GIS

4.1 Functional Module Design Analysis

According to the design principles of the system, the functional modules of the integrated geotechnical engineering survey system developed in this paper mainly include the database operation module, data input and output module, project management module, query and retrieval module, map browsing module, space conversion module and system configuration module. The system function modules are shown in Fig. 2.


Fig. 2. System function modules

Database operation module: based on Microsoft Access 2003 file-type databases, it realizes operations such as adding, deleting, querying and updating MDB database records. Data input and output module: realizes data storage, query result output, map data output, etc. Project management module: realizes operations such as creating, editing and deleting geotechnical engineering projects, and manages the associated information tables of site, drilling and test data for each project. Query and retrieval module: realizes multi-attribute, multi-condition related query and retrieval. Map browsing module: uses GIS components to display and browse the basic area bitmap, the compressive modulus contour map and the compressive modulus pseudo-color map. Spatial conversion module: uses the coordinate attributes of geotechnical projects, sites, boreholes and geotechnical tests to convert them into spatial entities, and realizes the superposition and positioning of engineering data on the thematic map. System configuration module: performs basic configuration of the system, including interface customization, map configuration, database settings, etc.


The spatial structure of geological bodies is mainly characterised as follows: strata are distributed continuously and in layers in space with bedding characteristics, and the main attributes of a given stratum are its geological age, geological origin, lithological characteristics and so on; a depositional sequence may cover one field area or several field areas. The spatial distribution of rock masses is discontinuous, inhomogeneous and uncertain, and complex scouring relationships exist among rock masses. Determining stratigraphic boundaries and dividing geological bodies are therefore important issues in geological body modelling. Geological stratification refers to the scientific and reasonable interpretation and division of the various drilling data. The determination of layers includes rationally combining strata, determining the contact surfaces between layers, and determining the thickness of strata. Strata are combined mainly because the accuracy required by the 3D model of the real geological body differs when analysing and interpreting the drilling data, so some rock formations with similar properties must be merged. The division of geological bodies lays a theoretical foundation for establishing a three-dimensional geological body model. Since the attribute information within each geological layer is consistent, the overall geological body is distributed according to the stratigraphic sequence, and the model can be divided into two stages, namely the representation of the stratum surfaces and of the geological body structure. Since the research object of this paper is a simple geological body, the principle of surface simulation can be applied without considering special geological structures such as faults. An appropriate voxel model is then chosen and the volume between the stratum surfaces is filled to obtain a geological entity. After a comprehensive analysis of the geological entity structure, the TIN model was selected to construct the 3D stratigraphic boundaries, and triangular prism units were then used to construct the internal structure of the rock mass, yielding the surface-plus-volume model of the 3D geological body.

4.2 Engineering Data Analysis

This module is mainly responsible for project management and realizes the creation, editing and deletion of projects, as well as the import and export of data information. The import format is consistent with the original tables provided, a data number is required, and imports are cumulative on top of the existing data. Based on the rail transit project of Metro Line 1 in M City together with some road, bridge and housing construction projects, 263 survey reports are included in this paper. The various types of engineering data are shown in Fig. 3.


Fig. 3. Engineering category statistics

5 Conclusions

GIS has become a core informatization technology for field work: as a computer system that collects, processes, analyzes, interprets and transmits spatial data between different users and different systems, it is widely used in many areas of the national economy and social life. This paper is based on the research and development of a geotechnical survey results database system and combines GIS technology with database technology to manage survey results data in geotechnical engineering. It builds a general survey results management platform, establishes information management of survey results and plans, increases the data volume of survey results, and discusses the establishment of a database to support the data application of subsequent geological research.

References 1. Joshi, A., Kiran, R.: Gauging the effectiveness of music and yoga for reducing stress among engineering students: an investigation based on Galvanic Skin Response. Work 65(6), 1–8 (2020) 2. Getahun, M.A., et al.: Experimental investigation on engineering properties of concrete incorporating reclaimed asphalt pavement and rice husk ash. Buildings 8(9), 115 (2018) 3. Akingboye, A.S., Osazuwa, I.B., Mohammed, M.Z.: Electrical resistivity tomography for geo-engineering investigation of subsurface defects: a case study of Etioro-Akoko highway, Ondo State, Southwestern Nigeria. Studia Quaternaria 37(2), 101–107 (2020)


4. Abija, F.A., Oboho, E.O.: Subsurface engineering geological investigation and prediction of axial pile capacities for the design and construction of deep foundations in the Calabar River Channel Nigeria. J. Min. Geol. 57(1), 141–154 (2021) 5. Dunford, C.N., Pickard, A.C.: What’s the problem? Issue investigation and engineering change on legacy products. INCOSE Int. Sympos. 30(1), 499–514 (2020) 6. Pekozer, G.G., Akar, N.A., Cumbul, A., et al.: Investigation of vasculogenesis inducing biphasic scaffolds for bone tissue engineering. ACS Biomater. Sci. Eng. 7(4), 1526–1538 (2021) 7. Krahl, P.A., Carrazedo, R., Debs, M.E.: Mechanical damage evolution in UHPFRC: experimental and numerical investigation. Eng. Struct. 170(Sep.1), 63–77 (2018) 8. Singh, S., Singh, S.K., Mali, H.S., et al.: Numerical investigation of heat transfer in structured rough microchannels subjected to pulsed flow. Appl. Therm. Eng. 197(7–8), 117361 (2021) 9. Tsugawa, J.K., Romano, R., Pileggi, R.G., et al.: Review: rheology concepts applied to geotechnical engineering. Appl. Rheol. 29(1), 202–221 (2019) 10. Brito, J.A.M.D.: Judgement in geotechnical engineering practice. Soils Rocks 44(2), 1–26 (2021) 11. Abelskamp, G., Santamarina, J.C.: Academia during the Covid-19 pandemic: a study within the geotechnical engineering research community. Int. J. Innov. Educ. Res. 9(1), 574–587 (2021) 12. Amorosi, A.: The contribution of constitutive modelling to sustainable geotechnical engineering: examples and open issues. Rivista Italiana di Geotecnica 2020(2), 5–25 (2020) 13. Perrone, S.T., Traynor, C.: Mapping the way of St. James: GIS technology, spatial history, and the middle ages. Church Hist. Relig. Cult. 101(1), 3–32 (2021) 14. Symochko, L., Hoxha, E., Hamuda, H.B.: Mapping hot spots of soil microbiome using GIS technology. Agric. Forestry 67(1), 191–203 (2021) 15. Panthi, M.F., Hodar, A.: GIS technology and its application in fisheries sector. Agric. Environ. 2(4), 22–25 (2021) 16. Hamzah, M.L., Amir, A.A., Maulud, K., et al.: Assessment of the mangrove forest changes along the Pahang coast using remote sensing and GIS technology. J. Sustain. Sci. Manag. 15(5), 43–58 (2020) 17. Akhmetova, G., Tokarev, P.: GIS technology application for identification of peat bog soils for updating the digital soil map of Karelia. InterCarto InterGIS 26(2), 66–78 (2020) 18. Saadi, S., Mondal, I., Sarkar, S., et al.: Medicinal plants diversity modelling using remote sensing & GIS technology of Chilkigarh, West Bengal India. Trop. Plant Res. 7(2), 440–451 (2020)

Automatic Classification and Storage System of Container Yard on Account of Mean Clustering Algorithm Di Cui(B) Safety and Environmental Protection Division, China Waterborne Transportation Research Institute, Beijing, China [email protected]

Abstract. Globalization and economic integration have led to a significant increase in the number of containers being used for transportation. With this growth comes the challenge of efficiently and accurately classifying containers in order to streamline logistics and improve productivity. This paper proposes a solution to this problem by using the C++ programming language to model containers and develop an intelligent automatic classification module. The proposed system is based on the mean clustering algorithm and is designed to optimize the automatic classification and storage of containers within container yards. The internal mechanisms and principles of the system are thoroughly described in this paper, and the results of data tests demonstrate that the system is highly efficient in automatically classifying containers. In addition to improving efficiency, the proposed system has the potential to significantly reduce errors in container classification, thereby enhancing overall logistics performance. The approach outlined in this paper could have important implications for the management of container yards and the broader transportation industry. Keywords: Mean Clustering Algorithm · Container Yard · Automatic Classification · Storage System

1 Introduction

This paper uses the C++ programming language to construct an automatic classification model for containers. The software module adopts an intelligent logic code module to collect comprehensive container information and then calculates various useful parameters from it, such as the loading rate and the container distribution mode. The space allocation model is the first stage: this program model calculates the quantity allocated to each box area, while the container bay number, row number, tier number and other placement parameters are arranged automatically by the software, thereby solving the problem of automatic container allocation. The automatic classification and storage system of the container yard on account of the mean clustering algorithm effectively improves the efficiency of automatic classification.


Many scholars at home and abroad have studied the mean clustering algorithm. Among foreign studies, Niranjana et al. proposed numerical AGM and mean-shift clustering algorithms for dark web traffic analysis and classification; analyzing the patterns detected from the clusters can provide traces of various attacks, such as Mirai bot, SQL and brute-force attacks [1]. Qubaa and Al-Hamdani proposed using DJI Phantom 4 Pro drones, together with the specialized Pix4D program, to process drone images and build mosaics from them; multiple flights were conducted to survey several sites in the region and compare them with satellite images from different years, and the UAV data classification adopts the k-means clustering algorithm [2]. Bellary et al. studied segmentation accuracy using clustering algorithms; the huge demand for magnetic resonance imaging (MRI) in the medical field helps doctors analyze and detect related diseases. MRI is not only an effective technique for pathology assessment but also a valuable method for tracking disease progression, and its ability to provide rapid three-dimensional visualization makes it widely used in the diagnosis and treatment of pathologies of different organs [3]. Containers need to be inspected at every customs post, dock, import and export point, etc. The container itself carries a lot of attribute information, such as the container number (generally 11 digits), box code, identification code, registration number and so on. When containers enter or leave the dock or are loaded and unloaded, the system first classifies them automatically, so as to improve the efficiency of automatic container classification. The loading and unloading speed is maximized when containers are automatically classified and loaded in large volumes at the same time. The automatic container yard classification and storage system on account of the mean clustering algorithm is beneficial to the progress of automatic container yard classification technology.

2 Design and Exploration of Automatic Classification and Storage System of Container Yard on Account of Mean Clustering Algorithm

2.1 Mean Clustering Algorithm

The k-means clustering algorithm is an iteratively solved cluster analysis algorithm. The procedure is to pre-divide the data into K groups, select K objects as the initial cluster centers, and then calculate the distance between each object and each seed cluster center [4, 5], assigning each object to its nearest cluster center (see Fig. 1). The basic steps of the k-means clustering algorithm are as follows. Input parameters: data set X with n data objects and the number of clusters K. Output parameter: K clusters for which the clustering criterion function converges [6, 7]. Step 1: from the n elements of data set X, select K elements as cluster centers to initialize the centers. Step 2: calculate the distance between each data point and the cluster centers, allocate each point to the nearest cluster center according to the nearest-neighbour principle, and compute the squared error criterion function E.


Step 3: compute the mean of the data assigned to each cluster center, and use the new centers to recompute the squared error criterion function E. Step 4: compare the two successive values of E; if their difference is less than or equal to a threshold, the function has converged and the algorithm proceeds to the next step, otherwise it returns to Step 2. In general, to prevent Step 4 from entering an infinite loop, a fairly large maximum number of iterations is also set as a stopping threshold. Step 5: output the clustering results that meet the termination conditions [8, 9]. A short code sketch of these steps is given below, after Fig. 1.

Fig. 1. The basic steps of k-means clustering algorithm
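The five steps above can be condensed into a short sketch. This is an illustrative Python implementation rather than the paper's C++ code; the synthetic data, K and the thresholds are assumptions.

```python
import numpy as np

def kmeans(X, K, max_iter=100, tol=1e-6, seed=0):
    """Plain k-means following Steps 1-5: initialise centres, assign by
    nearest neighbour, recompute means, stop when the criterion E converges."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), K, replace=False)]          # Step 1
    prev_E = np.inf
    for _ in range(max_iter):                                  # iteration cap
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)                              # Step 2: nearest-neighbour assignment
        E = float((d[np.arange(len(X)), labels] ** 2).sum())   # criterion E
        centers = np.array([X[labels == k].mean(axis=0) if np.any(labels == k)
                            else centers[k] for k in range(K)])  # Step 3: new means
        if abs(prev_E - E) <= tol:                             # Step 4: convergence test
            break
        prev_E = E
    return labels, centers, E                                  # Step 5: output

X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 3.0])
labels, centers, E = kmeans(X, K=2)
print("criterion E:", E)
```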

2.2 Automatic Classification and Storage System of Container Yard on Account of Mean Clustering Algorithm

This paper uses the C++ language to establish the container algorithm model. The factors set in the model include a number of attributes, such as the ship, container type, port of destination and weight stratum, as shown in Fig. 2. Certain data samples are used to classify these attributes. Within the software, the relevant container principles can be set, for example that containers with the same attribute values are interchangeable, so that container stacking can be solved very efficiently [10, 11]. Through the setting of each attribute value, the automatic classification of containers is configured according to the actual stockpiling situation. In the software model it is assumed that the container types are consistent, and the containers are then divided into different strata according to their ship, destination port and weight. The software sets up a small module that represents the containers to be loaded on the same ship, marked as one group in the software [12, 13]. In actual container classification, containers belonging to different ships are generally not placed in the same stack; a cluster is also formed


Fig. 2. Attributes of an outbound container

according to a consistent destination port, and containers can further be divided into batches according to their weight. After these principles have been set up in the software, the simulation of automatic container classification is carried out accordingly. By common sense, when containers belong to the same batch, any re-handling (decanting) operations can be completely eliminated, because containers of the same batch do not need to be re-handled. Within a batch, the exact position of an individual container does not matter; the software only needs to know whether the batch of containers has been automatically placed in the specified location. These attributes and their specific parameter values can be set internally on the software interface, so as to realize the intelligent classification of containers stored on site. A brief sketch of this grouping rule is given below.
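As a rough illustration of the grouping rule just described (in Python rather than the paper's C++, with invented container records and an assumed 10-tonne weight step):

```python
from collections import defaultdict

# Illustrative outbound container records: (container_no, vessel, destination_port, weight_t)
containers = [
    ("U053", "Vessel-A", "Shanghai", 18.2),
    ("U076", "Vessel-A", "Shanghai", 24.9),
    ("U086", "Vessel-B", "Singapore", 12.4),
]

def weight_tier(weight_t, step=10.0):
    """Stratify weight into coarse tiers, e.g. 10-20 t, 20-30 t."""
    return int(weight_t // step)

# Containers sharing (vessel, port, weight tier) form one batch and may share a stack.
batches = defaultdict(list)
for no, vessel, port, w in containers:
    batches[(vessel, port, weight_tier(w))].append(no)

for key, nos in batches.items():
    print(key, "->", nos)
```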

Fig. 3. Classification of outbound containers


According to the container classification in the software described above, and as shown in Fig. 3, the loading sequence of several containers is pre-set on the software interface. Following the order of automatic classification by attribute values, the storage state is initialized, and its value is non-empty and known. At this stage the number of containers is displayed in the software. The interface can query all attribute values of the containers with one click, such as the ship, destination port and weight stratum. The number of ships at this stage, the ports where the ships dock and so on are pre-determined. The software model automatically and intelligently classifies the containers according to the input values of each attribute and then automatically loads the containers onto the ship.

3 Research on the Effect of Automatic Classification and Storage System of Container Yard on Account of Mean Clustering Algorithm

The samples X = {x_1, x_2, ..., x_n} are divided into K classes with cluster centers C = {C_1, C_2, ..., C_j, ..., C_K}, where d_{ij}(x_i, C_j) denotes the distance between individual x_i and cluster center C_j; the subscript i indexes the individuals in the sample and j indexes the centers. Summing, over all clusters, the distances between each individual and the cluster center it belongs to gives the objective function of the algorithm model:

J = \sum_{j=1}^{K} \sum_{x_i \in c_j} d_{ij}(x_i, C_j)    (1)

The objective function evaluates the clustering result: the smaller its value, the more compact and independent the clusters around their centers. The usual method is to adjust the clustering so as to reduce the value of the objective function [14, 15]; its minimum generally corresponds to the best clustering. The Euclidean distance is used to represent the similarity between individuals and cluster centers C_j. Let x_i^{(j)} denote an individual belonging to cluster j and n_j the number of individuals in cluster j; the objective function can then be written as:

J = \sum_{j=1}^{K} J_j = \sum_{j=1}^{K} \sum_{x_i \in c_j} \| x_i^{(j)} - c_j \|^2    (2)

At this time, the cluster center is:

c_j = \frac{1}{n_j} \sum_{i=1}^{n_j} x_i^{(j)}    (3)


3.1 Automatic Classification and Storage System of Container Yard on Account of Mean Clustering Algorithm

According to the above attribute values, the indicator variable used for the automatic classification of containers is defined as:

X_{wdsp} = 1 if the container of weight stratum w bound for destination d is stored in stack s, layer p; X_{wdsp} = 0 otherwise, \forall w \in W, d \in D, s \in S, p \in P    (4)

Feature extraction is an important part of automatic container classification, and image recognition is needed; the quality of the image recognition determines the accuracy of the classifier [16]. The extracted features should satisfy four requirements: (1) distinguishability, so that different characters can be distinguished accurately because their features differ greatly; (2) reliability, so that characters of the same kind yield similar features; (3) good independence, so that the extracted features are uncorrelated; (4) small number, since fewer extracted features greatly reduce the difficulty of identification and improve its efficiency. Both the character boundary feature and the centroid feature are structural feature sets of the character body. Boundary feature extraction is closely related to the character and covers its features comprehensively; the centroid feature alone cannot identify characters, but it can be combined with other features to distinguish them. A left boundary point is a transition from a background point to a non-background point when scanning from left to right; the left boundary point l is defined by:

l = (i, j), if h(i, j - 1) = 0 and h(i, j) = 1

where h(i, j) is the label of the pixel in row i and column j of the recognized image, 0 standing for a background point and 1 for a character pixel. Correspondingly, the right boundary point r is defined by:

r = (i, j), if h(i, j) = 1 and h(i, j + 1) = 0

The centroid of a character is:

(\bar{i}, \bar{j}) = \left( \frac{M_{10}}{M_{00}}, \frac{M_{01}}{M_{00}} \right)    (5)

where M_{pq} is the image moment of the order indicated by its subscripts. A sketch of these feature computations is given below.
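A small Python sketch of the boundary-point and centroid computations above, using an invented 5 x 5 binary image; it is illustrative only and not the paper's implementation.

```python
import numpy as np

# Binary image h: 1 = character pixel, 0 = background (illustrative 5x5 example).
h = np.array([
    [0, 0, 1, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 0, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 1, 0, 0],
])

# Left boundary points: background-to-character transitions scanned left to right.
left = [(i, j) for i in range(h.shape[0]) for j in range(1, h.shape[1])
        if h[i, j - 1] == 0 and h[i, j] == 1]

# Right boundary points: character-to-background transitions (mirror of the left case).
right = [(i, j) for i in range(h.shape[0]) for j in range(h.shape[1] - 1)
         if h[i, j] == 1 and h[i, j + 1] == 0]

# Centroid from raw image moments M00, M10, M01 (formula 5).
ii, jj = np.indices(h.shape)
M00, M10, M01 = h.sum(), (ii * h).sum(), (jj * h).sum()
centroid = (M10 / M00, M01 / M00)
print(left, right, centroid)
```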


4 Investigation and Analysis of Automatic Classification and Storage System of Container Yard on Account of Mean Clustering Algorithm

4.1 System Overall Architecture Design

4.1.1 System Goal and Requirement Analysis
When designing the automatic classification and storage system for container yards, it is necessary to clarify the system's goals and requirements in order to better guide the system's design and development. The system's goals mainly include achieving automated container classification and storage, and improving the efficiency and accuracy of container yard operations. The system's requirements mainly include high classification accuracy, real-time performance, stability, reliability, etc.

4.1.2 System Overall Structure Design
The system overall structure design refers to the design and planning of the overall framework and components of the automatic classification and storage system for container yards. On this basis, the system can be divided into the following parts. Container information collection module: mainly responsible for collecting relevant container information, such as size, weight and container type. Container classification module: automatically classifies the containers based on the mean clustering algorithm. Container storage module: stores the classified containers according to the prescribed method. System monitoring module: monitors the system's operation in real time and gives feedback to the system maintenance personnel.

4.1.3 System Module Division and Function Description
(1) Container information collection module. Collection of container information: responsible for collecting relevant container information, such as size, weight and container type. Preprocessing of container information: preprocesses the collected information to meet the requirements of the subsequent classification algorithm.
(2) Container classification module. Implementation of the mean clustering algorithm: automatically classifies the containers based on the mean clustering algorithm. Output of classification results: outputs the classification results for the subsequent container storage operations.
(3) Container storage module.


Planning of storage locations for containers: plans the storage locations based on the classification results. Storage operation for containers: stores the classified containers according to the prescribed method.
(4) System monitoring module. Monitoring of system operating status: monitors the system's operating status in real time, including hardware, software and other aspects. Handling of system exceptions: monitors and handles system exceptions to ensure system stability and reliability.

4.2 System Implementation and Analysis

In this paper, MyEclipse is used for the simulation experiments, the programming language is C++, and the effect of the mean clustering algorithm is simulated on the software interface. The simulation object is the automatic classification and storage of a container yard, using the improved algorithm to enhance the intelligent automatic classification of containers. The operating system is Windows 10, the processor is a Pentium dual-core, and the memory is 12 GB. In practice, in order to facilitate yard management and the various operations, the yard needs to be divided into box areas with different classification attributes; of course, the specific classification basis differs according to the needs of each container yard. In theory, containers with the same classification attributes should be stacked in the same box area, and containers with different classification attributes should not be stacked on each other as far as possible. However, for the heavily used scattered-cargo container yard considered in this paper, containers with different classification attributes have to be stacked together. Therefore, the stacking operation is carried out here with location reuse in movable box areas, that is, heterogeneous boxes can be stacked flexibly in the same box area or the same box pile as needed. The container feature sequence table designed in this paper is shown in Table 1.

Table 1. Characteristic sequence of some containers

Judgement sequence number | Characteristic number | Characteristic attribute table
1 | U053 | Container type, container owner, arrival agent, cargo name
2 | U076 | Container type, container owner, arrival agent
3 | U086 | Box type
... | ... | ...

When the k-means clustering algorithm is applied, final criterion values are obtained for different numbers of clusters, and the validity of the


best clustering can be judged by analyzing these final values. Clustering validity refers to evaluating the rationality of the clustering results and clarifying the standard by which clusters are divided; such indicators are usually used to determine which clustering results are good. Common indicators such as CH and Wint are available to determine whether the detected clusters are valid (a short sketch of the CH indicator is given below). The mean clustering algorithm is applied in the design experiment of the automatic container classification system, and a data set (DT) is used in the experiment to verify the effectiveness of the algorithm. To facilitate observation, a two-dimensional data set is used, as shown in Fig. 4: the DT has 100 records in 2 evenly distributed categories. Figure 5 shows the clustering results of the data set, where the step size is (0.05, 0.05) and a = 0; different clusters in the figure are marked with different identifiers.
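One convenient, hedged way to compute the CH (Calinski-Harabasz) indicator mentioned above is with scikit-learn, which the paper itself does not mention; the synthetic two-class data below merely stands in for the DT data set.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import calinski_harabasz_score

# Synthetic stand-in for the 100-record, 2-class DT data set described above.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.3, 0.05, (50, 2)), rng.normal(0.7, 0.05, (50, 2))])

# Score several cluster counts; the K with the highest CH value is preferred.
for k in range(2, 6):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    print(k, round(calinski_harabasz_score(X, labels), 1))
```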

Fig. 4. DT data set

As can be seen from the figure, two complete clusters were obtained from the DT, and complete clustering results were also obtained for clusters with Gaussian distributions. Figure 5 shows the container clustering results. From these results it can be seen that the shape and number of classes agree with the a priori knowledge of each data set, thus verifying the effectiveness of the improved algorithm. The accuracy of the algorithm in this paper is much higher than that of the original clustering analysis algorithm; at the same time it overcomes the original algorithm's tendency to fall into local optima and handles the practical application problems well. However, when analyzing the actual data it is found that


Fig. 5. DT clustering results

the accuracy of the mean clustering algorithm based on systematic clustering tends to decline when the data volume exceeds a certain size. Therefore, the analysis of the special station network is divided into two parts, and in order to obtain the most reasonable regional division, the running time is longer than that of the original clustering algorithm, as shown in Table 2.

Table 2. Comparison results

Theoretical method | Accuracy rate
The clustering algorithm used in this paper | 88%
Original clustering algorithm | 35%

5 Conclusions

This paper uses software to model the containers in advance, pre-setting container attributes such as vessel, weight stratum and batch. The software processes the container data through internal programs and intelligently allocates space for the containers


according to site requirements. The software interface displays the final container information according to the data produced by the automatic classification processing. The automatic classification and storage system of the container yard on account of the mean clustering algorithm significantly improves the efficiency of automatic classification in the container yard.

References 1. Niranjana, R., Kumar, V.A., Sheen, S.: Darknet traffic analysis and classification using numerical AGM and mean shift clustering algorithm. SN Comput. Sci. 1(1), 1–10 (2020) 2. Qubaa, A., Al-Hamdani, S.: Detecting abuses in archaeological areas using k-mean clustering analysis and UAVs/drones data. Sci. Rev. Eng. Environ. Sci. 30(1), 182–194 (2021) 3. Bellary, M.Z., Fameeza, F., Musthafa, D.: The MRI knee pain classification using CNN algorithm and segmentation using clustering algorithm. Turk. J. Comput. Math. Educ. 12(10), 306–315 (2021) 4. Zahoor, J., Zafar, K.: Classification of microarray gene expression data using an infiltration tactics optimization (ITO) algorithm. Genes 11(7), 819 (2020) 5. Sümeyya, L.K.N., Aytar, O., Gentrk, T.H., et al.: Dermoskopik Grüntülerde Lezyon Blütleme lemlerinde K-ortalama Kümeleme Algoritmasnn Kullanm. Gazi Üniversitesi Fen Bilimleri Dergisi Part C Tasarım ve Teknoloji 8(1), 182–191 (2020) 6. Alanazi, R.S., Saad, A.S.: Extraction of Iron oxide nanoparticles from 3 dimensional MRI images using k -mean algorithm. J. Nanoelectron. Optoelectron. 15(1), 1–7 (2020) 7. Ariyanto, R., Tjahjana, R.H., Udjiani, T.: Forecasting retail sales on account of cheng fuzzy time series and particle swarm optimization clustering algorithm. J. Phys. Conf. Ser. 1918(4), 042032–042032 (2021) 8. Shaheen, M., Rehman, S.U., Ghaffar, F.: Correlation and congruence modulo based clustering technique and its application in energy classification. Sustain. Comput. Inform. Syst. 30(2), 100561 (2021) 9. Hartomo, K.D., Nataliani, Y.: A new model for learning-based forecasting procedure by combining k-means clustering and time series forecasting algorithms. PeerJ Comput. Sci. 7(2), e534–e534 (2021) 10. Omar, T., Alzahrani, A., Zohdy, M.: Clustering approach for analyzing the student’s efficiency and performance on account of data. J. Data Anal. Inf. Process. 08(3), 171–182 (2020) 11. Rahman, M.M., Kawabayashi, S., Watanobe, Y.: Categorization of frequent errors in solution codes created by novice programmers. SHS Web Conf. 102(7426461), 04014–04014 (2021) 12. Grange, S.K.: Temporal and spatial analysis of ozone concentrations in Europe on account of timescale decomposition and a multi-clustering approach. Atmos. Chem. Phys. 20(14), 9051–9066 (2020) 13. Topta¸s, B., Hanbay, D.: A new artificial bee colony algorithm-based color space for fire/flame detection. Soft. Comput. 24(14), 10481–10492 (2019). https://doi.org/10.1007/s00500-01904557-4 14. Nasiri, A., Omid, M., Taheri-Garavand, A.: An automatic sorting system for unwashed eggs using deep learning. J. Food Eng. 283(1), 110036 (2020) 15. Kim, D.H.: Structural design of an automatic container fixing device for use on container chassis. J. Korean Soc. Manuf. Technol. Eng. 29(1), 59–65 (2020) 16. Sahara, A., Saputra, R.H., Hendra, B.: Object separation system on account of height differences automatically. J. Phys: Conf. Ser. 1807(1), 012017 (2021)

Design and Implementation of Intelligent Traffic Monitoring System Based on IOT and Big Data Analysis Yongling Chu(B) , Yanyan Sai, and Shaochun Li Department of Information Engineering, Yantai Vocational College, Yantai, Shandong, China [email protected]

Abstract. With the rapid growth of urban road traffic, problems such as traffic jams, traffic disruption and environmental pollution are becoming increasingly serious. Intelligent transportation, which can understand traffic conditions in real time and plan traffic routes reasonably, is therefore particularly critical to solving these problems. At the same time, the cost of storage equipment and the technical constraints on analyzing and processing the massive traffic data collected by the background system also call for new technical ideas and means. At present, IOT technology, big data technology and machine learning technology are increasingly mature, and their combination has natural application advantages in the monitoring and management of intelligent transportation. This paper proposes a solution for intelligent traffic condition detection and prediction that combines IOT, big data and machine learning technology. IOT technology is used for the data collection of intelligent transportation, big data technology is used to store the massive data, and machine learning technology is used to judge and predict the operating state of intelligent transportation. A monitoring system for intelligent transportation is designed and implemented, and its availability and effectiveness are verified through experiments and tests. Keywords: IOT · Big data · Machine learning · Intelligent Transportation

1 Introduction

On the basis of intelligent transportation, smart transportation uses new and high technologies such as the Internet of Things, artificial intelligence, cloud computing, big data and the mobile Internet to collect and classify traffic information and to provide real-time traffic information services for traffic managers and pedestrians [1]. Smart transportation uses a large number of data models and adopts data mining and other data processing technologies to realize the systematization, real-time availability, interactivity and universality of traffic information [2, 3]. With greater participation of information technology in traffic planning and application, the traffic industry shows a clear trend towards digitalization and intelligence


[4]. Smart transportation is increasingly important to people's lives and to economic growth. Through a series of digital information technologies such as IOT, big data and artificial intelligence, intelligent transportation realizes the collaborative interconnection of people, vehicles, roads and information, making the transportation process safer, smoother, more convenient and more efficient, and improving the utilization efficiency of transportation resources [5].

2 Common Technologies of Intelligent Transportation

2.1 Internet of Things Application Technology

The IOT is a network that can connect anything to the Internet through RFID, infrared smart sensors, GPS, laser scanners and other sensing equipment to collect information, and that exchanges information and communicates according to agreed protocols, so as to achieve intelligent identification, location, tracking, monitoring and management; that is, the IOT is "the Internet of things" [5]. The mainstream IOT technology architecture takes the ISO/OSI model of the Internet as a reference and constructs the IOT architecture framework through hierarchical division [6]. The typical three-tier IOT technology architecture consists of the perception layer, the network layer and the application layer, as shown in Fig. 1.

Fig. 1. Structural IOT technology system framework

The wireless sensor network is a very important direction and branch of the Internet of Things. It is a comprehensive application of communication technology, sensor technology, information processing technology, embedded technology, Hadoop technology and


other technologies. Through the wireless sensor network, the objects in its coverage area can be sensed and real-time data about the monitored objects acquired; the acquired data can then be processed as required; finally, the processed valid data is sent to the user through the network and stored. This paper uses wireless sensor network technology to collect, transmit and store data [7].

2.2 Big Data Analysis and Machine Learning

Big data refers to massive data sets that usually cannot be processed by ordinary data processing software; the data is huge and complex. Machine learning is an interdisciplinary field: by repeatedly training a model it accumulates experience, and it simulates the analytic function of the human brain to automatically compute and update the algorithm, so as to optimize program performance [8]. Machine learning is an important direction of big data analysis; the two are mutually reinforcing and interdependent, since machine learning needs not only reasonable, applicable and advanced algorithms but also sufficient data [9]. Deep learning is one of the most commonly used branches of machine learning research. It simulates the human brain for analytical thinking, establishes learning methods and mechanisms, and interprets data through deepened learning [10]; it is a kind of unsupervised learning. Three deep learning algorithms are common: the convolutional neural network, the recurrent neural network and the generative adversarial network. Among them, the CNN is a feedforward neural network that includes convolution calculations and has a deep structure; it is the most typical algorithm in the field of deep learning. At present, CNN has become the most classical and commonly used algorithm in big data analysis, especially in model training, pattern classification, intelligent computing and other fields. Because a CNN can take the original image directly as input without preprocessing, it can perform several complex processing steps while improving the speed and efficiency of the algorithm, and thus has been widely used [11].

Fig. 2. CNN structure diagram

As can be seen from Fig. 2, the original picture is first input to the first layer and a convolution operation is executed; the convolution operation is shown in formula (1):

y_j^t = \sum_i x_i^{t-1} \otimes k_{ij}^{t-1} + b_j^t    (1)


where y_j^t is the output of the j-th network node in layer t, x_i^{t-1} is the i-th output matrix in layer t-1, k_{ij}^{t-1} is the convolution kernel connecting the i-th input matrix of layer t-1 with the j-th output of layer t, and b_j^t is the offset of the j-th feature map in layer t. After the convolution operation we obtain a feature map with a depth of 3 in the second layer. A pooling operation is then performed on the feature map of the second layer; the pooling operation satisfies formula (2):

o = \left\lfloor \frac{i - k}{s} \right\rfloor + 1    (2)

where i is the input size, o is the output size, k is the kernel size and s is the stride of each sliding step. After the pooling operation we again obtain a feature map with a depth of 3. Repeating these operations finally yields a feature map with a depth of 5. Finally, these 5 matrices are flattened row by row, concatenated into vectors, and passed to the fully connected layer [12]. The calculation in the fully connected layer realizes the classification function, and its formula is shown in formula (3):

y_j^t = \sum_i W_{ij}^{t-1} \times x_i^{t-1} + b_j^t    (3)

where y_j^t is the output of the j-th network node in layer t, W_{ij}^{t-1} is the weight between the i-th feature in layer t-1 and the j-th neuron in layer t, x_i^{t-1} is the i-th feature value in layer t-1, and b_j^t is the offset of the j-th position in the feature map of layer t.
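The following sketch illustrates formulas (1) and (2) for a single input and output channel; the 6 x 6 input and the averaging kernel are assumptions for demonstration, not the network's actual parameters.

```python
import numpy as np

def conv_output_size(i, k, s):
    """Formula (2): output size o = floor((i - k) / s) + 1 for a k-wide window with stride s."""
    return (i - k) // s + 1

def conv2d_single(x, kernel, bias, stride=1):
    """Formula (1) for one input and one output channel: slide the kernel over x and add the bias."""
    i, k = x.shape[0], kernel.shape[0]
    o = conv_output_size(i, k, stride)
    y = np.zeros((o, o))
    for r in range(o):
        for c in range(o):
            patch = x[r * stride:r * stride + k, c * stride:c * stride + k]
            y[r, c] = (patch * kernel).sum() + bias
    return y

x = np.arange(36, dtype=float).reshape(6, 6)     # illustrative 6x6 input
kernel = np.ones((3, 3)) / 9.0                   # illustrative averaging kernel
print(conv_output_size(6, 3, 1))                 # -> 4
print(conv2d_single(x, kernel, bias=0.0).shape)  # -> (4, 4)
```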

3 Design and Implementation of Intelligent Traffic Monitoring System

3.1 Architecture Design

The design of the smart traffic monitoring system contains three modules: the wireless sensor network, the big data platform and the application layer functions. The big data platform layer includes the application of the machine learning algorithm [13]. The intelligent vehicle is the kernel of the smart traffic system: through the configuration of advanced sensing elements, monitors, actuators and other equipment, computer, modern sensing, information fusion, communication, artificial intelligence and automatic control technologies are intensively applied, and the on-board sensing system and information terminals realize intelligent information exchange among people, vehicles and roads. The overall framework of the intelligent transportation system is shown in Fig. 3.


Fig. 3. Intelligent transportation system architecture

3.2 Design of Wireless Sensor Network Layer

The functional design of the wireless sensor network layer includes the topology of the wireless sensor network, the preprocessing of sensor data, the design of the data storage server, and how the data of the sensor network layer is transmitted to the upper layer [14]. The data transmission relationships of the whole wireless sensor network are shown in Fig. 4.

Fig. 4. Data transmission relationship of the wireless sensor network

From the above figure, we can see that the design of the wireless sensor network layer is divided into three parts, namely the design of the sensor nodes, the design of the wireless gateway and the design of the server.


The core MCU of the wireless sensor node in this paper is the ultra-low-power STM32L051C8T6. The MCU transmits the processed vehicle data and traffic information to the LoRa transceiver module through the SPI bus, and the LoRa transceiver module sends the vehicle data and traffic information to the sink node over the LoRa wireless link, thus completing the collection of vehicle and personnel information in intelligent transportation. The main performance comparison of several commonly used LoRa chips is shown in Table 1.

Table 1. LoRa chip function comparison

Chip model | Frequency band (MHz) | Bandwidth (kHz) | Transmission speed (kbps) | Spread spectrum factor | Sensitivity (dBm)
SX1276 | 137–1020 | 7.8–500 | 0.018–37.5 | 6–12 | −111 to −148
SX1277 | 137–1020 | 7.8–500 | 0.11–37.5 | 6–9 | −111 to −148
SX1278 | 137–525 | 7.8–500 | 0.018–37.5 | 6–12 | −111 to −148
SX1279 | 137–960 | 7.8–500 | 0.018–37.5 | 6–12 | −111 to −148

According to the frequency bands used in China, the LoRa transceiver module in this experiment uses the SX1278 as its main chip, with a spreading factor of 6–12. The communication protocol format of the system is shown in Table 2.

Table 2. Format of communication protocol

Field | Description | Number of bytes
Frame header | AA | 1
Address | 00 for an acquisition node that has not yet joined the network | 1
Command | 01 query, 02 network access, 03 early warning, 04 cyclic data transmission | 1
Data bits | Data to be sent | N
Check bit | Accumulated sum of the address, command and data bits | 2
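A hedged sketch of assembling a frame in the Table 2 format, assuming the check bits are the two-byte accumulated sum of the address, command and data bytes as the table describes; the example payload is invented.

```python
def build_frame(address: int, command: int, data: bytes) -> bytes:
    """Assemble: frame header AA | address | command | data | 2-byte checksum."""
    body = bytes([address, command]) + data
    checksum = sum(body) & 0xFFFF                 # accumulate address, command and data
    return bytes([0xAA]) + body + checksum.to_bytes(2, "big")

# Example: node 0x00 (not yet joined the network) sending a network-access request (02).
frame = build_frame(address=0x00, command=0x02, data=b"\x01\x02")
print(frame.hex(" "))
```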

3.3 Realization of the Machine Learning Model

The machine learning model realization is mainly divided into the preparation of the original data, k-means clustering and labelling of the data, model construction, and model training. The model training uses a large data set of vehicle images to train and optimize the three CNN networks offline. The source images in the training set are 12 x 12 RGB images [15]. After CNN1 training, the resulting images are converted in scale


and used as the input source of CNN2. After CNN2 processing, the filtered vehicle and pedestrian information is output; after CNN3 processing, the speed and accuracy of face classification and recognition are further improved. In this paper NetVLAD pooling is used. This method does not use the down-sampling or global mapping of traditional networks to obtain features, but mainly considers the features output by the last convolution layer. In CNN training, the back-propagation formula for the error is shown in formula (4):

\delta_{x,y}^{l-1} = \sum_{m} \sum_{n} \delta_{x-m,\,y-n}^{l} \cdot w_{m,n}^{rot180}    (4)

The gradient calculation formula is shown in formula (5):

\frac{\partial E}{\partial w_{ab}} = \sum_{i=0}^{N-m} \sum_{j=0}^{N-m} \frac{\partial E}{\partial x_{ij}^{l}} \frac{\partial x_{ij}^{l}}{\partial w_{ab}} = \sum_{i=0}^{N-m} \sum_{j=0}^{N-m} \frac{\partial E}{\partial x_{ij}^{l}} \, y_{(i+a)(j+b)}^{l-1}    (5)

The final loss function is shown in formula (6):

Loss_{class} = -\sum_{i} \left[ y_i^{class} \log \hat{y}_i^{class} + (1 - y_i^{class}) \log (1 - \hat{y}_i^{class}) \right]    (6)

where \hat{y}_i^{class} represents the model's classification output and y_i^{class} the corresponding tag (label) data. The flow of the k-means clustering algorithm is shown in Fig. 5.
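The classification loss in formula (6) is the familiar binary cross-entropy; the short NumPy sketch below uses made-up labels and predictions purely for illustration.

```python
import numpy as np

def bce_loss(y_true, y_pred, eps=1e-12):
    """Formula (6): -sum[ y*log(yhat) + (1-y)*log(1-yhat) ] over the samples."""
    y_pred = np.clip(y_pred, eps, 1.0 - eps)   # avoid log(0)
    return float(-(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred)).sum())

y_true = np.array([1, 0, 1, 1], dtype=float)       # illustrative tag data
y_pred = np.array([0.9, 0.2, 0.7, 0.6])            # illustrative model outputs
print(bce_loss(y_true, y_pred))
```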

Fig. 5. Process of K-Means clustering algorithm


The formula for the sum of squared errors is shown in formula (7):

SSE = \sum_{i=1}^{K} \sum_{b_j \in C_i} D^2(b_j, u_i)    (7)

where b_j is an object in space, u_i is the i-th cluster center vector, and D is the Euclidean distance, defined in formula (8):

D(b_i, b_j) = \left( \sum_{k=1}^{d} (b_{ik} - b_{jk})^2 \right)^{1/2}    (8)

4 Analysis of Experimental Results

In this paper, filtering conditions are added to the k-means clustering algorithm to accelerate it. Let I_{max}^{(j)} = \max_{i=1,2,...,M_j} I_i^{(j)}; if I_{max}^{(j)} / I_{min}^{(j)} < \beta, then the following filtering inequality holds:

a_L^{(j)} \le I_{max}^{(j)} < \beta (I_{min}^{(j)} + \varepsilon) \le \beta (a_s^{(j)} + \varepsilon)    (9)

With this condition the vehicles and personnel in intelligent transportation can be judged effectively and accurately. At the same time, this paper improves NetVLAD pooling and designs a new pooling layer to aggregate "local features"; the calculation formula is shown in formula (10):

V(j, k) = \sum_{i=1}^{N} a_k(x_i) \left( x_i(j) - c_k(j) \right)    (10)

where j indexes the descriptor dimension and k the cluster (category); the j-th component of V corresponds to the j-th component of the input x and of the center c. If x_i belongs to cluster k, then a_k(x_i) = 1, otherwise a_k(x_i) = 0. The improved NetVLAD is embedded into the CNN network as a pooling layer, as shown in Fig. 6.
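A minimal sketch of the aggregation in formula (10), using hard assignments a_k and invented descriptors and centers; the original NetVLAD uses soft assignment, so this is only an approximation of the pooling described here, not the paper's implementation.

```python
import numpy as np

def vlad_hard(X, C):
    """V[j, k] = sum_i a_k(x_i) * (x_i[j] - c_k[j]) with hard assignment a_k."""
    N, D = X.shape
    K = C.shape[0]
    assign = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2).argmin(axis=1)
    V = np.zeros((D, K))
    for i in range(N):
        k = assign[i]
        V[:, k] += X[i] - C[k]                    # accumulate residual to the assigned center
    return V / (np.linalg.norm(V) + 1e-12)        # global L2 normalisation, as in VLAD

X = np.random.rand(8, 4)          # illustrative local descriptors (N=8, D=4)
C = np.random.rand(3, 4)          # illustrative cluster centers (K=3)
print(vlad_hard(X, C).shape)      # -> (4, 3)
```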

Fig. 6. Improved NetVLAD in CNN

Applying the improved training model and algorithm to the intelligent traffic monitoring system greatly improves the detection precision and speed of the vehicle and pedestrian detection model in intelligent traffic. The test results are shown in Fig. 7 and Fig. 8:


Fig. 7. Rendering before model optimization

Fig. 8. Rendering after model optimization

From Fig. 7 and Fig. 8 we can see that, after image preprocessing and batch normalization, applying the improved algorithm to the smart traffic monitoring system improves the accuracy of vehicle detection and the training speed of the model. The smart traffic monitoring system using the improved machine learning algorithm realizes efficient detection and analysis of vehicles, and can effectively predict and alleviate problems such as traffic congestion and traffic accidents.


5 Conclusions

Smart transportation is a significant means of ensuring the sustainable development of urban transportation and is the future trend and direction of people's travel and lifestyle. This paper integrates IOT technology, big data technology and machine learning technology: a wireless sensor network is used to receive and transmit data in intelligent transportation, big data distributed storage is used to store the collected data, and machine learning technology is used to judge and predict the operating status of intelligent transportation. This has guiding significance for the development of intelligent transportation and provides powerful technical support for the building of smart transportation and smart cities.

Acknowledgements. 1. 2021 Shandong Institute of Higher Education Special Topics in Higher Education Research "The Construction of the Monitoring and Evaluation Model of Higher Vocational Teaching Quality Based on Big Data under the Background of 'Double Higher Education' (CX-0006)"; 2. 2022 Shandong Province Key topics of culture and art "Research on the development of regional animation industry cluster supported by VR technology in Shandong animation innovation education"; 3. 2022 annual scientific research planning project of the fifth council of the China Association of Vocational and Technical Education "Research on the construction and application of new forms of teaching materials under the industrial Internet technology system" (ZJ2022B149).

References 1. Atiquzzaman, M., Yen, N., Xu, Z.: Big data analytics for cyber-physical system in smart city. In: Advances in Intelligent Systems and Computing, BDCPS 2019, 28–29 December 2019, Shenyang, China (2020) 2. Sangkertadi, S., Syafriny, R., Wuisang, C., et al.: Influence of landscape material properties on microclimate change in a tropical area with case in Manado City. IOP Conf. Ser.: Earth Environ. Sci. 1007, 012006(2022) 3. Bashir, A.A., Mustafazbey, A.: Modelling and analysis of an 80-MW parabolic trough concentrated solar power plant in Sudan. Clean Energy 6(3), 16 (2022) 4. Hernandez, M.M., Banu, R., Gonzalez-Reiche, A.S., et al.: RT-PCR/MALDI-TOF diagnostic target performance reflects circulating SARS-CoV-2 variant diversity in New York City. J. Mol. Diagnost. 24(7), 738–749 (2022) 5. Nugroho, U., Falah, K.T.: Transportation modelling using PTV Vissim for the adjacent junction in Sampangan Semarang City. IOP Conf. Ser. Earth Environ. Sci. 969(1), 012080 (5pp) (2022) 6. Sinaga, M., Madaningrum, F.R., Siagian, R.T., et al.: Study on waste bank capacity building plan and development strategies in Semarang City. IOP Conf. Ser. Earth Environ. Sci. 896(1), 012082 (8pp) (2021) 7. Samsuri, et al.: Spatio-temporal pattern of urban forest vegetation density, Medan Baru city, Indonesia. IOP Conf. Ser. Earth Environ. Sci. 918(1), 012021 (11pp) (2021) 8. Samourkasidou, E., Kalergis, D.: Tanzimat reforms and urban transformations in Ottoman Port-Cities. Sociol. Stud. 11(6), 14 (2021)


9. Chumachenko, I., Davidich, N., Galkin, A., et al. Information support of modeling of gravity function of employees of city service enterprises. Array. Municipal Econ. Cities 3(163), 165–172 (2021) 10. Makandar, S.D., Alawi, R., Huwaina, A., et al.: Smart restorative materials. Int. J. Sci. Res. 9(6), 1–3 (2020) 11. Alghazali, K., Teoh, B.T., Sam, S.S., et al.: Dengue fever among febrile patients in Taiz City, Yemen during the 2016 war: clinical manifestations, risk factors, and patients knowledge, attitudes, and practices toward the disease. One Health 9 (2020) 12. Sam, R.T., Umakoshi, T., Verma, P.: Probing stacking configurations in a few layered MoS2 by low frequency Raman spectroscopy. Sci. Rep 10(1) (2020) 13. Siregar, R.T., Silitonga, H.P., Tinggi, S., et al.: Development of wastewater treatment (IPAL) communal plant In Village Pardomuan Nauli in the district Laguboti District Toba Samosir with the power of community participation. J. Adv. Sci. 29(6), 5672–5680 (2020) 14. Lv, Z.H., Bian, L.J., Feng, M.J., et al.: Capacity analysis of cable insulation crosslinking production process based on IoT big data. J. Phys. Conf. Ser. 2237(1), 012028- (2022) 15. Aj, A., Sb, B., Ss, C., et al. Assessment of factors affecting implementation of IoT based smart skin monitoring systems. Technol. Soc. 68 (2022)

Simulation of Passenger Ship Emergency Evacuation Based on Neural Network Algorithm and Physics Mechanical Model Dehui Sun1(B) and Muhammad Khan2 1 Department of Basic Teaching, Shandong Jiaotong University, Weihai, Shandong, China

[email protected] 2 Kohat University of Science and Technology, Kohat, Pakistan

Abstract. In recent years, research on the emergency evacuation of passenger ships has attracted much attention. Most emergency evacuation research focuses on public places on land, while research on the emergency evacuation of passenger ships at sea is scarce. Based on a neural network algorithm, the social force model is improved using mechanical properties from physics, and a typical passenger ship evacuation model is established with the AnyLogic simulation software. There are significant differences in passenger assembly time and in the areas where congestion forms under different evacuation modes; therefore, the passenger ship evacuation process should be analyzed in combination with the accident type and severity. This study provides a theoretical basis and decision support for the design and evaluation of the evacuation safety of large passenger ships. Keywords: Neural Network Algorithm · Physics Mechanical Model · Passenger Ship Emergency Evacuation

1 Mechanical Model in Emergency Evacuation
Large ships have a huge volume, a complex structure and a large passenger capacity. Once an accident occurs, it can easily cause heavy casualties and property losses [1]. After a ship accident, two situations are usually faced: safe return to port or abandonment of the ship. In order to effectively evaluate and improve the evacuation safety design of passenger ships, it is necessary to establish a passenger ship evacuation model. Since group behavior is common in the process of personnel evacuation [2–4], many scholars have studied its impact on evacuation. Zheng Xiazhong et al. used the social force model to study the impact of small-group behavior on the evacuation efficiency of a subway station under emergencies; the results show that small-group behavior reduces evacuation efficiency. When a cruise ship fire cannot be avoided, ensuring the safety of passengers becomes the top research priority. To evacuate safely, researchers should first analyze the characteristics and hazards of cruise ship fires. Generally speaking, a cruise ship fire differs from a fire in a land building in the following aspects:
(1) The probability of fire on a cruise ship is higher. Compared with land buildings, cruise ships contain more fire hazards: in addition to dangerous combustibles such as curtains and quilts in the passenger cabins, aging equipment in the engine room, oil leakage from oil pipes, and improper operation by personnel may all cause fires.
(2) A cruise ship fire spreads more rapidly. The cabins and passages of a cruise ship are relatively narrow, and a large number of combustibles gather in a small space. In case of fire, the fire load is greater, the smoke diffuses faster and the temperature rises rapidly.
(3) A cruise ship fire is difficult to extinguish. Cruise ships are usually sailing at sea when a fire occurs; once the fire breaks out, they cannot obtain external rescue and can only rely on their own limited fire-fighting equipment.
(4) The tolerance of cruise ship fire risk is low. Modern cruise ships carry a large number of passengers, and a fire can easily cause heavy casualties, so the acceptable level of fire risk is lower.
Although the hazards of cruise ship fires are great, there are still few studies on cruise ship fires and the associated personnel evacuation. Therefore, this paper studies the diffusion of smoke generated by a cruise ship fire and the evacuation behavior of passenger and crew groups in specific situations, which can further strengthen fire prevention and control during design, construction and navigation and guide passengers to evacuate safely in an emergency; this has important practical significance. There are few studies on the impact of group factors on passenger ship evacuation, especially on the optimization of evacuation routes, evacuation efficiency and congested areas. In order to reasonably plan passenger ship evacuation routes, improve evacuation efficiency and mitigate the impact of congested areas, it is necessary to establish a passenger ship evacuation model that considers group effects.

2 Neural Network Algorithm
A neural network is composed of a large number of interconnected neurons and is highly nonlinear. Its processing capability is determined by the number of neurons, the form of their interconnection, and the inputs and outputs. The neural network is an important technology of intelligent computing: it extends intelligent information processing, can handle complex nonlinear relations and logical operations [5], and provides an effective method for solving many problems. The back propagation neural network [6], also known as the BP neural network, is the most widely used network and the basis for other network applications. It is widely applied in data compression, information classification, action technology, pattern analysis and other fields. It has been proved mathematically that a BP neural network can represent complex nonlinear mappings as long as there are enough neurons in the hidden layer [7–9]. Moreover, through its self-learning ability, a neural network can form a good decision plan by learning from a correct data set. The neural network in this paper is constructed on the OpenNN neural network library, which is mainly used in the fields of deep learning and machine learning and forms network models with different functions according to
the needs of users. It is an advanced analytics software library [10] written in C++, which has outstanding advantages in memory allocation and operation efficiency. In order to maximize efficiency, it is constantly optimized and parallelized (see Fig. 1 and Fig. 2).
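As a hedged illustration of the BP idea described above (a generic sketch, not the OpenNN-based model used in this paper), the following NumPy code trains a one-hidden-layer network by gradient descent; the layer size, learning rate and toy data are assumptions for demonstration only.

```python
import numpy as np

# Minimal one-hidden-layer BP network (illustrative only).
rng = np.random.default_rng(0)

# Toy data: learn the nonlinear mapping y = sin(x) on [-pi, pi] (assumed example).
X = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(X)

n_hidden = 16                              # assumed hidden-layer width
W1 = rng.normal(0.0, 0.1, (1, n_hidden))   # small random non-zero initial weights
b1 = np.zeros((1, n_hidden))
W2 = rng.normal(0.0, 0.1, (n_hidden, 1))
b2 = np.zeros((1, 1))
lr = 0.05                                  # assumed learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = h @ W2 + b2
    err = out - y                          # derivative of the squared error w.r.t. out

    # Backward pass (chain rule through both layers).
    grad_W2 = h.T @ err / len(X)
    grad_b2 = err.mean(axis=0, keepdims=True)
    dh = (err @ W2.T) * h * (1.0 - h)
    grad_W1 = X.T @ dh / len(X)
    grad_b1 = dh.mean(axis=0, keepdims=True)

    # Gradient-descent update.
    W2 -= lr * grad_W2; b2 -= lr * grad_b2
    W1 -= lr * grad_W1; b1 -= lr * grad_b1

print("final MSE:", float(np.mean((sigmoid(X @ W1 + b1) @ W2 + b2 - y) ** 2)))
```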

Fig. 1. Initial structure diagram of neural network [10]

Fig. 2. Perceptron and logical activation function diagram [10]

Before training the neural network, it is necessary to determine the initial values of the parameters in the network. If the initial value of the parameter is too large, the gradient value of the loss function relative to the parameter will be large. Each time the gradient descent is used to update the parameter value, the parameter update amplitude will become larger, causing the loss function to oscillate near the minimum value, resulting in gradient explosion and non-convergence of the neural network. If the initial value of the parameter is too small, the gradient value of the loss function relative to the parameter is very small, and the change of the parameter value each time is also very small, resulting in the slow convergence of the loss function, the disappearance of the gradient and the convergence of the neural network to the local minimum. In general, a small random non-zero value is a good choice. Domestic and foreign scholars have carried out many studies on the emergency evacuation of pedestrians in public areas, and achieved landmark achievements, which mainly focus on the evacuation model, system simulation modeling, and evacuation scene optimization. First of all, in terms of evacuation models, scholars divide evacuation models into two categories: macro evacuation model and micro evacuation model according to

different observation angles. Among the macro evacuation models, Hughes (2000) used partial differential equations from fluid mechanics to analyze the behavior of crowd flow during evacuation; Treuille et al. (2006) proposed a real-time crowd model based on continuum dynamics through simulations of crowds in multiple urban public areas; and Zhang Qian (2014) built a comprehensive multi-exit group evacuation model and simulated group evacuation behavior in the hall of a Hangzhou subway station. Among the micro evacuation models, the social force model and the cellular automaton model are the most widely used.

The training strategy includes two parts: the loss function and the optimization algorithm. The loss function determines the performance of the neural network: it defines the task the network needs to complete and measures how well the network fits the data set. The loss function used in this paper is the normalized squared error (NSE). NSE divides the squared difference between the network output out and the target tar in the data set by the normalization coefficient A. If the result is 1, the neural network predicts the data no better than the mean value; if it is 0, the data has been perfectly predicted. The expression of NSE is:

\mathrm{NSE} = \frac{\sum (\mathrm{out} - \mathrm{tar})^2}{A}   (1)
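To make Eq. (1) concrete, the following sketch computes NSE for a batch of predictions. The normalization coefficient is taken here as the squared deviation of the targets from their mean, which reproduces the stated behavior (mean predictor gives 1, perfect prediction gives 0) but is an assumption, since the paper only calls it a normalization coefficient A.

```python
import numpy as np

def normalized_squared_error(out, tar):
    """NSE = sum((out - tar)^2) / A, cf. Eq. (1).

    A is assumed to be sum((tar - mean(tar))^2), so that predicting the
    mean gives NSE = 1 and a perfect prediction gives NSE = 0.
    """
    out = np.asarray(out, dtype=float)
    tar = np.asarray(tar, dtype=float)
    a = np.sum((tar - tar.mean()) ** 2)   # assumed normalization coefficient
    return float(np.sum((out - tar) ** 2) / a)

# Example: a mean predictor scores ~1, a near-perfect predictor scores ~0.
tar = np.array([1.0, 2.0, 3.0, 4.0])
print(normalized_squared_error(np.full(4, tar.mean()), tar))  # -> 1.0
print(normalized_squared_error(tar + 0.01, tar))              # -> close to 0
```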

3 Model Design and Application
The social force model was first proposed by Helbing in 1995 and is a mainstream evacuation model. First, the social force model is based on Newtonian dynamics: the expression of each force reflects a different motivation or influence acting on a pedestrian, which is then used for modeling and simulation. Because the factors affecting the individual are considered comprehensively, the modeling of individual behavior is reasonable. Second, the model can realistically simulate the evacuation process of a crowd: during evacuation, a single pedestrian is affected by a self-driving force, forces between pedestrians, repulsive forces between pedestrians and obstacles, and attractive forces. At present, the model is widely used to describe the movement of people. It includes the driving force of each pedestrian, the repulsion or attraction f_{ij} between pedestrians, and the repulsion f_{iw} between pedestrians and obstacles. The kinematic equation of person i is given by the following formula:

m_i \frac{dv_i(t)}{dt} = m_i \frac{v_i^0(t)\, e_i^0(t) - v_i(t)}{\tau_i} + \sum_{j (\neq i)} f_{ij} + \sum_{w} f_{iw}   (2)

In the process of group evacuation, the members of a group support each other and expect to keep the same speed as the other members and move towards the same goal. Therefore, this paper modifies the expected speed of group members: during group evacuation, the expected speed of group member i is corrected as

v_{i,\mathrm{group}}^0(t) = \frac{1}{n-1} \sum_{j=1,\, j \neq i}^{n} v_{j,\mathrm{group}}^0(t)   (3)
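The following sketch, a simplified illustration rather than the AnyLogic implementation used in the paper, updates pedestrian velocities with Eq. (2) and applies the group correction of Eq. (3); the repulsion law, masses and relaxation time are placeholder assumptions.

```python
import numpy as np

def group_desired_speed(v0, i):
    """Eq. (3): member i adopts the mean desired velocity of the other members."""
    others = np.delete(v0, i, axis=0)
    return others.mean(axis=0)

def social_force_step(pos, vel, v0, tau=0.5, m=80.0, dt=0.05):
    """One explicit Euler step of Eq. (2) with a toy pairwise repulsion.

    The exponential repulsion and all constants are illustrative assumptions,
    not the calibrated forces of the paper.
    """
    n = len(pos)
    acc = (v0 - vel) / tau                      # driving term (mass cancels)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            d = pos[i] - pos[j]
            dist = np.linalg.norm(d) + 1e-9
            acc[i] += 2.0 * np.exp((0.6 - dist) / 0.08) * d / dist / m  # f_ij / m
    return pos + vel * dt, vel + acc * dt

# Two group members walking toward +x with slightly different desired velocities.
pos = np.array([[0.0, 0.0], [0.0, 0.8]])
vel = np.zeros((2, 2))
v0 = np.array([[1.2, 0.0], [1.5, 0.0]])
v0 = np.array([group_desired_speed(v0, i) for i in range(len(v0))])  # apply Eq. (3)
for _ in range(100):
    pos, vel = social_force_step(pos, vel, v0)
print(pos.round(2), vel.round(2))
```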

In an emergency, passengers usually need a certain response time before taking evacuation action, and this response time differs between passengers because of individual factors and the location of the area they are in. When a group evacuates, all members expect to leave at the same time. Across age groups, adolescents tend to behave recklessly and impulsively when a disaster occurs; middle-aged people are relatively calm and look for ways to get out of danger; and although the elderly have rich experience, they may also panic because of their reduced mobility. In terms of gender, women are more likely than men to feel strong tension and fear, and these emotions are also more likely to surface, while men tend to remain calmer in the face of disasters. In terms of group size, a single pedestrian generally feels that following the crowd improves his or her sense of security.

The available safe egress time (T_ASET) refers to the time from the occurrence of a dangerous situation to the loss of mobility of a pedestrian; mobility may be lost because the pedestrian gradually reaches the tolerance limit of exposure to a hazardous environment or suddenly encounters an unexpected event. The required safe egress time (T_RSET) refers to the time from the occurrence of a dangerous situation until all affected people have been evacuated to a safe area, and it is related to individual evacuation behavior and to the level of evacuation safety management. In practice, the evacuation safety design index is regarded as the available evacuation time, and the required evacuation time is calculated according to the actual situation. Evacuation safety can be guaranteed only when T_RSET < T_ASET. For the available evacuation time, the specification requires that, assuming only one disaster occurs at a time on one line or at one transfer station and its adjacent sections, personnel shall be evacuated from the platform to the safety zone within 6 min. For the required evacuation time, the specification adopts the following formulas:

T = 1 + \frac{Q_1 + Q_2}{0.9\,[A_1 (N - 1) + A_2 B]} \le 6\ \mathrm{min}
T = \frac{Q_1 + Q_2}{0.9\,[A_1 (N - 1) + A_2 B]} \le 4\ \mathrm{min}
T_P = T_{P1} + T_{P2} + T_{P3} + T_{P4} + T_{P5} \le 6\ \mathrm{min}
T_C = T_P + T_{C1} + T_{C2} + T_{C3} \le 6\ \mathrm{min}
T_{RSET} = T_{DET} + T_{WARN} + (T_{PRE} + T_{TRAV})   (4)
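A minimal sketch of the T_RSET < T_ASET check in Eq. (4); the component times below are hypothetical inputs, not values from the paper.

```python
def required_egress_time(t_det, t_warn, t_pre, t_trav):
    """T_RSET = T_DET + T_WARN + (T_PRE + T_TRAV), cf. Eq. (4). Times in seconds."""
    return t_det + t_warn + (t_pre + t_trav)

def evacuation_is_safe(t_rset, t_aset, margin=1.0):
    """Safety requires T_RSET < T_ASET; the optional margin factor is an assumption."""
    return t_rset * margin < t_aset

# Hypothetical example: detection 30 s, warning 20 s, pre-movement 60 s, travel 240 s.
t_rset = required_egress_time(30, 20, 60, 240)
print(t_rset, evacuation_is_safe(t_rset, t_aset=390))   # 350 True
```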

In the process of evacuation, at the place where the traffic capacity decreases, the pedestrian speed decreases and the density increases, resulting in congestion on a certain section of the rear route. Therefore, the design principle of “capacity increase along the evacuation direction” can be followed, and the method of facility capacity comparison can be used to statically predict the possible bottlenecks on the evacuation route. At the same time, due to the subjectivity and randomness of pedestrian behavior, the flow distribution of evacuated people is not strictly in accordance with the carrying capacity, and the generation of bottlenecks depends on the dynamic distribution of people in a specific time and space. Therefore, it is also necessary to find and verify the high probability areas of bottlenecks with the help of multiple simulations.
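As a hedged illustration of the "capacity increase along the evacuation direction" principle and the static facility-capacity comparison described above, the sketch below flags route segments whose capacity drops relative to the upstream segment; the segment list and capacities are invented for demonstration.

```python
# Each segment of an evacuation route with an assumed flow capacity (persons/min).
route = [
    ("corridor L1", 180),
    ("stairway II", 90),      # narrower stairway
    ("corridor L2", 150),
    ("muster station door", 120),
]

def static_bottlenecks(segments):
    """Return segments whose capacity is lower than the preceding one.

    A capacity drop along the evacuation direction marks a candidate bottleneck;
    dynamic confirmation still requires repeated simulation runs.
    """
    flagged = []
    for (up_name, up_cap), (name, cap) in zip(segments, segments[1:]):
        if cap < up_cap:
            flagged.append((name, cap, up_name, up_cap))
    return flagged

for name, cap, up_name, up_cap in static_bottlenecks(route):
    print(f"possible bottleneck: {name} ({cap}/min) after {up_name} ({up_cap}/min)")
```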

4 Analysis of Simulation Results
Based on the drawings of the internal structure, evacuation passageways, number and position of stairs, muster stations and lifeboat positions of a certain cruise ship, the physical model of the cruise cabin is constructed and the AnyLogic personnel evacuation simulation model is established. The schematic diagram, front view and partial view of the model are shown in Fig. 3 and Fig. 4 below [11].

Fig. 3. Schematic diagram of passenger evacuation model in cruise cabin [11]

Fig. 4. Front view of passenger evacuation model in cruise cabin [11]

Evacuation scenario 1 refers to the situation that No. II staircase fails in case of fire. Figure 5 below shows the statistics of the number of evacuees in the passenger cabin of the cruise ship. The red line indicates the number of successful evacuees, and the blue line indicates the number of people not evacuated. It can be seen from the figure that the successful evacuation of 400 passengers and crew members of the cruise cabin took a total of 390 s. Figure 6 shows the evacuation results at each time point in the evacuation process of scenario 1, showing the current position of the evacuees during the current evacuation time. For the convenience of observation, walls, doors and other obstacles are hidden in the figure, and only the evacuees, stairs and emergency exits

are reserved. It can be seen from the figure that 93 people were successfully evacuated when the evacuation time was 98 s. Most of the passengers and crew gathered at No. II staircase. At this time, congestion occurred due to the narrow stairway. A small number of people evacuated by choosing No. I and III stairs; When the evacuation time was 205 s, 220 people were successfully evacuated. The remaining people to be evacuated were concentrated in the No. II staircase. The L1 and L2 floors had been evacuated; When the evacuation time was 300 s, 325 people were successfully evacuated. At this time, most of the cabin personnel had been evacuated. Moreover, the evacuation time of the last person passing through observation point a is 267 s, that of the last person passing through observation point B is 384 s, and that of the last person passing through observation point C is 165 s. To sum up, by comparing the congested area and the duration of the congestion with or without groups, it is concluded that the influence of group factors on the formation time and duration of the congested area is complex, which is mainly related to the position of the waiting members of the group and the response time of the members.

Fig. 5. Assembly time frequency distribution

Finally, as shown in Fig. 7, the first floor consists of cabin 25, cabin 19, cabin 13, cabin 7 and cabin 1. During the emergency evacuation of passengers on a passenger ship, conformity behavior and uneven selection of evacuation channels often occur, and arch-shaped crowds are more likely to form at stairway entrances, causing a sharp increase in density in some areas, which can lead to crowding, stampedes and other incidents during the evacuation of
Fig. 6. Emergency evacuation heat map

Fig. 7. Emergency evacuation ship map

passengers. Therefore, during emergency evacuation, the ship operator should deal with the areas where passenger flow is blocked and conduct the evacuation efficiently. According to Table 1, R² ranges from 0.324 to 0.351, the maximum F value is 138.254, and P is 0.000. In this paper, the panic of pedestrians during evacuation is specifically considered. Although panic accelerates the walking speed of individual passengers, the simulation results of this case show that the resulting increase in crowd density restrains the overall evacuation speed, that is, "faster is slower". Therefore, in case of emergency evacuation, real-time detection and broadcasting of all evacuation routes in the passenger ship is required, so as to achieve the optimal evacuation efficiency.

Table 1. Emergency evacuation data equation

Model summary
Model         R²      F        P
Linear        0.345   138.254  0.000
Logarithmic   0.324   125.402  0.000
Quadratic     0.349   70.078   0.000
Cubic         0.351   46.781   0.000
Compound      0.331   129.515  0.000
Power         0.324   125.515  0.000
Exponential   0.331   129.495  0.000
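The model-comparison entries in Table 1 can in principle be reproduced by fitting each functional form and computing R²; the sketch below does this for polynomial forms on invented data, since the paper does not publish the underlying samples.

```python
import numpy as np

def r_squared(y, y_hat):
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

# Invented sample: assembly time (s) versus crowd density (persons/m^2).
x = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0])
y = np.array([182, 205, 228, 262, 301, 352, 410, 480])

for name, degree in [("Linear", 1), ("Quadratic", 2), ("Cubic", 3)]:
    coeffs = np.polyfit(x, y, degree)          # least-squares polynomial fit
    y_hat = np.polyval(coeffs, x)
    print(f"{name:9s} R^2 = {r_squared(y, y_hat):.3f}")
```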

5 Conclusion Based on the hesitant fuzzy integration operator and acceleration formula, this paper abstracts the acceleration formula of four different groups of people under four different influences, and gives the actual case analysis. In the process of ship fire escape analysis, the interaction relationship between different groups and the interaction relationship based on acceleration were analyzed in detail. At the same time, after collecting new relevant data based on the questionnaire, the new interaction relationship was used to test the change of passenger acceleration in the process of fire emergency evacuation using the formula extracted in this paper. This paper provides a theoretical basis for passenger evacuation in case of fire, so as to establish effective and obvious emergency evacuation measures for ship fire. In this paper, the simulation software anylogic based on the social force model is used to establish the passenger ship evacuation model, and the effects of group factors and different evacuation methods on the passenger gathering time and congestion area of a typical passenger ship are studied. The conclusions are as follows: (1) There is a significant difference in the collection time of passengers under different evacuation modes after the accident, in which the collection time of passengers arriving at the nearest collection station is the shortest, and the collection time of passengers arriving at the collection station with boarding station function through the nearest collection station is the longest. Therefore, when determining the assembly time of passenger ship personnel, it is necessary to determine different evacuation methods according to different types of accidents, so as to obtain a reasonable assembly time. (2) Group behavior can inhibit the efficiency of passengers’ assembly under different evacuation modes, and increase the range of assembly time. At the same time, the

congestion area of passenger ships is the same under the influence of group or not, and the influence of group factors on the formation time and duration of the congestion area is more complex, which is mainly related to the location of the congestion area and the response time of team members. (3) Under the influence of group factors, due to the behavior of small groups waiting for the members of the same group to leave, the response time of the group is the maximum response time of all members in the group. Therefore, passengers will not leave at the same time, and group factors will reduce the duration of congestion.

Acknowledgements. Supported by the Shandong Jiaotong University Scientific Research Fund in 2020, Project Number: Z202010.

References
1. Xie, Q., Wang, P., Li, S., et al.: An uncertainty analysis method for passenger travel time under ship fires: a coupling technique of nested sampling and polynomial chaos expansion method. Ocean Eng. 195, 106604 (2020)
2. Mawson, A.R.: Mass Panic and Social Attachment: The Dynamics of Human Behavior. Routledge (2007)
3. Bartolucci, A., Magni, M.: Survivors' solidarity and attachment in the immediate aftermath of the typhoon Haiyan (Philippines). PLoS Curr. 8 (2017)
4. Sime, J.D.: Affiliative behaviour during escape to building exits. J. Environ. Psychol. 3(1), 21–41 (1983)
5. Cheng, Y., Xubo, Y., et al.: Data-driven projection method in fluid simulation. Comput. Animat. Virtual Worlds 27(3), 415–424 (2016)
6. Hassoun, M.H.: Fundamentals of artificial neural networks. Proc. IEEE 84(6), 906 (2002)
7. Carattin, E.: Wayfinding architectural criteria for the design of complex environments in emergency scenarios. In: Evacuation and Human Behavior in Emergency Situations, Advanced Research Workshop (2011)
8. Kobes, M., Post, J., Helsloot, I., et al.: Fire risk of high-rise buildings based on human behavior in fires, pp. 7–9 (2008)
9. IMO: Revised guidelines on evacuation analysis for new and existing passenger ships (2016)
10. IMO: International Convention for the Safety of Life at Sea. IMO (2009)
11. Qimiao, X.: Simulation research on passenger ship evacuation based on social force model. J. Syst. Simulat. 7, 21–0226 (2021)

A Design for Block Chain Service Platform Jinmao Shi(B) Information and Telecommunication Branch, Inner Mongolia Power (Group) Co. Ltd., Huhehaote, Inner Mongolia, China [email protected]

Abstract. This paper presents a design for a blockchain service platform applicable to enterprises: a decentralized, distributed, tamper-proof and traceable trusted data recording platform that supports innovative business models with multi-subject participation and can be used in a weak-credit environment. The platform supports multi-subject participation and can store data from different businesses uniformly on the blockchain; the data on the chain can be queried whenever extraction and verification are required. After setting the design objectives, the business requirements and functional requirements are analysed as the basis for the design. The design scheme comprises four modules, including access management, operation management and system management, and the key technology of data uploading is described. The usage of computing resources in IaaS is measured in floating-point operations per second (FLOPS) for categories such as load balancing, database, database cache, message queue, object storage, blockchain application and blockchain node. Keywords: Management Information System · Distributed Bookkeeping · Blockchain Platform

1 Introduction
Blockchain is a new type of data storage method jointly constructed by integrating decentralized distributed networks, cryptography, smart contracts and other technologies, and it is characterized by data privacy, transparency, tamper resistance and traceability [1].
1.1 Overview
Some researchers focus on consensus algorithms, cross-chain, sub-chain and other underlying technologies; other researchers focus on intermediate-layer technologies such as hash feature locking, distributed private key control, and authorized access to private data; and industry focuses on application-layer key technologies such as distributed applications and smart contracts [2]. Blockchain has been widely applied in various fields, such as entity identity authentication, notarization and confirmation, voting, finance, e-commerce (B2B and B2C), healthcare, supply chain management, copyright protection, government services, public welfare services, energy and entertainment [3].

1.2 Blockchain Features
Blockchain establishes a consensus system through cryptography, verifying the uniqueness of values by adding time-stamped hash features to the data. It is a distributed system: the blockchain network is organized through a distributed network so that all nodes collaborate to maintain the system [4], and the network is secured by cryptography. Smart contracts can computerize business rules and program the information system: the user signs the contract matter with a private key, and the business rules must be satisfied in the information system before it invokes the smart contract to execute the automated operation [5]. In the blockchain network each node has equal rights, each node is a unique unit of the whole chain, and all nodes must abide by the same rules. In the design of the blockchain, individuals involved in the chain do not need special authentication; as long as they participate in matters on the chain, they are recorded as a unit in the blockchain data chain visible to the whole chain [6].
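To illustrate the time-stamped hash linking described above (a generic sketch, not the platform's actual ledger format), the following code chains records by including each block's predecessor hash in its own hash, so that any tampering breaks verification.

```python
import hashlib
import json
import time

def make_block(data, prev_hash):
    """Create a block whose hash covers its payload, timestamp and predecessor."""
    block = {"data": data, "timestamp": time.time(), "prev_hash": prev_hash}
    payload = json.dumps(block, sort_keys=True).encode("utf-8")
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

def verify_chain(chain):
    """Recompute every hash and check the links between consecutive blocks."""
    for i, block in enumerate(chain):
        body = {k: block[k] for k in ("data", "timestamp", "prev_hash")}
        payload = json.dumps(body, sort_keys=True).encode("utf-8")
        if hashlib.sha256(payload).hexdigest() != block["hash"]:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = [make_block("genesis", prev_hash="0" * 64)]
chain.append(make_block({"order": "PO-001", "amount": 1200}, chain[-1]["hash"]))
print(verify_chain(chain))                               # True
chain[1]["data"] = {"order": "PO-001", "amount": 9999}   # tamper with a record
print(verify_chain(chain))                               # False
```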

2 Design Objectives Based on the blockchain platform, carry out the exploration of “blockchain + business” application model, transform the current business system and explore innovative business. Establish the underlying infrastructure of blockchain, provide secure and controllable blockchain access capability to meet the up-linking needs of different business lines of enterprises, and build modules containing blockchain engine, block management, etc. [7]. It provides tamper-evident blockchain records for businesses involving multiple entities such as internal and external units in supply chain management, financial business, legal business, etc., and guarantees the safety and reliability of data. [8]. To provide technical reference for the subsequent construction of blockchain bottom chain underlying facilities and blockchain service platform [9].

3 Demand Analysis 3.1 Business Requirements The business requirements could be collected by following parts. That is shown by following Fig. 1. Supply chain business on the chain demand. By realizing cross-chain interaction and data sharing through specific blockchain with financial enterprises, administrative organs and quality inspection agencies, it can provide a panoramic view of supply chain links such as orders, transportation, sales, quality evaluation, etc., realize the whole process of supply chain with credible evidence and smart contracts, and realize services such as dynamic supply chain traceability. Blockchain-based supply chain management can solve the problems of asymmetric information of multi-body transactions and provide real-time control of the status of each link of the supply chain [10].

Fig. 1. The business Frame of Platform for Block Chain

Demand for financial business on the chain. The future construction of “one main chain + multiple cross-chain” ledger, innovative financial sharing and collaboration model, the key financial information such as transaction data, electronic contracts, fundraising, investment behavior, etc., will be uploaded to the chain and stored as evidence to achieve a credible record of all financial-related data, which can realize financial control and financial operation monitoring and effectively control risks [11]. The demand of legal business on the chain. The blockchain platform is used to dock with the legal blockchain of notary institutions, appraisal institutions, law firms and other units to solve the problem that electronic evidence is easy to tamper, easy to forge and easy to disappear. The data such as transaction record data, electronic contract master, intellectual property proof, etc. will be stored on the chain to realize the whole process of electronic data storage, the whole chain bookkeeping, and the whole node visualization. It will shift the legal risk prevention and resolution gateway from “post-facto evidence collection” to “synchronous evidence collection” [12]. Data business up-linking requirements. For the new data sharing business, there are problems such as high risk of data leakage and low data value transformation. Using the technical characteristics of blockchain technology in tamper-proof and multi-node sharing, a credible shared data ledger is created, and the platform is used to carry out business exploration in data interaction and data sharing [13]. 3.2 Functional Requirements Underlying blockchain infrastructure requirements. The construction of the underlying blockchain technology platform needs to be carried out, and the platform provides secure and controllable access to innovative blockchain applications to achieve rapid development of blockchain applications. It contains specific requirements such as blockchain engine and block management. Operation management module requirements. Through blockchain + visualization technology, see the data information stored in the entire blockchain bottom layer in real time, including block information, transaction information, contract information,

account information, etc. Realize blockchain on-chain business scenarios, blockchain data real-time monitoring and control. Access management and cross-chain docking requirements. In addition to building its own blockchain network and adding the upstream and downstream of the business to the system for common witnessing, it is also necessary to introduce the judicial chain provided by external third-party authorities or the Internet Court to increase the external third-party witnessing and increase the credibility of the business as a whole, and after integrating the judicial chain, it can provide a reliable evidence chain when judicial tracing is subsequently conducted. Cross-chain docking between multiple chains is realized through services such as deposition preservation and cross-chain connection.

4 Design Scheme
4.1 Overall Design
The design is based on IaaS and PaaS, and the platform itself is provided as SaaS, as shown in Fig. 2 below.

Fig. 2. The overall design architecture of platform

The access management module encrypts and stores data such as pictures and manuscripts involved in the process of chaining business systems through blockchain technology, solidifies the format, content, ownership information and storage time of electronic data in real time, generates data fingerprints and stores them on the chain. After storage, it is necessary to use blockchain preservation service for management. It also provides external blockchain communication, cross-chain gateway registration and event management services, and can monitor and manage the overall access according to specific business needs.
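A hedged sketch of the fingerprinting step performed by the access management module: the file's format, content digest, ownership and storage time are solidified into a record whose SHA-256 fingerprint would then be written to the chain. The field names and the final uplink call are assumptions for illustration.

```python
import hashlib
import json
from datetime import datetime, timezone

def build_evidence_record(file_bytes, file_format, owner):
    """Solidify format, content, ownership and storage time, and derive a fingerprint."""
    record = {
        "format": file_format,
        "content_sha256": hashlib.sha256(file_bytes).hexdigest(),
        "owner": owner,
        "stored_at": datetime.now(timezone.utc).isoformat(),
    }
    serialized = json.dumps(record, sort_keys=True).encode("utf-8")
    record["fingerprint"] = hashlib.sha256(serialized).hexdigest()  # value to store on-chain
    return record

record = build_evidence_record(b"contract scan ...", "pdf", "purchasing-dept")
print(record["fingerprint"])
# The fingerprint (not the raw file) would then be submitted to the chain, e.g.
# chain_client.store(record["fingerprint"])  -- hypothetical uplink call.
```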

The operation management module mainly manages different businesses that access and use blockchain and provides data analysis tools. Through the operation management module, users can achieve credible access to business terminal identity and cross-domain identity authentication management, provide the generation of identity credentials for identity authentication blockchain and the full lifecycle management of blockchain credentials, and provide digital identity authentication based on blockchain to ensure the security of data on the chain.. Through the authorization management specification of blockchain service management platform, the application parties that meet the access specification are authorized to access by means of keys, and the application access and authorization management provides functions such as adding, deleting and authorizing applications. The system management module is mainly for management purposes. After users log in through the portal system authentication, they see different business contents according to the permission configuration [14]. 4.2 Access Management Module Access party creation. The system will check the uniqueness based on the authentication information and automatically generate the corresponding platform ID and KEY for the subsequent chain operation, while the system automatically generates the blockchain transaction key and can download the key. Support search function, you can search by the application account address and the name of the uploading organization, and view the application account details, registration time, authentication time, uploading data volume and other information from the perspective of the uploading body. Cross-chain gateway registration management. The cross-link gateway registration management module provides functions such as “cross-link port management”, “crosslink service registration” and “cross-link connection service”. (1) Cross-link port management will manage all external blockchain service interfaces, with functions of adding external block link port, deleting external block link port, and disabling/enabling external block link port. (2) The function of cross-chain service registration is to register the business system with external blockchain interaction requirements, and it can add, enable and disable cross-chain for the business system. (3) Cross-chain connection service is the core part of the cross-chain module, providing cross-chain communication service. When there is a demand for external chain docking and integration, it is necessary to dock and adapt through the cross-chain connection service, which will unify the integration of external link port. External blockchain management. External blockchain link port service management, including external blockchain creation, modification, enabling, disabling, interface configuration and other functions. The blockchain account is created according to the blockchain name, blockchain URL, blockchain AK, blockchain SK and other information, and the functions of modifying, disabling and enabling the blockchain account information are supported. GET, POST), request type, etc., while supporting the interface connection test. Cross-chain management of the accessing party. The business application account that conducts cross-chain business is associated with the external blockchain account, and the related application configuration (on-chain business field data) is carried out,

and the business application account can also be enabled or disabled for the related cross-chain business operation. When the external chain account is associated with the application account, the system will automatically generate the corresponding ID and KEY of the external chain for subsequent external chain on-chain operations. Cross-chain event center management. Cross-chain event center manages the behavioral events of cross-chain communication of all business systems, including cross-chain business, cross-chain time, cross-chain data input, cross-chain data response, cross-chain abnormality monitoring, cross-chain data statistics, communication components and other functions. 4.3 Operation Management Module Application configuration. The administrator creates relevant templates through application configuration (templates are provided to business application access parties for selection), and can also modify, add or delete the configuration of relevant templates. Users can call the relevant configuration through the interface to standardize the user configuration application and conform to the system configuration template. Application Center. Users can create applications according to the built-in scenario application templates (built-in uplink application templates to regulate uplink data), and support business access parties to download templates for application, and business parties can also customize applications for use. Application management. The administrator manages all the applications on the blockchain through the blockchain application management function, and supports auditing the applications. The audited applications can be deactivated and regenerated to ensure the security of business applications. Certificate management. Through the application account, the user can upload the relevant business according to the upload specification. Users can add certificates, fill in relevant information and manually upload them. The interface will display the information of manually and automatically uploaded certificates, and the user can query the relevant certificate information by certificate number and upload time. Data display. The data display provides the function of displaying the data of blockchain uplink, which includes the block height, number of transactions, number of resident applications, number of users, number of certificates, number of resident organizations, number of underlying nodes of blockchain, operation indexes (CPU, memory, disk monitoring data, etc.) of blockchain nodes (computing and storage resources) from the perspective of platform services. The page provides an at-a-glance display of the amount of data on the chain everywhere. Proof of uploading. After the user has uploaded the chain, the platform will generate the proof of uploading after the successful block verification of the relevant information returned by the uploading chain. The proof of uploading contains key blockchain information such as uploading number, uploading data, transaction hash, height of the block where it is located, corresponding hash of the block where it is located, business application to which it belongs, name of the uploading organization, and uploading timestamp. Operation Log. The operation log interface shows the information of all operations in the system, including source, operation account, access party name, operation type,

IP address, operation parameters and operation time. Users can query the operation information by operation account, access party name, source, operation type and IP address. Block management module. The block management module mainly manages the underlying main chain and blocks. All transactions on the chain are packaged into block chain data structure through consensus algorithm and will be stored in the block chain nodes. Main chain management. Blockchain management shows the status and content of all nodes on the blockchain of the system, such as: node address, public key, IP port, node status, node location, configuration information and creation information, and can monitor the survival status of the blockchain network. Administrators can modify and delete the relevant nodes, and download the corresponding configuration of the blockchain for direct deployment. At the same time, users can query the node address to check the node related information. Smart contract management. Meet the needs of business systems for smart contracts, including business smart contract view, creation, invocation, contract disable, contract destruction and other functions, business systems through the on-chain API or on-chain SDK for business smart contract invocation. Block Publicity. This module provides block chain out block information query display function. It contains information such as block height data, block hash data, block release time data, and the number of transactions contained in the block. And the block can be retrieved according to the block height and block out time. Block details. This module provides block details, mainly displaying block height, block hash data, number of transactions, outgoing block node data, outgoing block account data, outgoing block time data, hash data of the previous block, transaction fee data, transaction return root, status root, transaction root and other data. Transaction Publicity. This module provides blockchain transaction information query display function. It contains the information of the on-chain transaction number, the location of the block, the transaction hash, the application account, the application to which it belongs, the transaction time and so on. And the transaction information can be retrieved according to the transaction hash, the uplink number, the application account and the application to which it belongs. Transaction details. This module provides transaction details, mainly displaying data such as transaction number, transaction hash, location block height, location block hash, belonging application, transaction account, target account, transaction institution, transaction timestamp, and upload data. Support to display the specific contents of the on-chain deposition. Block generation and publishing. Satisfy the monitoring of the generation of preparatory blocks after the new blocks are generated. Enables administrators to realize dynamic real-time monitoring of the construction of preparatory blocks. Administrators can view the height, hash and other information of the prepared blocks on this page. Block status management. This function mainly provides block status query service to the outside world. Blockchain administrators can view the operational status of blocks including block hash data, parent hash block height data, timestamp and time used for

generation and other information data through this page. Block status monitoring ensures smooth block generation. Block validation. This module provides text validation and file validation functions. As a user to verify whether the text or file has been chained in the depository preservation service, if the chain is successful, the depository information will be returned; the platform uses sha256 to generate the data summary, if there is doubt about the generated summary, you can use any sha256 to calculate the data summary to see if it is consistent; when the user has successfully chained the depository, but the query verification has no results, consider whether the data format is different. Changing any character will lead to inconsistency of the generated data summary. 4.4 System Management Module Menu management. The system administrator can manage and maintain the system menu, including menu name content, icon display, sorting method, authority identification features, component path topology, state data, creation time data, etc. Queries can be made according to the menu name and status. Support menu addition, modification and deletion. Parameter management. System administrator can manage and maintain system parameters, including parameter master chain, parameter name, parameter chain name, parameter chain value, system built-in, notes, creation time. It can be queried by parameter name, parameter chain name, parameter built-in and creation time. It supports regular operations such as adding, modifying, deleting, checking, exporting and clearing cache of system parameters. Organization management. Display the organization information synchronized from the portal system and support the system administrator to manage and maintain the organization. Queries can be made according to conditions. Support operations such as adding, modifying and deleting. Role management. The system administrator can manage and maintain roles, including role number, role name, authority character, display sort, status, and creation time. It can be queried by role name, authority character, status, and creation time. Support the operation of adding, modifying, deleting, exporting, and switching role status of roles. Notification announcement. System administrator can manage and maintain notification announcements, including serial number, announcement title, announcement type, status, creator, and creation time. It can be queried by announcement title, announcement type, and creator.

5 Conclusions
5.1 Key Technologies
Data uploading. The process of data uploading needs to complete, in turn: the application-side interface call, middleware verification, uplink information verification, uplink information caching, joining the uplink task queue, calling the middleware smart contract interface, and the interface callback. Application-side interface dispatching converts business data into a middleware-readable data format and calls the middleware
uplink transaction interface; middleware signature verification performs signature checking and encryption/decryption for the middleware application programming interface (API); uplink information verification checks whether the uplink information meets regulatory requirements and whether uplink transactions reach the threshold; uplink information caching saves the uplink information in the middleware database; joining the uplink task queue transforms the uplink request into a task in the uplink queue; the middleware smart contract interface call writes data to the blockchain by executing smart contract methods; and the interface callback returns transaction status information from the middleware.
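A simplified sketch of this uploading pipeline (verification, caching, queueing, contract call, callback); the class name, the contract client and the stand-in regulatory check are placeholders assumed for illustration only.

```python
import hashlib
import queue

class UplinkMiddleware:
    """Toy middleware mirroring the steps described above (illustrative only)."""

    def __init__(self, contract_client):
        self.contract_client = contract_client   # assumed smart-contract client
        self.cache = {}                           # stands in for the middleware database
        self.tasks = queue.Queue()                # uplink task queue

    def submit(self, app_id, business_data: bytes):
        # 1. Application-side call: convert business data into a readable record.
        record = {"app_id": app_id, "digest": hashlib.sha256(business_data).hexdigest()}
        # 2. Verification: a stand-in regulatory/threshold check.
        if not record["digest"]:
            raise ValueError("uplink information rejected")
        # 3. Cache the uplink information, then 4. enqueue the uplink task.
        self.cache[record["digest"]] = record
        self.tasks.put(record)
        # 5. Smart contract call and 6. callback with transaction status.
        task = self.tasks.get()
        tx_hash = self.contract_client(task)
        return {"status": "on-chain", "tx_hash": tx_hash}

fake_contract = lambda task: hashlib.sha256(task["digest"].encode()).hexdigest()
mw = UplinkMiddleware(fake_contract)
print(mw.submit("finance-app", b"electronic contract #42"))
```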

5.2 The Usage of FLOPS
The usage of FLOPS is based on the business load, as shown in Table 1.

Table 1. The values of usage of FLOPS in the IaaS during operation

Service name             LB     DB     CFDB   MQ     OS     ABC    NBC    Cinte  CIntr
T-FLOPS in 1 instance    0.086  0.688  0.172  0.172  0.043  0.688  0.344  0.172  0.172
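A small sketch showing how the per-instance figures of Table 1 could be aggregated into a total IaaS compute budget; the instance counts are hypothetical.

```python
# T-FLOPS per instance, taken from Table 1.
tflops_per_instance = {
    "LB": 0.086, "DB": 0.688, "CFDB": 0.172, "MQ": 0.172, "OS": 0.043,
    "ABC": 0.688, "NBC": 0.344, "Cinte": 0.172, "CIntr": 0.172,
}
# Hypothetical deployment sizes (number of instances per service).
instances = {"LB": 2, "DB": 2, "CFDB": 1, "MQ": 2, "OS": 1,
             "ABC": 3, "NBC": 4, "Cinte": 1, "CIntr": 1}

total = sum(tflops_per_instance[s] * n for s, n in instances.items())
print(f"estimated total compute budget: {total:.3f} T-FLOPS")
```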

Advances in design and construction technologies are enabling the wide use of blockchain, and blockchain is increasingly becoming part of the infrastructure of modern companies. We can conclude that the proposed design is suitable for blockchain services and that the platform is valuable for its users.

References
1. Michel, R., Suchithra, R.: ADOBSVM: anomaly detection on block chain using support vector machine. Measure. Sens. 24, 100503 (2022)
2. Naga Rani, B.V., Visalakshi, P.: Block chain enabled auditing with optimal multi-key homomorphic encryption technique for public cloud computing environment. Concurr. Comput. Pract. Exp. 34(22) (2022)
3. Sugitha, G., Solairaj, A., Suresh, J.: Block chain fostered cycle-consistent generative adversarial network framework espoused intrusion detection for protecting IoT network. Trans. Emerg. Telecommun. Technol. 33(11) (2022)
4. Priya, J., Palanisamy, C.: Novel block chain technique for data privacy and access anonymity in smart healthcare. Intell. Autom. Soft Comput. 35(1), 243–259 (2023)
5. Raja, L., Periasamy, P.S.: A trusted distributed routing scheme for wireless sensor networks using block chain and jelly fish search optimizer based deep generative adversarial neural network (Deep-GANN) technique. Wireless Pers. Commun. 126(2), 1101–1128 (2022)
6. Pandey, R.K., Agrawal, V.K.: Block chain technology in digital accounting. Int. J. Eng. Manag. Res. 12(3), 201–204 (2022)
7. Ranjith Kumar, S., Karthic, R.M., Nandhini, N., Kaliappan, P.L.: Block chain and edge computing base design secured framework for tender allocation. ECS Trans. 107(1) (2022)

8. Akilandeswari, R., Malathi, S.: Design and implementation of controlling with preventing DDOS attacks using bitcoin by Ethereum block chain technology. J. Transp. Secur. 15(3–4), 281–297 (2022)
9. Arulprakash, M., Jebakumar, R.: Towards developing a block chain based advanced Data Security-Reward Model (DSecCS) in mobile crowd sensing networks. Egypt. Informat. J. 23(3), 405–415 (2022)
10. Mahapatra, S.N., Singh, B.K., Kumar, V.: A secure multi-hop relay node selection scheme based data transmission in wireless ad-hoc network via block chain. Multim. Tools Appl. 1–31 (2022)
11. Govindasamy, C., Antonidoss, A.: Hybrid meta-heuristic-based inventory management using block chain technology in cloud sector. Int. J. Ad Hoc Ubiquit. Comput. 41(3), 147–169 (2022)
12. Chaoxian, X.: Research on online copyright protection in the perspective of blockchain technology. Electr. Intellect. Prop. Rights 11, 109–116 (2019). (In Chinese)
13. Intelligent Mutual Law Group, School of Law, China University of Metrology: Problems, challenges and countermeasures of the development of internet courts - taking Hangzhou internet court as an example. The Times Rep. 8, 96–97 (2019). (In Chinese)
14. Xuan, J., Zheng, S., Lv, Z., Du, Y., Pan, P.: Research and application of power grid control system based on blockchain. In: 4th International Conference on Electrical, Automation and Mechanical Engineering (EAME2020), pp. 400–411 (2020)

Resource Evaluation and Optimization of Wireless Communication Network Based on Internet of Things Technology Xin Yin(B) , Yong Yuan, Ruifeng Mo, Xin Mi, and Wenqiang Li Northern Institute of Automatic Control Technology, Taiyuan 030006, Shanxi, China [email protected]

Abstract. With the rapid development of 5G communication systems and the continuous improvement of wireless access technology, more and more people can enjoy the convenience of wireless communication. How to use limited wireless resources to meet growing service demands has become a common concern of researchers and mobile network operators at home and abroad. The purpose of this paper is to evaluate and optimize wireless communication network resources and to improve the spectrum utilization efficiency of a multi-user cognitive wireless power supply communication network from the perspective of optimizing its time slot structure. By introducing Internet of Things technology, the system time slot structure is adjusted and optimized to improve the spectrum utilization efficiency of the wireless power supply communication network, so as to increase the capacity of the cognitive wireless power supply communication network. Judging from the test results under outdoor mobile conditions, the CS64Kbps video call drop rate decreased from 9% to 1%, the call completion rate increased from 92% to 99%, the call holding time exceeded 30 min in 100% of cases, and the attach success rate increased from 99% to 100%. Keywords: Internet of Things Technology · Wireless Communication Network · Resource Assessment · Resource Optimization

1 Introduction
With the full penetration of wireless communication services into all walks of life in human society, the wireless operation market continues to show a rapid growth trend. Especially with the commercial launch of 5G, the amount of global mobile data and the number of terminals connected to the IoT will explode. The prosperity of the mobile communication market is inseparable from the support of sufficient spectrum resources, and a wireless communication network without available spectrum is empty talk [1]. Therefore, with the vigorous development of the wireless communication industry, the existing licensed spectrum will become more and more crowded, and the cognitive reuse of spectrum resources is an important technical means to deal with the shortage of spectrum resources. In addition, according to the different application scenarios of wireless
communication, combined with the characteristics of ground, air, and different task drivers, it is very important to research and design the best spectrum resource utilization scheme to realize the efficient reuse of precious resources to improve the performance of wireless communication systems [2]. Scholars at home and abroad have achieved a lot of research results on the evaluation and optimization of wireless communication network resources. Blocks included in the Sisavath C model include breakout board blocks, port blocks, and APP blocks. The main function is: the computer can display the temperature and humidity data collected by the assembly board through the browser, and control the LED switch state of the assembly board through the browser to display the collected temperature and humidity data. Visit the website. Meeting with husband at the same time. The port board controls the on/off state of the LED light through the application [3]. Almomani D We created a new Internet of Things (IoT) system focused on providing security through low cost, low battery, high speed systems. Regarding information management systems, it is designed to protect remote privacy and manage data collected through our systems. This work aims to develop a proactive system to manage most of the technical barriers in farms and households, such as dealing with farm management and home security systems, as well as cattle pens, rainwater, irrigation systems. As well as agriculture, food supplement programs and more. All these systems. Using a connected database, Infrared (IR). The system is used for monitoring. They send all retrieved information for maintenance. The Arduino will be used to program the system to keep the price at the desired average level [4]. Quionez Y proposes two architectures to enable long-distance communication between electronic devices and mobile devices using GSM/GPRS communication services and Twitter social network. This development is designed to manage adequate and healthy food for dogs, providing the required portion of food according to the dog’s daily energy needs. A nutritional analysis is also performed to calculate the proportion of daily healthy and balanced food based on daily energy requirements, taking into account different factors such as the dog’s size, breed and weight. Essentially, an electronic device has two parts: On the one hand, the electronic design is the Arduino board, the Sim900 module that sends and receives text messages, and the ESP8266 Wi-Fi serial transceiver module that allows the creation of the internet [5]. In this paper, the application of the Internet of Things technology to the evaluation and optimization of line communication network resources has certain innovative significance. Based on the existing research, this paper will “open source and reduce expenditure” for various resources to further improve the capacity of cognitive wireless power supply network. Specifically, in terms of open source, by adjusting the time slot structure of the spectrum opportunistic wireless power supply communication network, on the basis of overcoming the interference of dedicated RF source energy transmission to the main user, the energy collection capability of passive IoT terminals is enhanced; On the one hand, through the adjustment and optimization of the time slot structure of the wireless power supply communication network based on spectrum cooperation, the resource consumption of the secondary user assisting the primary user communication is reduced. 
Therefore, exploring and researching the wireless communication network resource


evaluation and optimization problems and methods has important research significance and application value.

2 Research on the Evaluation and Optimization of Wireless Communication Network Resources Based on Internet of Things Technology 2.1 Practical Application of IoT Technology (1) The field of smart city construction The main purpose of smart city construction is to improve human performance and facilitate production and life. At present, it is mainly displayed through 3D graphics system. It is based on urban infrastructure management, security monitoring, movement of people and vehicles, and alerting work orders. Urban components, public safety, geology and hydrology, etc. Among them, infrastructure management includes corresponding detailed information, operating status and alarm level, etc., and displays the overall statistics of urban components, so that the administrative divisions can maintain linkage [6, 7]. (2) Communication industry field As customers need sufficient bandwidth to support timely communication of corresponding devices in the use of cable TV, cellular data and related services, China Mobile, AT&T and equipment manufacturer Huawei are continuing to carry out IoT research in the communications industry. Therefore, many telecommunications companies want to identify themselves as data pipeline operators and new IoT product manufacturers, and have taken many countermeasures at the same time [7, 8]. (3) Smart home field Due to the continuous maturity of the Internet of Things technology, the integration of cross-domain and cross-industry technologies has become a reality, and many families have gradually realized the informatization of their lives through smart homes. By applying technologies such as the Internet of Things, big data, cloud computing, artificial intelligence and wireless communication in the smart home, Interconnect provides tools and services such as lighting control, home appliance control, indoor remote control, and public and robbery. Reminder: The concept of smart home IoT continues to deepen. People’s hearts. Obviously, in addition to traditional lifestyle services, smart homes can also provide more flexible information interaction and save transportation costs [9, 10].


2.2 Optimization of Wireless Communication Network Resources

In different contexts in the field of communication networks, the word "optimization" has different connotations, which are described in detail below. A plan is a systematic, documented set of pre-established assumptions and actions to achieve a desired goal. The network design of a mobile communication system is to propose, in advance, a set of network construction schemes with the lowest cost that achieve the goals of communication quality, service area and user capacity, organized according to business needs. It can be seen that network configuration is completed during the network construction phase. The goal of wireless network design is to build a wireless network with the largest coverage and the largest capacity at the lowest cost while ensuring the quality of service [11, 12]. Wireless network design is highly valued, especially by network operators. Network design and planning are required in the initial stage of mobile network construction, during the construction process and during network expansion. The network optimization of a mobile communication system is to adjust network parameters and settings under the condition of the available network resources, so as to improve network performance and better meet the needs of users. It can be seen that network optimization is carried out during the network operation phase [13, 14].

2.3 Theoretical Model of Resource Optimization

(1) Convex optimization. The convex optimization problem plays a very important role in solving many practical optimization problems, including wireless network resource management and control, because the local optimal solution of a convex problem is also the global optimal solution, and it is unique. Such problems can be solved quickly using efficient algorithms or commercial software tools, for example via Lagrangian duality or the KKT conditions [15, 16].
(2) Game theory. Game theory is a theoretical and methodological system for studying rational strategic choices by decision-making entities with conflicting interests; it is also known as the theory of games. Generally speaking, game theory involves several concepts such as the participating entities, the feasible strategy combinations, the specific utility functions, the known information sets and the game equilibrium goal; among them, the participants, strategy combinations and utility functions are the most basic elements. Determining the game equilibrium through the analysis of game strategies is the research goal, and the most important research problems are the existence and uniqueness of the game equilibrium.
(3) Machine learning and artificial intelligence. In recent years, a large number of studies have introduced machine learning technology into wireless network resource management and control. For wireless networks, problems such as dense channel access, power allocation and interference management, user association, cell selection and handover management, harmonious coexistence of multiple services, wireless coverage expansion, energy management, real-time processing and ultra-reliable low-latency communication have traditionally been handled by optimization methods that take into account instantaneous channel state information (CSI) and user quality-of-service (QoS) requirements. However, since the resource allocation problem is usually not convex, the solutions obtained by traditional techniques are not globally optimal, and the problem may not be solvable in real time either [17].
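To make the convex-optimization viewpoint of Sect. 2.3 concrete, a classical example is water-filling power allocation over K parallel channels, whose globally optimal solution follows in closed form from the KKT conditions. The sketch below is a minimal illustration; the channel gains, noise power and power budget are assumed values and are not taken from this paper.

```python
import numpy as np

def water_filling(gains, total_power, noise=1.0):
    """Water-filling power allocation maximizing sum_k log2(1 + g_k * p_k / noise).

    From the KKT conditions, the optimum has the form p_k = max(0, mu - noise / g_k),
    where the water level mu is chosen so that the powers sum to the budget.
    """
    inv = noise / np.asarray(gains, dtype=float)
    lo, hi = 0.0, float(inv.max() + total_power)
    for _ in range(100):                      # bisection on the water level mu
        mu = 0.5 * (lo + hi)
        if np.maximum(0.0, mu - inv).sum() > total_power:
            hi = mu
        else:
            lo = mu
    return np.maximum(0.0, 0.5 * (lo + hi) - inv)

# Four channels with assumed gains and a unit power budget.
p = water_filling([2.0, 1.0, 0.5, 0.1], total_power=1.0)
print(p, p.sum())
```

Because the problem is convex, this simple search over the single water level recovers the unique global optimum quickly, which is exactly the property highlighted above.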

3 Model and Research of Wireless Communication Network Resource Evaluation and Optimization Based on Internet of Things Technology

3.1 System Model

The MEC network model constructed in this paper includes one MEC server, one base station and N different types of mobile devices. Both the base station and the mobile devices are equipped with a single antenna, and the orthogonal frequency division multiple access method is used to transmit data over K channels. The base station and the MEC server are connected by optical fiber, so the transmission delay between them is ignored. The computing tasks of a mobile device can be processed locally or offloaded to the MEC server at the edge of the network through the base station. The system performs resource scheduling over a period of length Ts, where T is the maximum duration of system resource scheduling. In any time slot t, the amount of computing task data received by mobile device n is L_n(t) (bits). The computing tasks performed locally by the mobile device obey the first-in-first-out rule. B_n(t) represents the queue length of device n in time slot t, and its update expression is:

$$B_n(t) = \max\{B_n(t-1) + L_n(t) - v_n(t) - D_n(t),\ 0\} \qquad (1)$$

Among them, D_n(t) represents the amount of data processed locally by device n in time slot t, k_0 is the computing resource consumed per bit of task, and v_n(t) represents the size of the data uploaded by device n to the MEC server in time slot t, whose expression is:

$$v_n^{up}(t) = T_n^{up}(t) \cdot W \log_2\left(1 + \frac{\sum_{k=1}^{K} c_n^k(t)\, p_n^k(t)\, |h_n^k(t)|^2}{\sigma^2}\right) \qquad (2)$$

Among them, T_n^{up}(t) is the duration of uploading data and W represents the channel bandwidth.
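To make the notation of Eqs. (1) and (2) concrete, the sketch below updates the local task queue of one device for a single time slot and evaluates its upload rate. The numerical values (bandwidth, powers, channel gains, noise) are illustrative assumptions, not the paper's simulation settings.

```python
import numpy as np

def upload_bits(T_up, W, c, p, h, sigma2):
    """Eq. (2): v_n(t) = T_up * W * log2(1 + sum_k c_k p_k |h_k|^2 / sigma^2)."""
    snr = float(np.sum(np.asarray(c) * np.asarray(p) * np.abs(np.asarray(h)) ** 2)) / sigma2
    return T_up * W * np.log2(1.0 + snr)

def queue_update(B_prev, L, v, D):
    """Eq. (1): B_n(t) = max{B_n(t-1) + L_n(t) - v_n(t) - D_n(t), 0}."""
    return max(B_prev + L - v - D, 0.0)

# One slot for one device, with assumed values.
v = upload_bits(T_up=0.5e-3, W=1e6, c=[1, 0, 1], p=[0.1, 0.1, 0.2],
                h=[0.9, 0.4, 0.7], sigma2=1e-9)
B = queue_update(B_prev=2.0e4, L=1.5e4, v=v, D=5.0e3)
print(f"uploaded {v:.0f} bits, new queue length {B:.0f} bits")
```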


3.2 Construction of the System Energy Consumption Optimization Problem

The energy consumption of the MEC system includes the energy consumed when computing tasks are processed locally, the energy consumed in the upload process, and the energy consumed by processing on the MEC side. In time slot t, the energy consumption for processing data locally is:

$$E_n^{loc}(t) = \xi_1 (f_n^u)^2 k_0 D_n(t) \qquad (3)$$

where ξ_1 is a constant factor determined by the processing capability of the mobile device. The energy consumption required to upload data is:

$$E_n^{up}(t) = T_n^{up}(t)\left(\sum_{k=1}^{K} c_n^k(t)\, p_n^k(t) + \alpha p_n^{cir}\right) \qquad (4)$$

Among them, p_n^{cir} is the circuit power and α is a constant factor.

3.3 Simulation Parameter Settings

In the main environment parameter settings of the simulation, the neural network contains 2 hidden layers. During the parameter update process of the DNN, the learning rate is 8e−5, the size of the experience pool is 1000, and 200 samples are sampled from the experience pool in each batch. The frequency C of network parameter updates is 20. Training runs for 300 rounds, and each round includes 600 time slots.
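The per-slot energy terms of Eqs. (3) and (4) can be evaluated in the same way. The parameter values below are illustrative assumptions only; the settings listed in Sect. 3.3 concern the DNN training, not these physical constants.

```python
def local_energy(xi1, f_cpu, k0, D_bits):
    """Eq. (3): E_loc = xi1 * f^2 * k0 * D, the energy to process D bits locally."""
    return xi1 * (f_cpu ** 2) * k0 * D_bits

def upload_energy(T_up, c, p, alpha, p_cir):
    """Eq. (4): E_up = T_up * (sum_k c_k p_k + alpha * p_cir)."""
    return T_up * (sum(ck * pk for ck, pk in zip(c, p)) + alpha * p_cir)

# Assumed illustrative parameters for one device in one slot.
E_slot = (local_energy(xi1=1e-27, f_cpu=1e9, k0=1e3, D_bits=5e3)
          + upload_energy(T_up=0.5e-3, c=[1, 0, 1], p=[0.1, 0.1, 0.2],
                          alpha=0.2, p_cir=0.05))
print(f"energy consumed in this slot: {E_slot:.6f} J")
```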

4 Analysis and Research of Wireless Communication Network Resource Evaluation and Optimization Based on Internet of Things Technology

4.1 Analysis of the Optimization Effect

After adjusting the antenna direction angle, the actual movement is tested to ensure that there is no call drop. To facilitate comparative analysis, the change of the pilot Ec/Io when base station A is switched from cell 432 to cell 431 after the antenna adjustment is also recorded, as shown in Fig. 1. As shown in Fig. 1, the handover time increases from about 10 s before optimization to around 20 s, and the handover distance increases from 4–6 m to 20–25 m. The pilot Ec/Io of cell 431 changes less and is maintained at −4 to −2 dB in the later period, while the pilot Ec/Io of cell 432 fluctuates greatly and is maintained at −12 to −6 dB.


Fig. 1. Handover process from cell 432 to cell 431

4.2 Evaluation of the Overall Optimization Effect

Through extensive testing and optimization of the existing environment and voice quality of the WCDMA experimental network, specific working experience and a basis have been established for the optimization, analysis and evaluation of 5G wireless networks. As shown in Table 1, the test data show that the UE Rx Power and Agg Ec/Io pilots have good coverage over the entire network: on the main routes of the whole network, the proportion of UE Rx Power > −80 dBm is 99%, and the proportion of Agg Ec/Io > −15 dB is 99%. Statistical analysis of all network traffic test data with the optimization analysis software shows that the whole-network coverage rate reaches 100% after optimization.


Table 1. The performance evaluation indicators of the experimental network after optimization

Business type | Performance | Indoor fixed point | Outdoor mobile
AMR12.2K voice service | Connection rate | 100% | 99%
AMR12.2K voice service | Call drop rate | 0% | 0%
AMR12.2K voice service | Call setup time (seconds) | 6.13 | 6.98
AMR12.2K voice service | Long-term retention ability | >30 min | >30 min
CS64K videophone service | Connection rate | 100% | 99%
CS64K videophone service | Call drop rate | 0% | 0%
CS64K videophone service | Call setup time (seconds) | 6.33 | 6.97
CS64K videophone service | Long-term retention ability | >30 min | >30 min
PS domain service | Attachment success rate | 100% | 100%
PS domain service | PDP activation success rate | 100% | 100%
PS domain service | Communication interruption rate | 0% | 0%
PS domain service | PDP activation time (seconds) | 1.04 | 1.51
PS domain service | Long-term retention ability | >30 min | >30 min

As shown in Fig. 2, the overall performance of the test network is significantly improved after optimization. Judging from the test results under outdoor mobile conditions, the CS64 kbps videophone call drop rate decreased from 9% to 1%, the call completion rate increased from 92% to 99%, and calls were maintained for more than 30 min in all long-call tests. The PS-domain attach success rate increased from 99% to 100%, and the PDP activation success rate increased from 98% to 100%.


Fig. 2. Call Setup Time Comparison

5 Conclusions

This paper proposes a dynamic load balancing mechanism based on access selection and service transfer. Access selection is modeled as a constrained access optimization problem, and a robust algorithm is designed to solve it. Moreover, to reduce the damage to the system load balance caused by the bursty behavior of hotspot cells, this paper also introduces a service transfer strategy based on the base station load rate. The results show that the proposed dynamic load balancing scheme is superior to the schemes reported in the literature in terms of access blocking rate, load balancing and system utilization, and has the best performance.

References 1. Minh, V.T., et al.: Development of a wireless communication network for monitoring and controlling of autonomous robots. Int. J. Robot. Autom. 33(3), 226–232 (2018) 2. Eladl, A.A., Saeed, M.A., Sedhom, B.E., et al.: IoT technology-based protection scheme for MT-HVDC transmission grids with restoration algorithm using support vector machine. IEEE Access PP(99), 1 (2021) 3. Sisavath, C., Yu, L.: Design and implementation of security system for smart home based on IOT technology. Procedia Comput. Sci. 183(2), 4–13 (2021) 4. Almomani, D.A., et al.: Information management and IoT technology for safety and security of smart home and farm systems. J. Glob. Inf. Manag. 29(6), 1–25 (2021)


5. Quionez, Y., Lizarraga, C., Aguayo, R., et al.: Communication architecture based on IoT technology to control and monitor pets feeding. J. Univ. Comput. Sci. 27(2), 190–207 (2021) 6. Al, A.: IoT Technology with fuzzy logic control integrated system to achieve comfort environment. Turk. J. Comput. Math. Educ. (TURCOMAT) 12(3), 1409–1414 (2021) 7. Yusuf, E., Zuhriyah, H., Abdulrazaq, A., et al.: 2019 Novel coronavirus disease (Covid19): thermal imaging system for Covid-19 symptom detection using IoT technology. Revista Argentina de Clinica Psicologica 29(5), 234–239 (2020) 8. Thangaiyan, J.: Automated kitchen management and provisions monitoring system using IoT technology. Int. J. Control Autom. 13(2), 776–784 (2020) 9. Jenish, T.: Survey of literature on reliable smart grid operation incorporating IOT technology. J. Adv. Res. Dyn. Control Syst. 12(SP3), 1330–1334 (2020) 10. Raj, A.S.: Polymer sensor t-shirt for sleeping disordered breathing patient monitoringusing IoT technology. Xi’an Dianzi Keji Daxue Xuebao J. Xidian Univ. 14(5), 4793–4811 (2020) 11. Kumudham, R., Ganesh, E.N., Rajendran, V., et al.: Testing anti-collision algorithm for tracking purpose using RFID in IOT technology. J. Crit. Rev. 7(15), 4587–4592 (2020) 12. Kosasih, A.: Designing enterprise architecture for gasoline distribution monitoring system using IoT technology. Int. J. Adv. Trends Comput. Sci. Eng. 9(3), 2642–2648 (2020) 13. Djordjevic, M., Punovic, V., Dankovic, D., et al.: Smart autonomous agricultural system for improving yields in greenhouse based on sensor and IoT technology. Istrazivanja i Projektovanja za Privredu 18(4), 1–8 (2020) 14. Mastanrao, S.: Propose and achievement of solar power elegant irrigation system by using IoT technology. J. Sci. Res. Dev. 6(5), 9321–9325 (2020) 15. Dhaya, R., Kanthavel, R.: A wireless collision detection on transmission poles through IoT technology. J. Trends Comput. Sci. Smart Technol. 2(3), 165–172 (2020) 16. Rahim, N., Zaki, F., Noor, A.: Smart app for gardening monitoring system using IoT technology. Int. J. Adv. Sci. Technol. 29(4), 7375–7384 (2020) 17. Trappey, A., Trappey, C.V., Govindarajan, U.H., et al.: Patent value analysis using deep learning models—the case of IoT technology mining for the manufacturing industry. IEEE Trans. Eng. Manag. PP(99), 1–13 (2019)

Green Construction Optimization of Urban Water Environment Governance Based on Artificial Intelligence Xinwei Zhang(B) Wuhan Donghu University, Wuhan 430212, Hubei, China [email protected]

Abstract. In recent years, the improvement of technical level has provided the foundation for the gradual commercialization and daily life of artificial intelligence. At present, artificial intelligence has been closely related to people’s lives. With the continuous development of our country’s social economy, environmental issues have been paid more and more attention by people. The proposal of green construction management just meets the needs of the development of the times. Green construction management highlights the importance of environmental protection during construction. The purpose of this paper is to optimize the governance of urban water environment through green construction, using artificial intelligence means, and a multi-objective optimization model of the quality and environment using the weighted sum method. The optimization results show that there are two optimization schemes in terms of quality and environment. The overall quality level after optimization is improved and meets the requirements; the overall environmental impact after optimization has increased, mainly due to the improvement of construction lighting and construction noise pollution, which are inevitable during nighttime construction due to the reduction of operating time of the process. Keywords: Artificial Intelligence · Urban Water Environment · Water Environment Governance · Green Construction

1 Introduction In recent years, the rapid development of the artificial intelligence market mainly depends on the improvement of technical level, the most important of which is the collection of big data and efficient computing operation ability [1]. Artificial intelligence has increasingly become a hot word in the current practical and academic circles, and it has a certain role in restructuring the social division of labor, market structure, and labor patterns. Water pollution is one of the main reasons restricting the construction of urban ecological civilization. The construction of water environment treatment projects is rapid, the water environment treatment projects have received strong support from the state, and good results have been achieved in water environment restoration. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 Z. Xu et al. (Eds.): CSIA 2023, LNDECT 173, pp. 441–448, 2023. https://doi.org/10.1007/978-3-031-31775-0_45


With the acceleration of my country's urbanization process, urban water pollution is serious, river water quality is poor, water bodies are black and odorous, and the water ecological environment is seriously damaged. In Ehteram M's study, an improved adaptive neuro-fuzzy inference system (ANFIS) and a multilayer perceptron (MLP) model were combined with the sunflower optimization (SO) algorithm and introduced into lake level simulation, using the potential of the proposed advanced artificial intelligence (AI) models to predict and assess the water level of Lake Urmia. The sunflower optimization algorithm is implemented to find the best tuning parameters. The longest lead time (i.e., a rain lag of six months) achieved the worst predictive power. Uncertainty analysis shows that the ANFIS-SO model has less uncertainty, based on the higher percentage of responses within the confidence band and the lower bandwidth. In addition, different water harvesting options have been studied considering environmental constraints and fair water allocation to stakeholders [3]. Veve C notes that many AI-based shared mobility solutions have been developed in recent decades. In terms of AI mobility technology innovation, new solutions have emerged that are more flexible in meeting user needs. These dynamic solutions serve users by optimizing different aspects of the service, such as detours to pick up and drop off passengers or user waiting times. Such an approach fulfills requests quickly and matches user expectations as closely as possible. However, these approaches typically use fleets of many small-capacity vehicles to serve each user. In contrast, AI-powered microtransit is designed to meet a wider range of needs than traditional shared mobility: it aims to identify recurring patterns of mobility and validate the possibility of implementing microbus lines to serve them [4]. Water environment management and water ecological restoration are important guarantees for building an ecologically civilized city. A comprehensive water environment improvement project can improve the water ecological environment and change the image of the city, so as to satisfy people's yearning for a better life. This paper studies the green construction optimization of urban water environment governance based on artificial intelligence, clarifies the development process of artificial intelligence, the characteristics of water environment governance and the principles of green construction, and adopts the weighted-sum method to investigate and analyze these aspects, striving to find the optimal solution.

2 Research on Green Construction Optimization of Urban Water Environment Governance Based on Artificial Intelligence

2.1 The Development History of Artificial Intelligence

(1) The theoretical origin of artificial intelligence. The ideological direction of artificial intelligence is divided into two paths. One is the symbolism school, which advocates reducing human thinking activities to a symbol-processing mechanism in which symbols are prepared and combined to generate a code base, and holds that artificial intelligence should make this code base as inclusive as possible. The second is the connectionism school, which advocates that the actual structure of the human brain determines the human thinking process, so the construction of artificial intelligence needs to be grounded in an understanding of neural networks that imitate the human brain. In developing behavioral science theories, neurophysiologists have proposed that human neurons can be thought of as logical processing devices [5], and when countless neurons are connected in a precise way, complex logic can be implemented. Behavioral psychology along these lines attempts to reduce human reasoning to logical responses, and the development of behaviorism provides the necessary theoretical basis for the concept of artificial intelligence.
(2) Modern progress of artificial intelligence. With the development of Internet technology and the improvement of hardware performance, neural network architectures are developing in the direction of deep learning algorithms, and the self-learning ability of artificial intelligence has been greatly improved [6]. When multiple network layers are superimposed, the different layers are trained on the input data and the detected features layer by layer, and the algorithm is fine-tuned through weight matching and feature alignment to improve learning efficiency. At present, artificial intelligence is developing rapidly and has formed unique advantages in image recognition, speech recognition, artistic creativity, etc.

2.2 Construction Characteristics of Water Environment Engineering

With the acceleration of the country's urbanization process and the continuous growth of the economy, the urban population continues to grow, people pay more and more attention to their living environment and quality of life, and there are more and more water environment governance projects [7]. Large-scale comprehensive water environment improvement projects generally have the following characteristics:
(1) Large project investment. A large-scale comprehensive water environment improvement project is a backbone project. The project is large in scale, involves more construction personnel and construction machinery, and requires a large number of fences, construction signs and other supporting facilities.
(2) Difficult construction. An outstanding characteristic of large-scale comprehensive water environment improvement projects is that the project is complex and difficult to construct, and many unknown situations will be encountered during the construction process. During construction, the power supply and water supply pipelines in the surrounding areas may be interrupted, affecting the normal life of residents. For such projects, special construction methods and construction techniques can be adopted to achieve civilized, fast and high-quality construction [8].
(3) Wide construction scope. Large-scale comprehensive water environment improvement projects generally involve basin-level system management, covering a wide range, from point to line, from line to surface, and even from the ground to underground [9]. The establishment of water environment management projects can promote the process of local urbanization, create a good city image, and promote the construction of local ecological civilization. However, some road construction projects will seriously affect local trade and


industry. The corresponding supporting public infrastructure should therefore be launched at the same time, which requires the support and cooperation of all participating units.

2.3 Green Construction

Green construction refers to construction activity that, under the premise of ensuring the basic requirements of construction period, quality and safety, maximizes resource savings and minimizes the negative impact on the environment through advanced, scientific management and innovation in building technology [10]. Green construction covers many aspects of sustainable development, including the use of renewable resources, the reduction of energy consumption and environmental protection. my country is committed to the sustainable development strategy, and green construction projects are the continuation of the sustainable development policy and the main construction mode of my country's future engineering projects. The main principles of green construction include:
(1) Disturb the site lightly and protect the environment. The scope of the site enclosure and the surrounding plants should be protected; during the construction process, the layout of temporary construction facilities should be reduced as much as possible, the general construction layout should be arranged reasonably, and special treatment plans should be formulated for wastes.
(2) Save resources. The ultimate goal of green construction is to reduce energy consumption, reduce emissions of harmful substances and reduce carbon emissions.
(3) Establish a scientific management model to improve construction quality. The construction unit should change from initial passive compliance to active implementation. High-quality green construction can not only save resources and protect the environment, but also improve the overall level of the project.

3 Investigation and Research on Green Construction Optimization of Urban Water Environment Governance Based on Artificial Intelligence

3.1 Analysis of Multi-objective Optimization Problems in Green Construction Project Management

Often, there is a conflicting relationship between the duration, quality and cost of an engineering project. In the process of project construction, the three goals of construction period, quality and cost cannot all be expected to be "ideal" at the same time, but various resources can be fully considered, planning can be comprehensive, construction can be organized rationally, and the whole can be optimized. In order to achieve the project management goals, enterprises should, on the basis of qualified quality, take environmental issues into account and achieve the goal of maximizing benefits under the premise of a reasonable construction period. Therefore, how to achieve a balanced improvement of the two goals will be the top priority for construction enterprises in managing green construction projects and governing the urban water environment.


3.2 Data Collection

This paper selects a construction section of a city water environment treatment project in M province as an example for model checking, to further verify the feasibility of green construction optimization project management in practical engineering. Based on the relevant theoretical knowledge, a multi-objective optimization model of quality and environment is built using the weighted-sum method. In the actual implementation of the project, the importance of each operation process, the quality of its completion and its impact on the overall quality of the project are all different, so it is not reasonable to assign equal weight to each process of the project. Therefore, the quality and environmental protection weights of each process in this paper are taken as:

$$k_{qi} = k_q \cdot \frac{\omega_i^Q}{\omega^Q} \qquad (1)$$

$$k_{ei} = 0.75 - k_{qi} - k_{ei} \qquad (2)$$

In the formulas, k_{qi} and k_{ei} represent the weights of quality and environment corresponding to operation process i, respectively.
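As an illustration of how the weighted-sum idea turns the two objectives into a single score, the sketch below combines per-process quality and environmental-impact values with their weights. The weights and scores used here are placeholders, not the project's measured data.

```python
def weighted_sum_score(quality, env_impact, k_q, k_e):
    """Weighted-sum objective for one candidate plan: reward weighted quality,
    penalize weighted environmental impact (both lists are indexed by process)."""
    total_quality = sum(w * q for w, q in zip(k_q, quality))
    total_env = sum(w * e for w, e in zip(k_e, env_impact))
    return total_quality - total_env

# Placeholder values for three processes.
score = weighted_sum_score(
    quality=[0.95, 0.90, 0.92],
    env_impact=[0.187, 0.280, 0.257],
    k_q=[0.40, 0.30, 0.30],
    k_e=[0.20, 0.40, 0.40],
)
print(round(score, 4))
```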

4 Analysis and Research on Green Construction Optimization of Urban Water Environment Governance Based on Artificial Intelligence

4.1 Data Analysis

Urban water environment factors mainly include the natural environment, the policy environment and the on-site operation environment. These factors directly or indirectly affect construction quality, so the environment should be effectively predicted and relevant measures taken to deal with it. The data from the on-site monitoring systems for noise, dust and surrounding air quality and the regular sewage discharge tests are processed and, combined with the actual construction scale and location of the city, the scoring values of each influencing factor for processes A, B and C are summarized and sorted out, as shown in Fig. 1 and Table 1 below:

Table 1. The scoring values of each influencing factor of processes A–C

Process name | A | B | C
Greenhouse effect | 0 | 0.067 | 0.046
Dust pollution | 0.034 | 0.09 | 0.032
Water pollution | 0.076 | 0 | 0.062
Light pollution | 0 | 0.05 | 0.04
Noise pollution | 0.02 | 0.019 | 0.02
Environmental impact value | 0.187 | 0.28 | 0.257

Fig. 1. The scoring value of each influencing factor


4.2 Evaluation of Optimization Results

This paper evaluates the results of the optimization scheme from the two aspects of quality and environment. The overall quality level after optimization is 98.66%, which is higher than the target level of 90% and meets the requirements, and is 8.33% higher than the overall quality level of 90.33% under the extreme working time. The optimized overall environmental impact value is 2.176, which is better than the target environmental impact value of 2.3, and is 1.203 lower than the overall environmental impact value of 3.45 under extreme operation. The overall environmental impact value after optimization is still higher than the environmental impact value of 2.2 under normal operation, mainly due to the increase in construction lighting and construction noise pollution that is unavoidable during nighttime construction caused by the compression of operating time, as shown in Fig. 2:

Fig. 2. Comparative analysis chart of target data


To sum up, it can be seen that the comprehensive green construction optimization of urban water environment governance based on artificial intelligence has achieved the expected effect, significantly improving the quality level and significantly reducing the environmental impact at a relatively low cost.

5 Conclusions The urban water environment management project is a complex system, and there are many factors that affect the quality management. These factors affect the results of systematic, comprehensive and scientific research to a certain extent. It is an inevitable trend of today’s social development to realize the harmonious coexistence between man and nature, and sustainable development is the fundamental strategy to realize environmental protection and resource conservation. Relatively mature research results have been achieved in the field of green construction technology and new energy materials. There are still many deficiencies in the study of multi-objective optimization problems in this paper, but it is hoped that this paper can provide some practical significance for the research in this field and provide reference value for future research.

References 1. Diligenskaya, A.: Parametric identification of technological thermophysics processes based on neural network approach. J. Vibroeng. 23(6), 11–12 (2021) 2. Gupta, S.K., et al.: Artificial intelligence-based modelling and multi-objective optimization of friction stir welding of dissimilar AA5083-O and AA6063-T6 aluminium alloys. Proc. Inst. Mech. Eng. Part L J. Mater. Des. Appl. 232(4), 333–342 (2018) 3. Ehteram, M., Ferdowsi, A., Faramarzpour, M., et al.: Hybridization of artificial intelligence models with nature inspired optimization algorithms for lake water level prediction and uncertainty analysis. AEJ Alex. Eng. J. 60(2), 2193–2208 (2021) 4. Veve, C., Chiabaut, N.: Demand-driven optimization method for microtransit services. Transp. Res. Rec. 2676(3), 58–70 (2022) 5. Aissat, A., Bahi Azzououm, A., Benyettou, F., Laidouci, A.: Optimization of copper indium gallium diselenide thin film solar cell (CIGS). In: Hatti, M. (ed.) ICAIRES 2017. LNNS, vol. 35, pp. 479–485. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-73192-6_50 6. Dev, S., Srivastava, R.: Parametric analysis and optimization of fused deposition modeling technique for dynamic mechanical properties of acrylic butadiene styrene parts. Proc. Inst. Mech. Eng. Part C J. Mech. Eng. Sci. 236(8), 4166–4179 (2022) 7. Dajer, M., Ma, Z., Piazzi, L., et al.: Reconfigurable intelligent surface: design the channel-a new opportunity for future wireless networks. Digit. Commun. Netw. Engl. Vers. 8(2), 18–19 (2022) 8. Solanki, P., Baldaniya, D., Jogani, D., et al.: Artificial intelligence: new age of transformation in petroleum upstream. Pet. Res. 7(1), 9–10 (2022) 9. Butarbutar, T.: A preliminary study: forest and environment governance based on hydronomic zone and authority agency for Toba water catchment area and Asahan watershed, North Sumatera. Environ. Ecol. Res. 6(5), 423–432 (2018) 10. Ariff, R., Yahya, K., Sharif, S.: The impact of green behavior capability on green construction performance in Indonesia. IOP Conf. Ser. Mater. Sci. Eng. 849(1), 012–031 (2020)

Design and Experience of Virtual Ski Resort Based on VR Technology and Meteorological Condition Simulation Haimin Cheng(B) , Guiqin Fu, and Yushan Liu Hebei Meteorological Service Center, Shijiazhuang 050021, China [email protected]

Abstract. Daily snowfall, average wind speed, and maximum daily temperature are the most representative meteorological factors influencing skiing. In this paper, various VR skiing virtual environments that incorporate the impact of meteorological conditions are designed. Firstly, different levels of snowfall simulation effects are designed for different stages of the skiing experience. Secondly, an intelligent environmental wind field simulation system is developed to dynamically simulate the impact of different wind forces on the sliding experience during skiing. A wind speed change function is created to simulate the fluctuation of wind force around level 2 in the snow field, and the corresponding levels of the wind force simulation system are activated according to the simulated wind forces in the snow field. Thirdly, the temperature is designed to change with the wind force and snowfall, and the dynamic real-time temperature of the corresponding range is simulated in different regions. The process lasts for 3–4 min, takes the experiencer through simulated ski weather index levels 1, 2 and 3, and ends before level 4. Collecting ski movement data through the ski platform and controlling the computer virtual simulation system for human-computer interaction give the experiencer a simulated experience of skiing in the virtual ski resort. Keywords: Meteorological Conditions · VR Skiing · Interactive Experience · Process Design · Virtual Environment

1 Introduction

Virtual reality technology has promoted the rapid development of the VR industry. Because of its immersion and strong interactivity, it brings three-dimensional, realistic feelings to the public, and in recent years it has been widely used in film, television, science popularization, games and other industries [1, 2]. The VR ski simulator is a typical case of the deep integration of virtual reality technology and skiing. VR ski simulators based on virtual reality technology basically provide visual scenes and simulate the adjustment of speed and direction in real skiing [3, 4]. However, there is no VR ski product that incorporates the influence of weather conditions into the experience. In fact, the influence of weather conditions such as wind, snowfall,
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 Z. Xu et al. (Eds.): CSIA 2023, LNDECT 173, pp. 449–458, 2023. https://doi.org/10.1007/978-3-031-31775-0_46


temperature and other weather conditions on skiing is very obvious. In this paper, a VR skiing project that incorporates the influence of meteorological conditions is developed. The VR ski science exhibition design incorporating the influence of weather conditions is based on virtual reality technology, hardware systems and a series of sensors. The 3D model of the VR virtual ski track refers to the Snow Ruyi, Lavender snow trail and Peony Flower obstacle chase track of the Chongli Miyuan Genting Ski Resort. The highly simulated VR skiing technology incorporating the influence of meteorological conditions allows skiing to break through restrictions of time, space and weather, lets users experience the fun of skiing under different meteorological conditions, avoids the possible danger of weather changes to outdoor skiers, and ensures the safety of skiers.

2 Method

2.1 Technical Methods

(1) Using 3D design technology, 3D modeling [5–7], texture design, particle effects [8, 9], virtual camera motion, environmental effect design and other technologies are comprehensively applied to create a virtual snow track model of the Chongli Winter Olympics venue. On this basis, the difficulty of gliding is appropriately increased and three corners are designed at the end of the snow track, which makes the VR skiing experience more interesting and challenging.
(2) The three meteorological elements of snowfall, wind speed and temperature are used as the algorithm indicators for the simulated ski weather index. When any of the three indicators reaches the threshold value of a higher simulated ski weather index level, the level of the simulated ski weather index is adjusted accordingly.
(3) Based on the initial VR skiing scene, different VR skiing virtual environments are created by an artificial intelligence (AI) algorithm that simulates the snowfall, wind, temperature and other meteorological factors of the virtual ski resort. This includes the design of the wind speed change function, the temperature change function, snowfall simulation control, sliding action data collection, the ski pedal and VR motion transformation algorithm, and human-computer interaction.
(4) The computer virtual simulation system and the intelligent environmental wind field simulation system provide highly simulated ski sensory content with real-time rendering. The simulated VR skiing experience is realized by collecting ski motion data from the ski platform and controlling the computer virtual simulation system for human-computer interaction.

2.2 The Design for the Snow Road

Based on the Chongli Snow Resort Winter Olympics track as the prototype, fully considering the interactivity, fun and challenge of VR skiing and breaking through physical space limitations, the Xueruyi, Lavender track and Peony Flower track are skillfully combined to form a complete VR ski experience environment starting from the Xueruyi model, with both a beginner experience snow track and an advanced challenge snow track.


The Xueruyi 3D model consists of a summit club at the top, a big jump platform in the middle, and a standard jump platform, a referee tower and a grandstand area at the bottom. Based on the prototype of the Chongli Ski Resort tracks, the two snow tracks are transformed into two virtual snow tracks (the junior experience snow track and the advanced challenge snow track) that are more suitable for VR interactive experiences. Referring to the Lavender track prototype, the junior experience snow track, with a total length of 1,800 m, follows a downhill trend with an average slope of 15°, a maximum slope of 22° and no bends, which makes it suitable for beginners. The advanced challenge snow track, with a length of 2,032 m, refers to the Peony obstacle chase track; its average slope, maximum slope, buffer zone slope and acceleration zone slope are 18°, 26°, 12–18° and 20–26°, respectively. A continuous small turning curve is designed at the flag gate near the end point to provide a sense of turning and increase the difficulty of the experience. Figure 1 shows the panorama of the VR ski experience slopes.

Fig. 1. VR ski experience ski slopes panorama


3 The Study on the Simulation of Weather Conditions for Skiing

Daily snowfall, average wind speed, and maximum daily temperature are the most representative influencing factors for skiing among all meteorological factors, and the combination of the three has the most significant impact on the skiing experience. The regression equation between the number of skiers and the three meteorological factors is as follows [10]:

Y = 472.553 − 12.344M − 29.861F − 2.974T

where Y, M, F and T represent the number of skiers, the daily snowfall, the average wind speed and the highest daily temperature, respectively. According to the national meteorological industry standards QX/T 386–2017, GB/T 28592–2012 and GB/T 28591–2012 [11–13], the simulated ski weather index grades are designed and simulated, as shown in Table 1.

Table 1. Grade and meaning of the simulated ski weather index

Simulated ski meteorological index level | Meaning
1 | Suitable
2 | More appropriate
3 | Not very suitable
4 | Not suitable

Table 2 shows the thresholds of the weather elements for each simulated ski weather index level. The real-time values of the three weather elements (snowfall, wind power and temperature) in the virtual scene are used in place of the daily snowfall, average wind speed and daily maximum temperature. The simulated ski weather index is computed automatically and displayed instantly within the user's field of view to prompt the experiencer to pay attention to the real-time status of the weather elements.

Table 2. Snowfall, wind, temperature and the corresponding simulated ski weather index

Simulated ski meteorological index level | Snow (24 h) | Wind (f) | Temperature (Tg) °C
1 | No snow | f ≤ Level 2 | −12 ≤ Tg < 2
2 | Light snow | Level 2 < f ≤ Level 3 | −16 ≤ Tg < −12 or 2 ≤ Tg < 10
3 | Moderate snow | Level 3 < f ≤ Level 5 | −20 ≤ Tg < −16
4 | Heavy snow and above | f > Level 5 | Tg ≤ −20

Snowfall simulation design is based on the national standard GB/T 28592-2012. Various snowfall simulation effects are designed at different stages of the virtual scene.


The light snow level is simulated according to the standard of snowfall of less than 2.4 mm within 24 h, and the moderate snow level is simulated according to a snowfall volume of 2.5 to 4.9 mm within 24 h. The wind simulation design refers to the research results of Gao Feng et al. [10] on the relationship between weather conditions and skiing. When the wind power level of the virtual scene reaches level 2 or above and the wind speed is about 3.5 to 5 m/s, the level-1 effect of the wind simulation system is turned on; when the wind level of the virtual scene is above level 3 and the wind speed is about 5–7 m/s, the level-2 wind simulation effect is activated. The temperature simulation design is mainly affected by the instantaneous wind power and snowfall, and the impact of long-term changes in sunshine over the day on the temperature simulation is not considered. A range of dynamic real-time temperatures in different regions is simulated as wind and snowfall change. The temperatures in the first, second and third stages are set at −12 to 2 °C, −16 to −12 °C and −20 to −16 °C, respectively. The snowfall is inversely related to the temperature: in each stage, the temperature gradually decreases as the snowfall increases.
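The regression relationship and the threshold rules of Table 2 translate directly into code. The sketch below computes the predicted number of skiers from the regression in Sect. 3 and maps instantaneous snowfall grade, wind level and temperature to the simulated ski weather index, taking the highest level triggered by any single element; values outside the table default to level 4 in this simplified illustration, which is not the exhibition system's actual implementation.

```python
def predicted_skiers(M, F, T):
    """Regression from Sect. 3: Y = 472.553 - 12.344*M - 29.861*F - 2.974*T."""
    return 472.553 - 12.344 * M - 29.861 * F - 2.974 * T

def snow_level(grade):
    """Snowfall grade -> index level, per Table 2."""
    return {"none": 1, "light": 2, "moderate": 3, "heavy": 4}[grade]

def wind_level(f):
    """Wind force level f -> index level, per Table 2."""
    if f <= 2:
        return 1
    if f <= 3:
        return 2
    if f <= 5:
        return 3
    return 4

def temp_level(tg):
    """Temperature Tg (degC) -> index level, per Table 2; unlisted ranges default to 4."""
    if -12 <= tg < 2:
        return 1
    if -16 <= tg < -12 or 2 <= tg < 10:
        return 2
    if -20 <= tg < -16:
        return 3
    return 4

def ski_weather_index(snow_grade, f, tg):
    """Simulated ski weather index: the highest level reached by any single element."""
    return max(snow_level(snow_grade), wind_level(f), temp_level(tg))

# Light snow, wind level 3, -14 degC -> index level 2 ("more appropriate").
print(ski_weather_index("light", 3, -14.0), predicted_skiers(M=2.0, F=4.0, T=-5.0))
```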

4 The Design for the VR Interactive Experience Process

The VR interactive experience process design includes initial scene construction, snowfall simulation design, wind speed and temperature change function design, human-computer interaction mode design and control system design.

4.1 Initial Scenario Construction

The meteorological conditions of the initial virtual environment are set as sunny, no snowfall, ambient temperature fluctuating around −11 °C, minimum ambient temperature > −12 °C, wind power 100

a = 4, Avoid vertigo

60 < V < 99

a = 2-V/40, a ∈ (−1.25, −1.5)

4.4 The Design for the Control System

The control system is designed relying on the VR ski hardware equipment and supporting VR glasses of the Chongli Winter Olympics Meteorological Science Museum. A horizontally opposed, double-damped slide, physically simulated dynamic architecture is applied on the runway to give the experiencer a correct carving parallel-ski motion simulation experience. The computer virtual simulation system uses a high-performance virtual reality graphics rendering host and a 5K binocular high-definition VR head-mounted display to provide real-time rendering of highly simulated ski sensory content. The intelligent environmental wind field simulation system simulates the wind power changes of the ski resort in real time during skiing; through the program-controlled intelligent wind field simulator, wind power output and regulation at the physical level are realized to create the human sensation of different wind conditions.

5 VR Skiing Experience Effect Design for Different Weather Conditions

Meteorological factors are integrated into the overall gliding process according to a segmented design, and the whole process is controlled to last 3 to 4 min (the actual experience time is affected by the gliding speed).
(1) Get ready to glide. The meteorological environment of the virtual snow resort is created through AI algorithms from the starting point. There is no snowfall, the ambient temperature fluctuates around −11 °C, the minimum ambient temperature is > −12


(2)

(3)

(4)

(5)


°C, and wind power

> 0, making Y + εMM^T + ε^{−1}N^TN < 0.

Lemma 2. For discrete switched systems with time delay

$$x(k+1) = A_i x(k) + A_{di} x(k-d), \quad i(k) = s(x(k), i(k-1)) \qquad (6)$$

if there exist symmetric positive definite matrices P_1, ..., P_m, S ∈ R^{n×n} such that

$$\begin{bmatrix} A_i^T P_j A_i - P_i + S & A_i^T P_j A_{di} \\ A_{di}^T P_j A_i & A_{di}^T P_j A_{di} - S \end{bmatrix} < 0, \quad \forall (i,j) \in M \times M \qquad (7)$$

then, for any switching scheme, the system (6) is guaranteed to be asymptotically stable.

Theorem 1. For the discrete time-delay switched system (6), if there exist symmetric positive definite matrices X_1, ..., X_m, W ∈ R^{n×n} such that

$$\begin{bmatrix} -W & X_i & 0 & 0 \\ X_i & -X_i & 0 & X_i A_i^T \\ 0 & 0 & W - 2X_i & X_i A_{di}^T \\ 0 & A_i X_i & A_{di} X_i & -X_j \end{bmatrix} < 0, \quad \forall (i,j) \in M \times M \qquad (8)$$

then, for any switching scheme, the system (2) is guaranteed to be asymptotically stable.
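Before turning to the proof, the condition of Lemma 2 can be checked numerically for given mode matrices and a candidate P_i, S by assembling the block matrix in (7) and testing its eigenvalues. The sketch below does this for a small made-up two-mode scalar example; it is only a sanity check, not part of the paper's derivation.

```python
import numpy as np

def lemma2_block(Ai, Adi, Pi, Pj, S):
    """Block matrix of condition (7):
    [[Ai' Pj Ai - Pi + S, Ai' Pj Adi], [Adi' Pj Ai, Adi' Pj Adi - S]]."""
    top = np.hstack([Ai.T @ Pj @ Ai - Pi + S, Ai.T @ Pj @ Adi])
    bottom = np.hstack([Adi.T @ Pj @ Ai, Adi.T @ Pj @ Adi - S])
    return np.vstack([top, bottom])

def negative_definite(M, tol=1e-9):
    return float(np.max(np.linalg.eigvalsh((M + M.T) / 2))) < -tol

# Two made-up scalar modes with a common candidate P and S.
modes = [(np.array([[0.5]]), np.array([[0.2]])), (np.array([[0.3]]), np.array([[0.1]]))]
Ps = [np.array([[1.0]]), np.array([[1.0]])]
S = np.array([[0.3]])
ok = all(negative_definite(lemma2_block(Ai, Adi, Pi, Pj, S))
         for (Ai, Adi), Pi in zip(modes, Ps) for Pj in Ps)
print("Condition (7) holds for all (i, j):", ok)
```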

Proof. According to the conclusion of Lemma 2, the sufficient condition for system (6) to be asymptotically stable for any switching scheme is:

$$\begin{bmatrix} A_i^T \\ A_{di}^T \end{bmatrix} P_j \begin{bmatrix} A_i & A_{di} \end{bmatrix} + \begin{bmatrix} -P_i + S & 0 \\ 0 & -S \end{bmatrix} < 0 \qquad (9)$$

Using the Schur complement, this becomes:

$$\begin{bmatrix} -P_i + S & 0 & A_i^T \\ 0 & -S & A_{di}^T \\ A_i & A_{di} & -P_j^{-1} \end{bmatrix} < 0 \qquad (10)$$



Multiplying (10) on the left and right by diag(P_i^{−1}, P_i^{−1}, I) respectively gives:

$$\begin{bmatrix} -P_i^{-1} + P_i^{-1} S P_i^{-1} & 0 & P_i^{-1} A_i^T \\ 0 & -P_i^{-1} S P_i^{-1} & P_i^{-1} A_{di}^T \\ A_i P_i^{-1} & A_{di} P_i^{-1} & -P_j^{-1} \end{bmatrix} < 0 \qquad (11)$$

In the above formula, if the right matrix is negative definite, the left matrix must be negative definite. Therefore, the negative determination of the right matrix is a sufficient


condition for system (6) to be asymptotically stable for any switching scheme. Applying the Schur complement once more gives:

$$\begin{bmatrix} -W & X_i & 0 & 0 \\ X_i & -X_i & 0 & X_i A_i^T \\ 0 & 0 & W - 2X_i & X_i A_{di}^T \\ 0 & A_i X_i & A_{di} X_i & -X_j \end{bmatrix} < 0$$

which coincides with condition (8).

Further, if there exist matrices Q_i and scalars ε_{ij} > 0 such that

$$\begin{bmatrix} -W & \ast & \ast & \ast \\ X_i & -X_i & \ast & \ast \\ 0 & 0 & W - 2X_i & \ast \\ 0 & A_i X_i + B_i Q_i & A_{di} X_i & -X_j \end{bmatrix} + \varepsilon_{ij} \begin{bmatrix} 0 \\ 0 \\ 0 \\ D_i \end{bmatrix} \begin{bmatrix} 0 \\ 0 \\ 0 \\ D_i \end{bmatrix}^T + \varepsilon_{ij}^{-1} \begin{bmatrix} 0 & E_{i1} X_i + E_{i3} Q_i & E_{i2} X_i & 0 \end{bmatrix}^T \begin{bmatrix} 0 & E_{i1} X_i + E_{i3} Q_i & E_{i2} X_i & 0 \end{bmatrix} < 0$$

then, using the Schur complement, this is equivalent to:

$$\begin{bmatrix} -W & X_i & 0 & 0 & 0 \\ X_i & -X_i & 0 & (A_i X_i + B_i Q_i)^T & (E_{i1} X_i + E_{i3} Q_i)^T \\ 0 & 0 & W - 2X_i & X_i A_{di}^T & X_i E_{i2}^T \\ 0 & \ast & \ast & \ast & \ast \\ \ast & \ast & \ast & \ast & \ast \end{bmatrix}$$


(3)

where f(x, y) is the gray level of pixel (x, y), and g(x, y) is the binary image obtained after segmentation. From the analysis of the requirements of image threshold segmentation algorithms and clothing pattern element extraction, graph-based segmentation, as a hot field of image segmentation, has great advantages. The artificial bee colony algorithm is an algorithm modeled on bee behavior and is a particular application of the concept of swarm intelligence. Its main idea is not to find out the specific information of the problem, but to compare the relative merits of candidate solutions; through the good practice of all artificial bees in their local areas, a global effect finally emerges in the population, and the search converges rapidly. The artificial bee colony algorithm is a new bionic heuristic algorithm. In this paper, the algorithm is used as the method to obtain the threshold, and information entropy is used as the objective function. Combined with the histogram information of the clothing pattern image, a reasonable threshold is obtained so as to reasonably segment the target and background in the image, which is then stored in the computer in vectorized form for the reuse of clothing pattern images [20].
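A minimal sketch of the thresholding idea described above: an entropy criterion is evaluated on the image histogram and the gray level that maximizes it is used to binarize the image into target and background. For brevity the candidate thresholds are searched exhaustively here, whereas the paper obtains the threshold with an artificial bee colony search over the same entropy objective; the synthetic test image and the choice that values above the threshold form the target are assumptions for illustration.

```python
import numpy as np

def entropy_criterion(hist, t):
    """Sum of the Shannon entropies of the background (levels <= t) and target (> t)."""
    p = hist.astype(float) / hist.sum()
    total = 0.0
    for part in (p[:t + 1], p[t + 1:]):
        s = part.sum()
        if s > 0:
            q = part[part > 0] / s
            total -= np.sum(q * np.log2(q))
    return total

def entropy_threshold(image):
    """Gray level maximizing the entropy criterion (exhaustive search over 1..254)."""
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    return max(range(1, 255), key=lambda t: entropy_criterion(hist, t))

def binarize(image, t):
    """g(x, y) = 1 for pixels with gray level above t, 0 for the background."""
    return (image > t).astype(np.uint8)

# Synthetic two-region test image.
rng = np.random.default_rng(0)
img = np.clip(np.concatenate([rng.normal(60, 10, 5000), rng.normal(180, 12, 5000)]), 0, 255)
img = img.reshape(100, 100)
t = entropy_threshold(img)
g = binarize(img, t)
print("selected threshold:", t)
```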

4 Conclusion

With the development of the clothing industry, the innovation of clothing patterns is very important, and in particular the application of national patterns has become an effective way to innovate clothing patterns. Segmenting the target and background in a clothing pattern image with a good initial result is an important technology, and the result of the bee colony algorithm is an indispensable tool for obtaining that initial segmentation. Therefore, the author adopts an improved bee colony algorithm: combining the artificial bee colony algorithm with the principle of information entropy, the clothing pattern image is segmented from the start. The number of algorithms applied by the author to the feature extraction and matching of national costume patterns is limited; in subsequent research, more algorithms can be tried to find feature extraction algorithms more suitable for national costume patterns.

Acknowledgement. This work was supported by the Higher Education Department of the Jilin Education Office, research on the innovation and practice of a hybrid teaching mode for the clothing specialty in a MOOC environment (Project No. JLJY202131613637).

References 1. Tuncer, A.: 15-puzzle problem solving with the artificial bee colony algorithm based on pattern database. J. Univ. Comput. Sci. 27(6), 635–645 (2021)


2. GonzalezCastano, C., Restrepo, C., Kouro, S., Rodriguez, J.: MPPT algorithm based on artificial bee colony for PV system. IEEE Access 9, 43121–43133 (2021). https://doi.org/10. 1109/ACCESS.2021.3066281 3. Chao, C.: Medical image boundary extraction based on improved ant colony algorithm. Comput. Appl. Softw. 36(10), 227–232 (2019) 4. Jang, W.: Fuzzy region segmentation based on improved ant colony algorithm for high-speed image acquisition. Comput. Simul. 32(12), 377–381 (2015) 5. Hui, H., He, J., He, X.: Image edge connection method based on fuzzy theory and ant colony algorithm. Comput. Eng. Appl. 3, 168–172 (2014) 6. Biao, T., Shen, Y., Huang, X., et al.: Research on substation robot path planning and equipment defect identification based on improved ant colony algorithm and image recognition. Manuf. Autom. 44(2), 46–52 (2022) 7. Kiran, M.S., Hakli, H., Gunduz, M., Uguz, H.: Artificial bee colony algorithm with variable search strategy for continuous optimization. Inf. Sci. 300, 140–157 (2015) 8. Xiao, Y., Chen, D., Zhang, L.Y.: Research on spectrum scheduling based on discrete artificial bee colony algorithm. J. Phys. Conf. Ser. 1856(1), 012059 (6p.) (2021) 9. Shi, K., Bao, L., Ding, H., Zhao, L., Guan, Z.: Research on artificial bee colony algorithm based on homogenization logistic mapping. J. Phys. Conf. Ser. 1624(4), 042030 (5p.) (2020) 10. Zhang, L., Fu, M., Li, H., Liu, T.: Improved artificial bee colony algorithm based on damping motion and artificial fish swarm algorithm. J. Phys. Conf. Ser. 1903(1), 012038 (9p.) (2021) 11. Sun, W., Yu, J., Kang, Y., Kadry, S., Nam, Y.: Virtual reality-based visual interaction: a framework for classification of ethnic clothing totem patterns. IEEE Access 9, 81512–81526 (2021). https://doi.org/10.1109/ACCESS.2021.3086333 12. Banharnsakun, A.: Artificial bee colony algorithm for content-based image retrieval. Comput. Intell. 36(1), 351–367 (2020) 13. Sazykina, I.A., Sirotina, I.L.: Updating traditional ethnic symbols in the design of a modern finnougric costume. Finno-Ugric World 13(2), 180–192 (2021) 14. Ko, S.J., Kim, Y.A.: A study on the development process of using 3d printing - focusing on the pattern of health and longevity. J. Korean Trad. Costum. 23(2), 49–63 (2020) 15. Zelenova, Y., Belgorodsky, V., Korobtseva, N.: Retransmission of historical lace ornaments using 3d-design method. Bull. Sci. Pract. 6(1), 207–225 (2020) 16. Zhang, J., Dolg, M.: ABCluster: the artificial bee colony algorithm for cluster global optimization. Phys. Chem. Chem. Phys. 17(37), 24173–24181 (2015) 17. Xue, Y., Jiang, J., Zhao, B., Ma, T.: A self-adaptive artificial bee colony algorithm based on global best for global optimization. Soft. Comput. 22(9), 2935–2952 (2017). https://doi.org/ 10.1007/s00500-017-2547-1 18. Zhou, J., et al.: An individual dependent multi-colony artificial bee colony algorithm. Inf. Sci. 485, 114–140 (2019) 19. Zuo, L., Shu, L., Dong, S., Zhu, C., Hara, T.: A multi-objective optimization scheduling method based on the ant colony algorithm in cloud computing. IEEE Access 3, 2687–2699 (2015) 20. Kefayat, M., Ara, A.L., Niaki, S.N.: A hybrid of ant colony optimization and artificial bee colony algorithm for probabilistic optimal placement and sizing of distributed energy resources. Energy Convers. Manage. 92, 149–161 (2015)

Intelligent Hotel System Design Based on Internet of Things Qingqing Geng1 , Yu Peng1(B) , and Sundar Rajasekaran2 1 Chongqing College of Architecture and Technology, Chongqing 401331, China

[email protected] 2 Nisantasi University, Istanbul, Turkey

Abstract. Due to the vigorous development of intelligent technology, many new "smart +" concepts have been put forward, including the smart hotel. In order to realize a smart hotel system, this paper designs the system based on Internet of Things technology; it mainly introduces the relationship between the Internet of Things and the smart hotel system, analyzes the significance of the system design, and finally puts forward the system design scheme. The research shows that the smart hotel system can be realized with the support of the Internet of Things; the system can meet the hotel's various operational needs and make hotel management more convenient and effective. Keywords: Internet of Things · Smart hotel · hotel management

1 Introduction Intelligent technology can help hotels complete many tasks, or provide convenience for staff at work, so this technology is widely used in hotel management, and gradually formed the concept of smart hotel. However, with the development of cognition, people find that relying solely on intelligent technology to help implement hotel management cannot meet people’s expectations for smart hotels, so people begin to try to integrate intelligent technology with other technologies to serve the hotel together. In this context, the Internet of Things has received high attention from people. By integrating the Internet of Things with intelligent technology, many new hotel management functions can be developed, and the intelligent level of the smart hotel system can be further improved. Therefore, how to integrate the Internet of Things with intelligent technology and realize the design of smart hotel system has become a problem worth thinking about. It is necessary to carry out relevant research on this issue.

2 The Relationship Between the Internet of Things and Smart Hotel System and the Significance of System Design 2.1 Relationship The Internet of Things is a network model composed of several physical devices in the physical environment, among which the devices are mainly connected by wired © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 Z. Xu et al. (Eds.): CSIA 2023, LNDECT 173, pp. 495–503, 2023. https://doi.org/10.1007/978-3-031-31775-0_51


and wireless communication technologies. Each device in the Internet of Things has different functions, so each device can realize one or several functions; for example, a radio frequency device can realize the radio frequency identification function and identify electronic codes, which are widely used, so this function can serve many different tasks. The wired and wireless communication technologies in the Internet of Things together constitute the communication layer of the Internet of Things. This layer can not only connect all physical devices together, but also connect all devices with external terminal systems. The smart hotel system is such a terminal system, which means that people can manage the physical devices of the Internet of Things through the communication layer on the terminal devices of the smart hotel system. For example, if the "close" command for an RF device is entered directly on the terminal device, the command is sent to the corresponding RF device along the communication link of the communication layer, shutting the device down [1–3]. It can be seen that after the Internet of Things is combined with the smart hotel system, the former is equivalent to the functional layer, the latter to the terminal layer [4], and the two are connected by the communication layer. See Fig. 1 for the specific architecture.

Fig. 1. Basic architecture of smart hotel system based on the Internet of Things

According to Fig. 1, the system function layer mainly provides services for consumers, that is, according to the hotel business project, it can provide multiple functional services such as consumer identification, fire safety management, etc. The communication layer mainly connects the function layer and the terminal layer, which can realize the interactive communication between them and support the overall operation of the system. The terminal layer mainly provides services for managers, that is, according to hotel management requirements, it can provide check-in/check-out information automatic registration, equipment maintenance reminder, automatic management of relevant business functions at the function layer and other functions. It is worth mentioning that most of the functions realized by the Internet of Things in the system function layer and terminal layer can operate automatically driven by the intelligent terminal. For example, the intelligent terminal can drive the on-site facial recognition and radio frequency


identification functions to automatically identify the consumer’s facial information and room card information. If the two kinds of information are identified and matched, the identification will be determined. This is the decision mode of the intelligent terminal. According to this judgment mode, the smart hotel system does not require human intervention to a large extent, but does not restrict human intervention. If necessary, human can intervene in the system operation process at any time, indicating that the system has good human-computer interaction. Figure 2 shows the basic logic of system operation [5, 6].

Fig. 2. Basic logic of system operation (The dotted line represents that the human can intervene in the system operation at any time, but it is not necessary to intervene at any time).
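
To make the decision mode behind Fig. 2 more tangible, the sketch below shows, in simplified form, how an intelligent terminal could combine facial recognition and room-card (RFID) results before granting access. The function names and data fields are hypothetical illustrations, not part of the original design.

```python
# Hypothetical sketch of the intelligent terminal's identification decision mode:
# facial information and room-card information must match the same guest record.
from typing import Optional

def identify_guest(face_id: Optional[str], card_guest_id: Optional[str]) -> str:
    if face_id is None or card_guest_id is None:
        return "pending"      # wait until both readings are available
    if face_id == card_guest_id:
        return "match"        # identification determined, access granted
    return "alert"            # mismatch: notify security / anti-theft alarm

# Staff can still intervene at any time (the dotted line in Fig. 2).
decision = identify_guest("guest-1024", "guest-1024")
manual_override = None        # e.g. "open" or "deny" entered by a manager
print(manual_override or decision)   # -> match
```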

2.2 Design Significance The significance of the design of the smart hotel system of the Internet of Things is reflected in many aspects. For example, the system can improve the convenience and quality of service, and also better meet the personalized needs of consumers. In addition, the significance of the system in terms of economic benefits is the most prominent, and hotels, as commercial organizations, naturally pay the most attention to it [7]. In terms of economic benefits, the smart hotel system of the Internet of Things can help hotel enterprises save energy, thereby reducing energy costs. That is to say, hotels generally operate 24 h a day, and a large number of internal residents and staff have energy demand all the time. The most important energy is electric energy, so electric energy will be in a state of constant consumption, which will cause great economic burden to hotel enterprises. In the face of electric energy consumption, hotel enterprises have long advocated energy conservation and consumption reduction, but they cannot achieve ideal results in artificial mode, and the emergence of the Internet of Things smart hotel system can solve this problem. Taking the hotel temperature control equipment as an example, in order to save energy and reduce consumption, it is necessary to control the operation


time of the temperature control equipment. Relying on the system, the Internet of Things equipment can flexibly use natural and renewable resources to reduce the demand for electric energy. For example, combining solar energy equipment, sensors and the intelligent system can achieve this goal: the sensors monitor the indoor ambient temperature, and when the temperature falls below the preset standard, solar heating is started. It can be seen that the smart hotel system of the Internet of Things has good economic benefits and can save a great deal of hotel operating costs [8–11]. Therefore, it is necessary to design the system for the hotel.

3 Design Scheme of Smart Hotel System Based on Internet of Things 3.1 Functional Layer Design The functional layer of the smart hotel system of the Internet of Things is mainly realized by the Internet of Things itself. In the design, the corresponding physical equipment should be selected according to the hotel’s business projects and internal management needs; the specific content varies from hotel to hotel, so this article discusses it only by functional classification. The first is the hotel business project function. In general, the main functions include the consumer check-in function, the consumer access management function, and the consumer indoor environment adjustment function. The basic realization method of the three functions is the same: select the corresponding Internet of Things device, connect it with the system communication layer and power supply after installation so that the device can be started, and keep the device in a continuous standby state after startup. When the consumer performs the corresponding behavior, or the environmental conditions change, the device acts by itself or driven by the intelligent terminal. Take the consumer’s indoor environment adjustment function as an example. The Internet of Things devices involved in this function include light intensity, temperature and humidity sensors, as well as lighting and temperature and humidity adjustment devices. The sensors collect the on-site light intensity, temperature and humidity information and transmit it to the intelligent terminal. The intelligent terminal judges the quality of the indoor environment according to the preset standard parameters. If the indoor light intensity or the temperature and humidity balance is found not to be up to standard, a parameterized instruction is generated according to the difference, and the instruction is received by the on-site lighting and temperature and humidity adjustment equipment through the communication layer, so that the equipment operates according to the command parameters, adjusts the indoor lighting intensity and temperature and humidity balance, and provides a good accommodation environment for consumers, which is conducive to hotel service quality. Figure 3 shows the operation process of the consumer’s indoor environment regulation function; a minimal sketch of this control loop is given after the figure. Other functional processes are similar, so it is unnecessary to elaborate [12–14]. The second is the hotel’s internal management demand function, which has many types and covers most of the hotel’s internal affairs. There are also some differences in the implementation methods, so they are discussed separately:


Fig. 3. Operation process of consumer’s indoor environment regulation function
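
As a concrete illustration of the control loop in Fig. 3, the following sketch shows how an intelligent terminal might compare sensor readings with preset standards and generate parameterized instructions for the adjustment devices. The threshold values, device names and the send_instruction helper are illustrative assumptions, not part of the original design.

```python
# Hypothetical sketch of the indoor environment regulation loop (Fig. 3).
# Preset standards, device names and send_instruction() are assumptions.

PRESET = {
    "light_lux": (300, 500),        # acceptable light intensity range
    "temperature_c": (22.0, 26.0),
    "humidity_pct": (40.0, 60.0),
}

def send_instruction(device: str, delta: float) -> None:
    """Placeholder for pushing a parameterized command through the communication layer."""
    print(f"-> {device}: adjust by {delta:+.1f}")

def regulate(readings: dict) -> None:
    """Compare each sensor reading with its preset range and drive the matching device."""
    devices = {"light_lux": "lighting", "temperature_c": "hvac", "humidity_pct": "humidifier"}
    for key, (low, high) in PRESET.items():
        value = readings[key]
        if value < low:
            send_instruction(devices[key], low - value)   # raise toward the lower bound
        elif value > high:
            send_instruction(devices[key], high - value)  # negative delta lowers the value

regulate({"light_lux": 220, "temperature_c": 27.5, "humidity_pct": 45})
```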

The first is the security monitoring function, that is, the hotel has the responsibility to provide security for the personal and financial safety of internal consumers. For this purpose, the security monitoring function can be developed on the basis of the Internet of Things, such as intelligent television monitoring. The intelligent TV monitoring function is mainly composed of remote control camera and auxiliary equipment, and is connected with the monitoring room, anti-theft alarm and other infrastructure through the communication path. During operation, the remote control camera and auxiliary equipment will clearly capture the scene image, synchronously collect the sound, and then transmit the shooting content to the intelligent terminal in real time. The intelligent terminal will make judgments based on the shooting content, such as recognizing the facial features of the person in the image, if it fails to recognize, it will cooperate with the anti-theft alarm system to notify the police, residents, security and other personnel, and even automatically lock the doors and windows (after informing the residents, if the residents choose to lock it, it is generally recommended to start it when there is no one in the room), to prevent illegal persons from escaping. At the same time, under the control of the intelligent terminal, the operation logic of the relevant Internet of Things devices can take into account the privacy of consumers, that is, if consumers enter the room


through the access control card, face recognition, etc., the monitoring class I devices will automatically shut down, and will not monitor the indoor situation, but instead act as the monitoring device at the door to protect the personal safety of consumers [15]. The second is the collaborative communication function, which establishes the internal video communication network with the system, and can connect the network of multiple locations by IP address to realize visual communication. This function has very prominent application value in hotel management. It can not only make all people in the hotel communicate more smoothly, but also be used for ultra-remote communication, which can effectively improve work efficiency and coordination. The third is BAS function, which is mainly to centralize the power supply, lighting, air conditioning, heating and ventilation, fire alarm, security monitoring, audio broadcasting and other facilities in the hotel. The distributed centralized management of these facilities through wireless communication network can effectively improve the system of building facilities. Relying on this function can provide consumers with a safe and comfortable environment. Taking the power supply facilities as an example, the BAS intelligent terminal will continuously record the power consumption of the entire hotel building, analyze the peak and trough periods of building power consumption, and then accurately control the power supply facility units according to the technical results, so as to promote the high consistency of power supply and demand, and effectively avoid the occurrence of power waste and other problems. The basic principle is that according to the maximum power consumption in different periods and the standard transmission capacity of each power supply facility in the unit, we can know how many power supply facilities need to be opened at most, and then we can supply power according to this scheme. Through the BAS function, many management tasks in intelligent buildings do not need manual intervention, so it can reduce the labor burden and reduce the equipment loss, which is beneficial to the building maintenance cost in the long term. The fourth is the fire management function. The internal situation of the hotel is complex, and there are some fire hazards that are unavoidable. This cannot be changed, but the majority of fire accidents will go through a process from small to large, so as long as the fire accidents are handled in time, it will not cause too much impact. For this purpose, hotels can develop fire management functions in the intelligent hotel management system of the Internet of Things. The fire management function design mainly focuses on the layout of sensors and fire fighting facilities, and requires comprehensive coverage, while the individual coverage does not overlap, so that the sensor can monitor the ambient temperature, smoke concentration, light color and intensity, and comprehensively judge whether there is a fire prevention problem on the site. Once there is an intelligent system, the fire fighting facilities will be started to put out the fire. On the contrary, if the situation does not change after the facility is started for a period of time, The fire alarm system will be started directly and the output of fire fighting facilities will be increased. 3.2 Communication Layer Design The design requirements of the communication layer of the smart hotel system of the Internet of Things are relatively special. 
On the one hand, for privacy reasons, the communication layer needs to have special communication channels for communication


between system equipment, terminals and personnel. On the other hand, for service quality reasons, the communication layer also needs to ensure the communication between consumers and hotel equipment, terminals and personnel. According to this requirement, the system communication layer will be designed with a two-layer architecture, as shown in Fig. 4.

Fig. 4. Two-layer architecture of system communication layer

According to Fig. 4, the core of the communication layer is the internal communication channel. In the channel, the system equipment, terminals and personnel are connected through IP authentication, so that the equipment, terminals and personnel can communicate with each other in this channel. When the external consumer wants to communicate with the hotel equipment, terminals and personnel, the signal is transmitted to the transfer node. If the node determines that the signal comes from the consumer’s housing terminal, The signal will be transmitted to the internal communication channel for communication, otherwise, the transfer will be rejected and the signal will be rejected. According to the above requirements, the first step of the communication layer design is to select the network communication protocol. A reasonable choice can ensure the operation quality of the communication layer, achieve the above requirements, and take into account the needs of real-time communication and other aspects to ensure the hotel service response speed. From this point of view, because a single device, terminal and person may receive dozens or even more messages at the same time, in order to meet the requirements, it is necessary to select a network protocol with sufficient communication capacity and good security. The Ethernet network protocol meets this requirement. This network protocol is characterized by large capacity, but has some special network properties, only authenticated users can access the network channel. These characteristics make the Ethernet network free from communication congestion, which is conducive to real-time information transmission. After the selection of the network communication protocol, the network layout should be carried out, that is, because the hotel equipment, terminals, personnel and consumers usually communicate in the internal environment of the hotel, the communication network layout will also be carried out in this environment. The specific methods are as follows: first, develop the Ethernet port in the Web page; Second, establish an identity authentication system on the port, set the hotel staff as administrators, and then set the consumer household end as “allowed to access users”


through the administrator, thus forming a two-level communication channel in the communication network that serves administrators on one level and administrators together with consumers on the other. In addition, the use conditions of this Ethernet communication protocol are relatively demanding and it must be established in a good Internet environment. However, the internal environment of the hotel is relatively complex and may cause interference to the signal, thus affecting the communication quality. With this in mind, if there is signal interference, it is recommended to eliminate it as much as possible; if it cannot be eliminated, other communication protocols, such as a virtual private network protocol, can be considered, depending on the actual situation. 3.3 Terminal Layer Design The terminal layer of the system is mainly responsible for processing various kinds of actual information, judging the actual situation according to the information, and then producing accurate instructions. Therefore, the key of the terminal layer design is to realize the machine recognition function. To meet this requirement, it is necessary to use algorithms that let the machine recognize information features with a thinking logic similar to that of human beings and then process them further, so algorithm selection must be done well in the design. For example, the AHP algorithm can be selected, which is a multi-scheme or multi-objective decision-making method. It can extract the characteristics of the overall information, treat all information as one problem according to these characteristics, establish the problem logic, and make judgments according to the preset criteria, with highly accurate results. Formula (1) is the expression of the AHP weight calculation:

\omega_i = \frac{1}{n} \sum_{j=1}^{n} \frac{a_{ij}}{\sum_{k=1}^{n} a_{kj}}   (1)

where a_{ij} is the pairwise comparison value of element i against element j, n is the number of compared elements, the inner sum over k normalizes each column of the comparison matrix, and ω_i is the resulting weight of element i. It is worth noting that the AHP algorithm is not the only algorithm that can be used in the terminal layer. In fact, other algorithms can be selected, such as the K-means method and image recognition methods. These algorithms do not conflict with each other and can be combined, but the actual situation must be taken into account when selecting them to ensure applicability. After the algorithm is selected, the algorithm module can be designed and implemented, for example in Java.
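
As a hedged illustration of formula (1), the following sketch computes AHP weights by normalizing each column of a pairwise comparison matrix and averaging across rows; the example matrix is invented for demonstration only.

```python
# Minimal sketch of the AHP weight calculation in formula (1).
# The 3x3 pairwise comparison matrix below is a made-up example.

def ahp_weights(a):
    """omega_i = (1/n) * sum_j( a[i][j] / sum_k a[k][j] )"""
    n = len(a)
    col_sums = [sum(a[k][j] for k in range(n)) for j in range(n)]
    return [sum(a[i][j] / col_sums[j] for j in range(n)) / n for i in range(n)]

comparison = [
    [1.0, 3.0, 5.0],   # factor 1 compared against factors 1..3
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
]
print(ahp_weights(comparison))   # the weights sum to 1.0
```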

4 Conclusion After the combination of the Internet of Things and intelligent technology, the smart hotel system will further develop. Therefore, in order to realize the system, hotel staff should fully understand the system design method. The Internet of Things smart hotel system can improve the convenience, real-time and security of hotel service quality, which shows that the system has good application value and is worth promoting.


References 1. Dammak, M., Aroua, S., Senouci, S.M., et al.: A secure and interoperable platform for privacy protection in the smart hotel context. In: 2020 Global Information Infrastructure and Networking Symposium(GIIS) (2020) 2. Blue, L., Marchal, S., Traynor, P.G., et al.: Lux:Enabling ephemeral authorization for displaylimited IoT devices. In: IoTDI2021:International Conference on Internet-of-Things Design and Implementation (2021) 3. Sanin, C., Haoxi, Z., Shafiq, I., et al.: Experience based knowledge representation for Internet of Things and cyber physical systems with case studies. Future Gener. Comput. Syst. 92, 604–616 (2019) 4. Dolfsma, W., Mahdad, M., Hasanov, M., et al.: A smart web of firms, farms and internet of things (IOT): enabling? Collaboration-based business models in the agri-food industry. Br. Food J. 124(6), 1857–1874 (2022) 5. Zhang, H., Uddin, M., Hao, F., et al.: MAIDE: Augmented Reality (AR)-facilitated mobile system for onboarding of Internet of Things (IoT) devices at ease. ACM Trans. Internet Things 2022(2), 3 (2022) 6. Salim, S., Turnbull, B., Moustafa, N.: Data analytics of social media 3.0: privacy protection perspectives for integrating social media and Internet of Things (SM-IoT) systems. Ad hoc Netw. 2022(Apr.), 128 (2022) 7. Zhang, Q., Zhu, L., Li, Y., et al.: A group key agreement protocol for intelligent internet of things system. Int. J. Intell. Syst. 37(1), 699–722 (2022) 8. Huang, Q., Yang, Y., Wang, L.: Secure data access control with ciphertext update and computation outsourcing in fog computing for internet of things. IEEE Access 2019, 12941–12950 (2019) 9. Gong, B., Wu, Y., Wang, Q., et al.: A secure and lightweight certificateless hybrid signcryption scheme for Internet of Things. Future Gener. Comput. Syst. 127, 23–30 (2022) 10. Zvarikova, K., Horak, J., Bradley, P.: Machine and deep learning algorithms, computer vision technologies, and internet of things-based healthcare monitoring systems in COVID-19 prevention, testing, detection, and treatment. Am. J. Med. Res. 2022(1), 9 (2022) 11. Hermanu, C., Maghfiroh, H., Santoso, H.P., Arifin, Z., Harsito, C.: Dual mode system of smart home based on internet of things. J. Robot. Control (JRC). 3(1), 26–31 (2022) 12. Jiang, H.: Research on hotel management based on internet of things and big data analysis. Int. J. Reliab. Qual. Saf. Eng. 29(05), 2240004 (2022) 13. Nadkarni, S., Kriechbaumer, F., Rothenberger, M., Christodoulidou, N.: The path to the hotel of things: Internet of Things and big data converging in hospitality. J. Hosp. Tour. Technol. 11(1), 93–107 (2020) 14. Rajesh, S., et al.: Detection of features from the internet of things customer attitudes in the hotel industry using a deep neural network model. Measur. Sens. 22, 100384 (2022). https:// doi.org/10.1016/j.measen.2022.100384 15. Awotunde, J.B., Misra, S. (2022). Feature extraction and artificial intelligence-based intrusion detection model for a secure internet of things networks. In: Misra, S., Arumugam, C. (eds) Illumination of Artificial Intelligence in Cybersecurity and Forensics. Lecture Notes on Data Engineering and Communications Technologies, vol. 109, pp. 21–44. Springer, Cham (2022). https://doi.org/10.1007/978-3-030-93453-8_2

Construction of Campus English Education Resources Information Platform Based on Internet of Things Lijun Yang(B) , Zhihong Wang, and Dongxia Zhao Shandong Agriculture and Engineering University, Jinan 250100, China [email protected]

Abstract. In order to realize the application functions of collecting, processing, retrieving, sharing and reusing English teaching resources, this paper constructs an English education resource information platform based on Internet of Things technology and smart campus construction technology. According to the construction requirements of the intelligent teaching resource service cloud platform, the basic service layer and application service layer of the software system are designed, and the system test results of the English education resource information platform are given. The test results show that with 300 concurrent users, the average response time for reading is 0.901 s and the average response time for writing is 2.82 s, both within the normal range. Surveys show that students and teachers have given good feedback on the teaching effect of the English education resource information platform. Keywords: Teaching resource service platform · Applied Undergraduate College · Smart campus · Cloud computing

1 Introduction With the rapid development of information technology, information construction has penetrated into all fields of social development, affecting and changing people’s lifestyle [1]. The smart campus is generally considered the further improvement and development of the digital campus, and a higher form or stage of educational informatization. In terms of the sharing of resources and services, they can be easily obtained by anyone, anytime and anywhere. A smart campus can provide a transparent and efficient management platform and real-time, effective data analysis, and realize a flexible and independent teaching mode, a ubiquitous learning environment, and convenient and thoughtful campus life. Nowadays, more and more colleges and universities have joined the construction of smart campuses and achieved corresponding results [2]. The construction of a smart campus can provide strong support for promoting educational informatization and accelerating educational modernization, which has important application value and far-reaching practical significance [3]. With the continuous development of smart campus construction and the introduction of the general framework for smart campus construction GB/T 36342-2018, more and more universities have joined the ranks of smart campus construction.


By building a smart campus, making full use of information infrastructure and resources, providing a platform to support various applications in the school, and creating a digital, networked, insightful and intelligent application environment, schools can solve many problems in the informatization construction of higher education. This paper specifically studies the software design of the intelligent English teaching resource service platform and determines its basic service layer and application service layer.

2 Research Methods 2.1 Design of Basic Service Layer MongoDB Design Structure Planning The file system is mainly used for the storage of unstructured data, which saves the files used or generated by the system to the distributed file system. It is required that files can be saved in large quantities, classified and stored according to the file size, and can reach at least one billion level of storage, and can be read quickly. Therefore, mongodb is a mature solution. Structural Design Planning of File System

(1) Distinguish according to source. The file system divides files into two types: original files and derivative files. A. Original file: a file uploaded by the user, with names starting with S (source). B. Derivative file: because of some business requirement, an original file is converted into another file or decomposed; the additional file generated from the original file is called a derivative file, with names starting with D (derived). (2) Distinguish according to size. The file system divides files into several types according to file size. The main types are the following three: A. Within 1 MB: use BSON storage, starting with BS. B. Within 16 MB: use BSON storage, starting with BS. C. 16 MB or more: use GridFS storage, starting with FS. Small categories are created under each main category and divided by unit plus number range, for example: K10 indicates a file smaller than 100 KB; K20 indicates a file smaller than 200 KB; M01 indicates a file of roughly 400 KB up to 1 MB.

The first case is X(a) ≥ K, which means ST must be larger than K, so the option price is simply S0 − Ke^{−rT}. The second is X(b) < K, which cannot happen in the real market since it would mean K − ST is always larger than 0. Now, in the final case X(a) ≤ K ≤ X(b), we have

V_c = e^{-rT} E[\max(S_T - K, 0) \,|\, \mathcal{F}_0]
    = \frac{C e^{-rT}}{\sqrt{2\pi}} \int_a^b \max(X(z)-K, 0)\, \exp\!\left(-\frac{z^2}{2}\right) dz
    = \frac{C e^{-rT}}{\sqrt{2\pi}} \int_{X^{-1}(K)}^{b} \left[ (A-K)\, e^{-z^2/2} + \frac{B\,(\exp(gz)-1)}{g}\, \exp\!\left(\frac{(h-1)z^2}{2}\right) \right] dz,

where V_c is the option price. Then, using the same process as in the calculation of (10), we get

V_c = e^{-rT}(A-K)\,\frac{\Phi(b)-\Phi(X^{-1}(K))}{\Phi(b)-\Phi(a)}
    - \frac{e^{-rT}B}{\Phi(b)-\Phi(a)}\,\frac{\Phi(b\sqrt{1-h})-\Phi(X^{-1}(K)\sqrt{1-h})}{g\sqrt{1-h}}
    + \frac{e^{-rT}B}{\Phi(b)-\Phi(a)}\, e^{\frac{g^2}{2-2h}}\,\frac{\Phi(b^*)-\Phi(X^{-1}(K)^*)}{g\sqrt{1-h}}.   (12)

Even though (12) is a complicated formula, the following proposition shows our model is actually a generalized BS model.


Proposition 1. Our model can degenerate to the BS model.

Proof. Note that the lognormal distribution belongs to the distribution family (5); namely, if we set

A = S_0 e^{\mu T}, \quad B = \sigma\sqrt{T}\, S_0 e^{\mu T}, \quad g = \sigma\sqrt{T}, \quad h = 0, \quad a = -\infty, \quad b = +\infty,   (13)

then the underlying price follows a lognormal distribution S_T \sim S_0 e^{\mu T + \sigma\sqrt{T} Z}. This is the distribution that Black & Scholes (1973) assumed the underlying price satisfies. Then the non-arbitrage condition (11) implies

S_0 e^{\mu T}\, e^{\frac{1}{2}\sigma^2 T} + S_0 e^{\mu T} - S_0 e^{\mu T} = S_0 e^{rT}.

As a result, we have

\mu = r - \tfrac{1}{2}\sigma^2   (14)

when calculating the option price. Then, combining (12), (13) and (14), and noting that X^{-1}(K) = \frac{\ln(K/S_0) - rT + \frac{1}{2}\sigma^2 T}{\sigma\sqrt{T}} = -d_2, where d_2 = \frac{\ln(S_0/K) + (r - \frac{1}{2}\sigma^2)T}{\sigma\sqrt{T}} and d_1 = d_2 + \sigma\sqrt{T}, the three terms of (12) reduce to

V_c = \big(S_0 e^{-\frac{1}{2}\sigma^2 T} - K e^{-rT}\big)\,\Phi(d_2) - S_0 e^{-\frac{1}{2}\sigma^2 T}\,\Phi(d_2) + S_0\,\Phi(d_1) = S_0\,\Phi(d_1) - K e^{-rT}\,\Phi(d_2),

which has the same form as the BS formula. Thus the proof is complete.

The above proposition demonstrates the rationality of our model. Now we will show that our model has a better performance in the real market based on the following empirical study.
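
The degeneration in Proposition 1 can also be checked numerically. The sketch below evaluates formula (12), as reconstructed above, with the parameter choice (13) (using large finite bounds in place of ±∞) and compares it with the classical Black-Scholes price. The concrete numbers are arbitrary test values, not data from the paper.

```python
# Hedged numerical check: formula (12) with parameters (13) should reproduce
# the Black-Scholes call price when h = 0 and the bounds a, b are very wide.
from math import erf, exp, log, sqrt

def Phi(x):                      # standard normal CDF
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def price_eq12(S0, K, r, T, A, B, g, h, a, b):
    X_inv_K = log(1.0 + g * (K - A) / B) / g          # X^{-1}(K), valid for h = 0
    denom = Phi(b) - Phi(a)
    s = sqrt(1.0 - h)
    star = lambda z: z * s - g / s                    # the z* shift appearing in (12)
    t1 = exp(-r * T) * (A - K) * (Phi(b) - Phi(X_inv_K)) / denom
    t2 = -exp(-r * T) * B / denom * (Phi(b * s) - Phi(X_inv_K * s)) / (g * s)
    t3 = exp(-r * T) * B / denom * exp(g * g / (2 - 2 * h)) \
         * (Phi(star(b)) - Phi(star(X_inv_K))) / (g * s)
    return t1 + t2 + t3

def black_scholes_call(S0, K, r, sigma, T):
    d1 = (log(S0 / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S0 * Phi(d1) - K * exp(-r * T) * Phi(d2)

S0, K, r, sigma, T = 3.0, 3.1, 0.02, 0.25, 40 / 365   # arbitrary test values
mu = r - 0.5 * sigma ** 2                              # condition (14)
A = S0 * exp(mu * T)
g = sigma * sqrt(T)
print(price_eq12(S0, K, r, T, A, g * A, g, 0.0, -30.0, 30.0))
print(black_scholes_call(S0, K, r, sigma, T))          # the two prices agree
```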

3 Empirical Study In this section, we will do an empirical study based on the Shanghai 50ETF option. After pre-processing the data and estimating the parameters, our model, Zhu’s model and BS model will be compared to show whether our model can better describe the option price.


3.1 Pre-process the Data We use Shanghai 50ETF European call options from October 2021 to August 2022 as the data of this research. To eliminate sample noise, we apply several standards to filter the data. First of all, we use the closing price to represent the option price, and the Depository Institutions Repo Rate is chosen as the risk-free interest rate. Secondly, to eliminate the effect of time value, we only retain options with expiry time between 37 and 43 days. Finally, options with price no more than $1/20 are deleted, because low-priced options are often volatile in the real market (Zhu and He 2018). In the end we have 443 groups of data. We choose half (224 groups) of these data as the train set to estimate the parameters in the three models; the other half (219 groups) is used as the test set, on which we evaluate the performance of the three models with the parameters obtained from the train set.

3.2 Estimate the Parameters To show the model performance in option pricing, we shall first find the best parameters to fit the real market data. Here we follow the approach used by Christoffersen & Jacobs (2004) and choose the parameters that minimize the errors between the actual option price and the model-obtained option price. Denote the mean-squared error (MSE) as

MSE = \frac{1}{n} \sum_{i=1}^{n} (V_{market} - V_{model})^2,   (15)

where V_{market} and V_{model} are the actual and model-obtained option prices respectively, and n is the total number of data groups. To minimize (15), we adopt two methods to find the best parameters for the different models. For the BS model and Zhu's model, we use the Simulated Annealing (SA) algorithm, a popular global optimization algorithm, to determine the parameters. For our model, because a larger number of parameters must be determined, it is difficult to apply a global optimization algorithm; as a result, we use the optimizer in Matlab, which finds a local minimum around a given initial value, to minimize (15). We choose initial parameters that degenerate our model to the BS model as in (13). Although the parameters we find may not be the global optimal solution, our model is still superior to Zhu's model and the BS model, as will be shown in the next subsection. Assuming ST satisfies (7), to adapt our model to different data we now assume e^{-rT} S_T / S_0 follows the truncated g&h distribution, namely,

\frac{e^{-rT} S_T}{S_0} \sim \frac{e^{-rT} A}{S_0} + \frac{e^{-rT} B}{S_0}\,\frac{\exp(g\tilde{Z}) - 1}{g}\,\exp(h\tilde{Z}^2/2),

where the density function of \tilde{Z} is described in (6). Denote A_0 = \frac{e^{-rT} A}{S_0}, B_0 = \frac{e^{-rT} B}{S_0}, and assume A_0, B_0, g, h, a, b are constants; then the non-arbitrage condition (11) transforms to

A_0 - \frac{B_0}{\Phi(b)-\Phi(a)}\,\frac{\Phi(b\sqrt{1-h}) - \Phi(a\sqrt{1-h})}{g\sqrt{1-h}} + \frac{B_0}{\Phi(b)-\Phi(a)}\, e^{\frac{g^2}{2-2h}}\,\frac{\Phi(b^*) - \Phi(a^*)}{g\sqrt{1-h}} = 1.   (16)
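
As a rough sketch of the calibration step just described, the snippet below fits (B0, g, h, a, b) by minimizing the MSE (15) on a training set, with A0 fixed by the non-arbitrage condition (16) and the call price computed from formula (12) as reconstructed above. The scipy optimizer stands in for the Matlab local optimizer actually used in the paper, and the training rows are placeholders, not the real 50ETF data.

```python
# Hedged calibration sketch for Sect. 3.2: minimize the MSE (15) over (B0, g, h, a, b).
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize, brentq

def a0_from_constraint(B0, g, h, a, b):              # non-arbitrage condition (16)
    s, denom = np.sqrt(1 - h), norm.cdf(b) - norm.cdf(a)
    star = lambda z: z * s - g / s
    t2 = B0 / denom * (norm.cdf(b * s) - norm.cdf(a * s)) / (g * s)
    t3 = B0 / denom * np.exp(g**2 / (2 - 2 * h)) * (norm.cdf(star(b)) - norm.cdf(star(a))) / (g * s)
    return 1.0 + t2 - t3

def call_price(B0, g, h, a, b, S0, K, r, T):         # formula (12), rescaled back to S_T
    A0 = a0_from_constraint(B0, g, h, a, b)
    A, B = S0 * np.exp(r * T) * A0, S0 * np.exp(r * T) * B0
    X = lambda z: A + B * (np.exp(g * z) - 1) / g * np.exp(h * z**2 / 2)
    if X(a) >= K:                                    # option always in the money
        return S0 - K * np.exp(-r * T)
    xk = brentq(lambda z: X(z) - K, a, b)            # numerical X^{-1}(K)
    s, denom = np.sqrt(1 - h), norm.cdf(b) - norm.cdf(a)
    star = lambda z: z * s - g / s
    t1 = np.exp(-r * T) * (A - K) * (norm.cdf(b) - norm.cdf(xk)) / denom
    t2 = -np.exp(-r * T) * B / denom * (norm.cdf(b * s) - norm.cdf(xk * s)) / (g * s)
    t3 = np.exp(-r * T) * B / denom * np.exp(g**2 / (2 - 2 * h)) \
         * (norm.cdf(star(b)) - norm.cdf(star(xk))) / (g * s)
    return t1 + t2 + t3

def mse(params, rows):                               # formula (15)
    B0, g, h, a, b = params
    if not (B0 > 0 and g > 0 and h < 1 and a < b):
        return 1e6                                   # penalize invalid parameters
    try:
        errs = [call_price(B0, g, h, a, b, *row[:4]) - row[4] for row in rows]
    except ValueError:                               # strike falls outside the support
        return 1e6
    return float(np.mean(np.square(errs)))

# (S0, K, r, T, market_price) placeholders standing in for the training options
rows = [(3.0, 3.1, 0.02, 40 / 365, 0.020), (3.0, 3.0, 0.02, 40 / 365, 0.070),
        (3.0, 2.9, 0.02, 40 / 365, 0.140)]
x0 = np.array([0.05, 0.05, 0.0, -6.0, 6.0])          # BS-like starting point, as in (13)
fit = minimize(lambda p: mse(p, rows), x0, method="Nelder-Mead")
print(fit.x)                                         # fitted (B0, g, h, a, b)
```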


Using the method mentioned above, we can now determine the parameters B0, g, h, a, b for our model, as shown in Table 1; Eq. (16) is then applied to determine A0. The parameters in Zhu's model and the BS model are given in Table 2.

Table 1. The parameters in our model

Parameters    B0       g        h        a         b
Our model     0.0480   0.3457   0.6560   -1.9846   1.6436

Table 2. The parameters in Zhu's and BS model

Parameters    σ        a         b
Zhu's model   0.2688   -0.5446   0.0963
BS model      0.2021

3.3 Compare the Results After the parameters are determined for each model, we can now compare the performance of the three models in predicting the option price.

Table 3. The MSE for the three models

MSE           train set    test set
Our model     3.9837e-5    5.0859e-5
Zhu's model   4.4779e-5    5.6074e-5
BS model      5.2297e-5    5.9480e-5

Table 3 shows the MSE on the train set and the test set under the three different models. It is easy to see that our model performs better than both Zhu's model and the BS model, since it fits better to both the train set and the test set. To be specific, for the train set, Zhu's model is 14% more accurate than the BS model while our model is 24% more accurate than the BS model. When predicting the option price for the test set, our model performs 14% better than the BS model, while Zhu's model only performs 5% better. For the test set, we also calculated the relative errors between the model-obtained price and the real market price. The amounts of data with relative errors less than 1% and more than 10% are shown in Table 4. As shown in the table, our model has a lower probability of making big mistakes and a higher probability of providing very accurate predictions.


Table 4. The amount of data with relative errors less than 1% or more than 10%

Relative error   less than 1%   more than 10%
Our model        61             27
Zhu's model      53             30
BS model         46             28
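
For completeness, a small sketch of how the tallies in Table 4 (and the MSE in (15)) can be computed from paired market and model prices is given below; the two example arrays are placeholders, not the actual data set.

```python
# Hypothetical example: computing the MSE and the relative-error buckets of Table 4.
import numpy as np

v_market = np.array([0.061, 0.118, 0.204, 0.332])   # placeholder market prices
v_model  = np.array([0.060, 0.121, 0.229, 0.330])   # placeholder model prices

mse = np.mean((v_market - v_model) ** 2)             # formula (15)
rel_err = np.abs(v_model - v_market) / v_market
print(mse, int(np.sum(rel_err < 0.01)), int(np.sum(rel_err > 0.10)))
```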

4 Discussion In this article, we assume the underlying price follows a truncated g&h distribution. This assumption helps us describe some properties of the real market and the fact that the underlying price should have reasonable bounds. We first transformed the g&h distribution into a bounded one, then obtained the non-arbitrage condition. After that, we derived the pricing formula for European call options. The property that our model is a generalized BS model was established to show the rationality of our model. Finally, an empirical study showed that our model performs better than Zhu's model and the BS model; the fact that our model produces smaller errors in predicting the option price shows that it indeed improves on Zhu's model and the BS model. Finally, we would like to mention the limitations of our work. Firstly, our model does not take the time effect into account, so it is not as flexible as Zhu's model or the BS model. Also, in the empirical study we only use a local optimal solution of (15) to estimate the parameters, so there may still be room to improve the performance of our model. These problems will be the subjects of our future work.

References Badrinath, S.G., Chatterjee, S.: On measuring skewness and elongation in common stock return distributions: the case of the market index. J. Bus. 451–472 (1988) Badrinath, S.G., Chatterjee, S.: A data-analytic look at skewness and elongation in common-stockreturn distributions. J. Bus. Econ. Statist. 9(2), 223–233 (1991) Bates, D.S.: The crash of ’87: was it expected? The evidence from options markets. J. Financ. 46(3), 1009–1044 (1991) Black, F., Scholes, M.: The pricing of options and corporate liabilities. J. Polit. Econ. 81(3), 637–654 (1973) Christoffersen, P., Jacobs, K.: The importance of the loss function in option valuation. J. Financ. Econ. 72(2), 291–318 (2004) Duffie, D.: An extension of the Black-Scholes model of security valuation. J. Econ. Theory 46(1), 194–204 (1988) Dumas, B., Fleming, J., Whaley, R.E.: Implied volatility functions: empirical tests. J. Financ. 53(6), 2059–2106 (1998) Dutta, K.K., Babbel, D.F.: Extracting probabilistic information from the prices of interest rate options: tests of distributional assumptions. J. Bus. 78(3), 841–870 (2005) Fabozzi, F.J., Rachev, S.T., Menn, C.: Fat-tailed and skewed asset return distributions: implications for risk management, portfolio selection, and option pricing (2005)


Hoaglin, D.C.: Using Quantiles to Study Shape. Exploring Data Tables, Trends, and Shapes, pp. 417–460. Wiley (2006) Jorge, M., Boris, I.: Some properties of the Tukey g and h family of distributions. Commun. Statist.-Theory Meth. 13(3), 353–369 (1984) Longstaff, F.A.: Option pricing and the martingale restriction. Rev. Finan. Stud. 8(4), 1091–1124 (1995) McDonald, J.B., Bookstaber, R.M.: Option pricing for generalized distributions. Commun. Statist.Theory Meth. 20(12), 4053–4068 (1991) Savickas, R.: A simple option-pricing formula. Financ. Rev. 37(2), 207–226 (2002) Scott, L.O.: Option pricing when the variance changes randomly: theory, estimation, and an application. J. Finan. Quant. Anal. 22(4), 419–438 (1987) Sherrick, B.J., Garcia, P., Tirupattur, V.: Recovering probabilistic information from option markets: tests of distributional assumptions. J. Futur. Mark. 16(5), 545–560 (1996) Tukey, J.W.: Exploratory data analysis, vol. 2, pp. 131–160 (1977) Wiggins, J.B.: Option values under stochastic volatility: theory and empirical estimates. J. Financ. Econ. 19(2), 351–372 (1987) Zhu, S.P., He, X.J.: A modified Black-Scholes pricing formula for European options with bounded underlying prices. Comput. Math. Appl. 75(5), 1635–1647 (2018)

Construction of English Teaching Resource Database Based on Big Data Technology Li Chen1(B) , Linlin Wang2 , and Gagan Mostafa3 1 Foreign Language Teaching and Research Department, Changchun University of Finance and

Economics, Jilin 130122, China [email protected] 2 Aviation Foundation College, Aviation University of Air Force, Jilin 130022, China 3 South Valley University, Qena, Egypt

Abstract. In order to optimize English teaching resources and improve the effect of English teaching, the author proposes constructing an English teaching resource database based on big data technology. The author first constructs an English teaching resource database under the background of big data, and then introduces the overall design of the English teaching resource database platform together with the analysis and design of the database. Before developing the system, it is very important to design it; the key point is to give the overall design blueprint of the system according to the results of the requirements analysis, so as to prepare for the next step of system development. First, the overall framework of the system, the sub-modules included in the system, and the functional flow of each module are given; the data are then analyzed, designed, and finally evaluated against performance measures. The test results are: 245 requests, a throughput of 16.260.650, an average connection time of 0.203 s, and an average page opening time of 0.246 s, which confirms that the system platform works as expected. The web page loading speed of the English learning resource library platform is very fast, and the platform's concurrent performance has reached the desired target. The use of big data in English curriculum informatization can continuously update and improve educational information resources, and thus improve students' learning and their use of these resources. The establishment of an English teaching resource library creates a broad space for reforming English teaching and improving the quality of English teaching in universities. Keywords: Big data technology · English teaching resource database · database · teaching mode

1 Introduction The concept of the "Internet +" action plan and the use of big data and cloud computing have had an impact on the development of information technology in education [1]. A digital learning resource library is an integrated system created with the help of modern information technology to store and access learning materials. In recent years, most colleges and universities have developed many professional and teaching methods, established their own academic resource libraries, and achieved many successes in building learning materials [2]. With the rapid development of information-based teaching, the construction of teaching resource platforms in some colleges and universities has developed from initial construction planning to practical use, and then to the management stage. Based on the development of digital technology and the network environment, English courses have also accelerated the process of modernization. The establishment of an English learning resource library will provide teachers and students with a variety of English learning resources and create new types of learning based on English learning materials, so that students can complete an independent and autonomous English learning process and improve their English skills [3].

2 Literature Review According to Kumaran, V. S. et al., in the data age many industries have collected very large amounts of data, and the value of data continues to grow exponentially [4]. Big data usually refers to important information and data; Men, J. believes this information is a valuable "investment" for many industries [5]. Gui, Y. believes that machine learning and data mining techniques can reveal patterns in the data [6]. In the information age, educational big data technologies such as machine learning and data mining can provide important information for professional training and can scientifically refine teaching methods and project management decisions, so it is necessary to study how they can lead to better education. Khan, B. said that in recent years, educational data mining has become more and more important to the strategy of schools and colleges [7]. In the era of big data, the actual and future needs of educational development, including the advancement of information technology and educational concepts, as well as the application of data mining in education, have attracted more and more researchers. In a teaching setting, the communication between teachers and students is usually face-to-face in the classroom. During this period, academic data is collected from interviews and surveys and is then analyzed and processed through big data or statistical methods. Soriano-Valdez, D. once evaluated the performance of a class of 22 elementary school students and studied the learning data using regression model trees [8]. The results showed that 45% of students were affected by peer intervention in the classroom. Class, personal and environmental impacts were 18% and 16%, respectively. It can be seen that big data technology is widely used in the field of education. This paper mainly uses big data technology to build an English education resource database.


3 Methods 3.1 Construction of English Teaching Resource Database Under the Background of Big Data Conducive to the Creation of a Multi-course System in Which Learners, Teachers, and Managers Share and Build a Win-Win Situation The library of digital resources provides an opportunity for independent and independent study of students, active learning, broadening the range of knowledge and improving cultural awareness; It encourages teachers to use the management of student learning through the library, and provides a platform for the development of integration for teachers, communication and interaction collaboration, and support for teacher development; It can prompt managers to integrate resources, save costs, and conduct “big data” management analysis on the learning traces left by students in the resource library, such as classrooms, grades, and click rates, and provide strategies for the next step of teaching and management. The construction and use of the resource library can not only meet the three-dimensional and networked teaching requirements, but also create a personalized, hierarchical and diversified college English curriculum system [9, 10]. Conducive to Improving the College English Curriculum System of Newly-Built Undergraduate Colleges and Universities, and Promoting the Teaching and Reform of College English Curriculum School A actively promotes the reform of college English teaching, the school-level organization organized many investigations and demonstrations on the college English reform plan, revised the college English teaching syllabus, and issued important documents such as the “College English Curriculum Reform Plan”, “Undergraduate Talent Training Plan” and “Public Compulsory Course Reform Plan”, each year, allocated special funds for reforming English courses in universities [11]. With the supervision and support of the school, the English department of College A has achieved significant progress in the development of teaching staff, teaching levels, teaching content and methods [12]. For this good reason, the creation of a digital library for teaching English in the university that integrates learning, teaching and management using modern information technology Output and analysis of big data based on the needs of reforming education in schools is a key aspect of research and performance. It is important to meet the requirements of teaching English in college, improve the teaching of English courses in college, and make the changes in teaching English in college [13]. 3.2 The Overall Structure and Functional Module Design of the English Teaching Resource Library The Overall Structure of the English Teaching Resource Library The focus of building an English teaching resource database platform is to create an interactive environment with rich resources, timely updates of resources, and open network teaching and learning. From resource management and full use to online collaboration between teachers and students, and then our process of student self-recovery, the English curriculum can be considered a minefield for English learning. The standard model of using the English library curriculum is shown in Fig. 1 [14].


Fig. 1. Framework diagram of the English teaching resource library platform

Design of Functional Modules of English Teaching Resource Library The English learning library mainly has four functional sub-modules: Shared library sub-module, best class management, network learning management and management management sub-module. Each sub-module has several functions to implement all the functions of the English learning library. Figure 2 is a block diagram of the equipment designed to work on the English language learning library [15, 16]. Shared resource library sub-module: Used to store uploaded media resources, animation resources, courseware resources and resource retrieval. Excellent course module: It is used to store and manage national, provincial and school-level excellent courses over the years, at the same time, the existing courses on the platform can also be released into excellent courses through evaluation. Excellent course template function, according to the evaluation standards of various excellent courses, the columns of the excellent course are made into templates, and teachers can fill in the content according to the columns of the template when publishing the excellent courses. Network teaching module: Including the creation and management of internal courses and external courses for teaching and students’ daily teaching. System management module: Including professional management, class management, user management, system maintenance [17]. 3.3 The Overall Design of the Database After identifying the requirements for English language learning resources in the library platform, the functions of the English learning library were obtained, and it was clarified that the system platform needs do data storage. In order to better and effectively store and manage the information contained in the English education information database platform, the overall design of the information should be done. For the English teaching resource database platform, from the perspective of functional modules, it mainly concentrates 6 categories of data: user management, resource management, course management, test question management, log management, and interactive data management. The 6 types of data analyzed, each type of data corresponds to an entity, entities are


Fig. 2. Functional block diagram of the teaching resource library platform

not independent, they have a corresponding relationship, if courses are created from resources, the resources for each course are independent resources, therefore, there is a 1-to-1 relationship between courses and resources. The system log records the operations of each user, each user has many operations, and there will be multiple log records, therefore, there is a many-to-many relationship between users and system logs [18]. 3.4 The Connotation and Composition of the Integrated Development of English Teaching Resources in the Era of Big Data The Meaning of the Development of English Teaching The development of English education is based on the development of students’ English language skills through various texts, pictures, audio, video, animations, microcourses, MOOCs and other methods. Help in the network, review the information on education. In order to improve students’ ability to use English effectively, and achieve the goals of knowledge, ability and effective English teaching, the best materials that meet the needs of college students [19]. The Composition of the Integrated Development of English Teaching Resources According to the current construction of English teaching resources, it can be divided into three parts: Teaching platform, learning center and material library. Teaching platforms generally include: User management, interactive Q&A, homework review, question bank management, online testing, online retrieval, and search engines, etc. The learning center mainly includes: Courseware index, case library, test question library, literature and so on. The material library mainly provides various teaching resources including text, images, animation, audio, video, games, etc. for teaching [20, 21]. The teaching platform is the basis of online teaching, the learning center is the extension of knowledge points, and the material library provides rich materials for teaching, the organic combination of the three constitutes a complete network teaching resource system [22].


4 Results and Analysis 4.1 Performance Test of English Teaching Resource Database Platform Testing of the English learning resource library platform attempts to expose as many problems and defects as possible, so as to achieve the goal of correcting quality issues found through software testing [23]. Performance measurement plays an important role in software quality, and the performance measurement of a software project (product or system) involves many test points and difficulties. Performance metrics typically cover clients, network systems, and servers. The specific elements of such a test generally include the following three aspects: one is the test of the client using the software system; another is the measurement of the software system's performance during transmission over the network; the third is the performance evaluation of the server-side software system. The following describes how the GTmetrix tool is used to measure the website performance of the target system platform, and how the Webkaka tool is used to measure page speed and apply load pressure to the target system platform. Webkaka tool stress test: use the Webkaka tool to perform a stress test on the English teaching resource library system platform, setting the number of concurrent accesses to 20, the duration to 5 min, and a 3 s delay between requests; the detailed test report obtained is shown in Fig. 3, and the overall report is shown in Table 1.

Fig. 3. Stress test detailed report (page opening time and cumulative throughput over time)


Table 1. Overall report

time       concurrent users   number of requests   success   fail   Throughput   average connection time   average open time
22:57:02   10                 245                  245       0      16.260.650   0.203                     0.246
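
The kind of measurement summarized in Table 1 can also be approximated with a simple script. The sketch below issues concurrent requests against a placeholder URL and reports the request count and average connection and page opening times; the URL, concurrency level and delay are illustrative assumptions, and the real test was run with the Webkaka tool rather than this code.

```python
# Hypothetical re-creation of a small stress test (cf. Table 1): N concurrent workers,
# a fixed delay between requests, and averaged connection / open times.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://example.com/english-resources/"   # placeholder, not the real platform
CONCURRENT_USERS, REQUESTS_PER_USER, DELAY_S = 5, 3, 0.5

def one_request(_):
    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=10) as resp:
        connected = time.perf_counter()          # response headers received
        resp.read()                              # full page loaded
    opened = time.perf_counter()
    time.sleep(DELAY_S)
    return connected - start, opened - start

with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
    results = list(pool.map(one_request, range(CONCURRENT_USERS * REQUESTS_PER_USER)))

conn_times, open_times = zip(*results)
print(len(results), sum(conn_times) / len(results), sum(open_times) / len(results))
```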

Performance evaluation is based on the functional requirements of the platform. Through testing, it has been verified that the system platform performance meets the specified needs: the web page loading speed of the English learning resource library platform is fast, and the platform's concurrent access performance has reached the desired target [24]. 4.2 Application of Big Data in English Teaching Resource Database Big data-based analysis can contribute to the sustainable development of teaching resource libraries. Through the user information generated in the resource library, teachers can understand which locations, areas, contents and resources of the library are used, and conduct research on curriculum, instruction and feedback. Big data analysis can help teachers understand students' learning states and behaviors, assess those behaviors, and analyze large volumes of student learning records in order to provide timely academic support. Teachers can also reflect on their own teaching behavior, reconsider the design, content and structure of the library's curriculum, and continuously update and improve the resource construction based on information from teacher-student interaction. The development of library resources can, in turn, improve the effectiveness of students' learning and their use of library materials [20].

5 Conclusion The purpose of creating an educational library platform is to facilitate the teaching and learning of teachers and students using modern computer technology and information technology. In the era of big data, English teachers should change their teaching strategy in time, often adapt to the requirements of teachers and students in the time of big data, study the development of information network, and use the network appropriately. Information to create English learning resources work, achieve English learning goals, and create a library of English learning. It is an important tool for the use of technology in daily learning in schools. This educational resource library uses three platforms: a shared library, an excellent classroom, and a network learning center. Identify the requirements of the information library platform, determine the content of the development of English education in the era of big information, and use the information major in English language learning libraries. During the development and testing, the web page loading speed of the English language learning library platform is fast, and the high performance platform has simultaneously reached the desired goal. In the era of big data, the creation, integration and development of more English skills teaching is a teaching need, as well as the need of the time.



USMOTE: A Synthetic Data-Set-Based Method Improving Imbalanced Learning

Junyi Wang(B)

Xi'an Jiaotong University, Xi'an 710049, China
[email protected]

Abstract. Imbalanced learning plays an important role in our daily life; its data sets feature large amounts of normal samples and a small percentage of abnormal ones. To handle such imbalanced data cases, machine learning models like Decision Tree and Logistic Regression have been widely applied. However, model performance is often degraded by the massive imbalance. To mitigate this problem, sampling methods are used to balance the data sets. This work combines random undersampling with SMOTE (Synthetic Minority Over-sampling Technique) to synthetically modify data sets and train models, which achieves better recall_score performance in our experiments. Additionally, we correct a mistake common in prior work on sampling methods: evaluating models on the transformed data set, which defeats the original purpose. Finally, we improve the Logistic Regression algorithm using this data-set-based technique, allowing it to perform better when handling imbalanced data cases. Keywords: machine learning · imbalanced learning · random undersampling · SMOTE · logistic regression

1 Introduction

An imbalanced data set occurs when the classification categories in it are not equally represented. Imbalanced data sets can be seen everywhere in our daily life. Often the data reflecting people's behavior is composed of large amounts of "normal" samples along with a tiny percentage of "abnormal" ones, for example 950 to 50, which is a huge imbalance. In a specific field like credit card fraud detection, the frauds we need to predict precisely make up around 0.2% of all transactions [12]. Under this situation, imbalanced learning arises. It is of great significance to efficiently recognize the abnormal examples, that is, the minority class in the data sets. In previous work, the machine learning methods shown in Fig. 1, such as Decision Tree classification, Logistic Regression, Random Forest and so on, have been widely used in this field and have achieved satisfactory results [14]. With more attention on the study of imbalanced learning, models that can discriminate more precisely between the majority and minority classes are highly required in fields like credit card fraud detection. However, because of the massive imbalance in these data sets [7], progress in designing more efficient machine learning algorithms has been slow [9]. Despite this, the technique of artificially modifying imbalanced data
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 Z. Xu et al. (Eds.): CSIA 2023, LNDECT 173, pp. 554–564, 2023. https://doi.org/10.1007/978-3-031-31775-0_57


Fig. 1. Machine learning methods

sets and then training models on these synthetic sets in order to optimize performance is widely utilized. For example, Dedi Rosadi successfully improved the performance of a classification model for the prediction of peatland fires using the SMOTE (Synthetic Minority Over-sampling Technique) approach, a clearly positive result of imbalanced machine learning based on artificially modified data sets [11]. In our work, we first correct the mistake of testing models on modified data sets. Then we discuss a sampling method called Undersample-combined Synthetic Minority Over-sampling Technique (USMOTE) and the influence it has on imbalanced machine learning. After that, we use a real-world data set to show the improvement in model performance obtained with USMOTE. Section 2 reviews previous work dealing with imbalanced data sets in machine learning. Section 3 introduces the performance measures and Sect. 4 presents the methodology of this work. In Sect. 5, the experimental results are displayed. Finally, Sect. 6 discusses future work.

2 Previous Work on Imbalanced Learning

Machine learning methods have been frequently applied in imbalanced learning. However, as Fig. 2 shows, special properties of these data sets, such as massive imbalance, negatively influence the accuracy of the algorithms. For feed-forward neural networks, DeRouin and Brown found that an imbalanced data set may cause the model to fail to discriminate precisely between the classes [3]. This disappointing result calls for methods that improve the performance of machine learning classifiers by balancing the imbalanced data sets, for example with random sampling. Means to balance data sets mainly consist of undersampling and oversampling. Although Gayan K. Kulatilleke points out that both make the distribution of the training data set differ from that of the real-world one, which may cause a problem called sample selection bias [8], our final goal still lies in pursuing better model performance by optimizing algorithms, including artificially modifying data sets. The undersampling method was successfully applied by Kubat and Matwin in 1997. They undersampled the majority class while the distribution of the minority class stayed the same, achieving a satisfactory result on the ROC curve [6]. Afterwards, Domingos


Fig. 2. Two examples of massively imbalanced data sets

(1999) used an approach called "MetaCost" and found that, measured by the same standard, undersampling may perform better than oversampling [5]. After the single use of either undersampling or oversampling, Ling and Li combined the two to explore a better approach [10]. In order to balance the data set, they oversampled the minority examples with replacement before undersampling the majority ones. Unfortunately, when evaluated by the lift index, the combined method did not show a significant improvement in model performance. However, our work differs from theirs. Nitesh V. Chawla proposed a method called Synthetic Minority Over-sampling Technique (SMOTE) [1], which uses the KNN algorithm to oversample the minority class by creating new samples instead of replicating existing ones. Rather than operating in the "data space", the method works in the "feature space", which achieves better performance in machine learning algorithms when handling imbalanced data cases. However, there is a persistent problem: models trained on oversampled or undersampled data sets are also tested on them, which is far from the original purpose. For example, in the study of stroke prediction completed by Soumyabrata Dev [4], the data set features 28524 "normal" samples and 548 "abnormal" ones, a great imbalance. To balance the data, S. Dev randomly chose 548 normal samples from the total 28524 and kept the abnormal ones the same. They then trained and tested models on the undersampled data set, which achieved good results. However, the final aim of the prediction is to apply the model to real-world data, which means that testing on the undersampled data set defeats the original purpose. Additionally, Ahmad B. Hassanat shares the same concern; he points out that the sampling method may cause poor performance when the model is applied to real-world data sets [13]. So in our work, we will study the same question, test the model on real-world data, and correct this mistake. Moreover, as suggested in the SMOTE article, a model trained on the SMOTEed data set performs better when SMOTE is combined with undersampling. However, previous work only showed how changes in the undersampling rate affect prediction accuracy when the SMOTE rate is fixed. The most important question, how to select the two parameters to optimize machine learning models, has not been discussed. So in this work, we will first use the Undersample-combined Synthetic Minority Over-sampling Technique (USMOTE) to improve imbalanced machine


learning, and then use a real-world data set as an experiment to show the effect of the method. Most importantly, we will improve the Logistic Regression algorithm by applying USMOTE, which is innovative and of practical use.

3 Performance Measures

When measuring a machine learning model, the confusion matrix, shown in Fig. 3, is of great importance. Many evaluation standards derive from this matrix.

Fig. 3. Confusion Matrix

The most frequently used standards are the Accuracy Score and the Precision Score, which represent the proportion of correct predictions (both negative and positive) and the proportion of predicted positives that are truly positive, respectively. The higher the score, the better the model appears to be. However, the conclusion is totally different when it comes to imbalanced learning. A high accuracy score or a great precision score does not guarantee a good model when dealing with a greatly imbalanced data set. Figure 4 takes a data set featuring 950 negative samples and 50 positive ones as an example, where the target is to predict the positive class. The model shown there scores 0.955 in accuracy and 1 in precision, yet its miss rate is 0.9 (it finds only 5 of the 50 positives), which is unacceptable in imbalanced data cases. So we do not select the accuracy or precision score as our performance measure.

Fig. 4. A model with high accuracy and precision score but performs poorly

As the cost of mistaking an abnormal example (minority class) for a normal one (majority class) is often dramatically higher than the reverse, precisely identifying the minority class is considered the most important task in imbalanced learning [8]. So the final performance


measure in our work is the degree to which the true positive predictions cover all of the positive class (we call it the Recall Score), which is of more practical use for detecting the minority class. It means that we can tolerate a few more mistakes of judging normal as abnormal, if this contributes to a higher Recall Score.

Accuracy = (TP + TN) / (TP + FP + FN + TN)   (1)

Precision = TP / (TP + FP)   (2)

Recall = TP / (TP + FN)   (3)

As mentioned before, a problem in previous work is that the testing process is sometimes carried out on the modified data set. As a result, even when the score is high, we still do not know how the model performs on the real-world data set, which is far from its original purpose. So in this work, we evaluate all models on the original data set instead of the modified set, which serves the initial purpose and also addresses the problem raised by Ahmad B. Hassanat [13].
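To make these measures concrete, the following minimal Python sketch (an illustration added here, not code from the paper; it assumes scikit-learn is available) reproduces the 950/50 example of Fig. 4 and shows why accuracy and precision are misleading while recall exposes the poor detection of the minority class. In line with the argument above, the scores are computed on the original, untransformed labels.

import numpy as np
from sklearn.metrics import accuracy_score, precision_score, recall_score

# Hypothetical ground truth: 950 negative (majority) and 50 positive (minority) samples.
y_true = np.array([0] * 950 + [1] * 50)

# Hypothetical predictions of a model that finds only 5 of the 50 positives
# and never raises a false alarm (as in Fig. 4).
y_pred = np.array([0] * 950 + [1] * 5 + [0] * 45)

print("Accuracy :", accuracy_score(y_true, y_pred))   # 0.955 -- looks excellent
print("Precision:", precision_score(y_true, y_pred))  # 1.0   -- looks perfect
print("Recall   :", recall_score(y_true, y_pred))     # 0.1   -- reveals the 0.9 miss rate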

4 Methodology

4.1 Testing on Real-World Data Set

Previous work often tests models on the modified data sets and reports good performance. However, only when the model is evaluated on the original data set can it be of practical use. So here we compare the two results: the "fake" one and the "real" one. First, we choose the stroke prediction data set published on Kaggle, which comes from the same source as the one S. Dev [4] used and has since been updated. After that, we use the same sampling method he applied to balance the data set and train models on it. Then we follow the "wrong" testing criterion, test the models on the undersampled data set, and obtain the "fake" results. Meanwhile, we test them on the real-world data set and obtain the "real" performance. Finally, the two are compared.

4.2 USMOTE

The two sampling methods we use for balancing data sets are undersampling and oversampling. Undersampling means reducing the majority class while keeping the minority class unchanged, and oversampling represents the opposite. Figure 5 shows the different results of the various sampling methods on the same data set. In this work, we use random undersampling as the undersampling method and SMOTE as the oversampling method, combining both to balance the data sets. Random undersampling is a technique that balances uneven data sets by randomly decreasing the size of the majority class with no changes to the minority class. It is one


Fig. 5. Data sets using sampling methods: undersampling, oversampling and SMOTE

of several means we can apply to dig out more useful information when dealing with imbalanced data cases [2]. SMOTE is a synthetic technique widely used to oversample the minority class to handle the tricky imbalanced-data problem. This method synthetically generates data from the original minority class by applying the k-nearest neighbor algorithm, achieving better results than random oversampling. The data-generating process of SMOTE works as follows [1]; a visualization is shown in Fig. 6.

SMOTE Algorithm
Setting: SMOTE rate N%; method for calculating distance D; number of selected nearest neighbors k (default 5)

(1) Define the distance D, such as the Euclidean distance or Hamming distance.
(2) Select the k nearest neighbors of the minority sample under consideration.
(3) According to the rate N%, randomly choose a neighbor and compute the difference (distance) between it and the sample.
(4) Choose a random number between 0 and 1 and multiply it by the difference above.
(5) Add the scaled difference to the original sample.

Result: Synthetic samples are generated in the minority class, which alleviates the imbalanced data case.

Our method, called USMOTE, is a combination of random undersampling and SMOTE. This method first SMOTEs the minority class at a specific rate,


Fig. 6. The process of SMOTE

and then undersamples the majority class. The two modified classes together make up the final training data set, with smaller changes to the distribution of each class. In the experiments below, we show how the USMOTE method improves model performance in imbalanced learning, and how it can be used to improve the Logistic Regression algorithm (a minimal code sketch of the USMOTE resampling step is given after Table 1).

4.3 Imbalanced Learning Using USMOTE

When a classifier deals with an imbalanced data set, this is called imbalanced learning. The imbalanced classes always negatively affect model performance, adding difficulty to this kind of machine learning. In this situation, we use the USMOTE method to improve the results. About the data set: in order to simulate an imbalanced learning setting, we use the Primary PCA encoded (Pozzolo) data set, which contains two days of credit card transactions made in September 2013 in Europe. As listed in Table 1, the fraud (minority class) percentage is 0.172%, which shows the massive imbalance [2].

Table 1. Primary PCA encoded (Pozzolo) Data Set

Instances    Normal Samples    Abnormal Samples    Rate
284,807      284,315           492                 0.172%
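The following minimal sketch (an illustration added here, not the paper's released code) shows one way to realize the USMOTE resampling step with the imbalanced-learn library, assuming it is installed; the SMOTE rate s and undersampling rate u are hypothetical example values expressed through the sampling_strategy parameters. Applying SMOTE before undersampling matches the order described in Sect. 4.2.

from collections import Counter
from imblearn.over_sampling import SMOTE
from imblearn.under_sampling import RandomUnderSampler

def usmote(X, y, s=0.10, u=0.50, k=5, seed=42):
    """USMOTE: first SMOTE the minority class, then randomly undersample the majority.

    s: desired minority/majority ratio after SMOTE (example value)
    u: desired minority/majority ratio after undersampling (example value)
    """
    X_os, y_os = SMOTE(sampling_strategy=s, k_neighbors=k,
                       random_state=seed).fit_resample(X, y)
    X_us, y_us = RandomUnderSampler(sampling_strategy=u,
                                    random_state=seed).fit_resample(X_os, y_os)
    return X_us, y_us

# Usage on a credit-card-style training split (X_train, y_train assumed to exist):
# X_bal, y_bal = usmote(X_train, y_train)
# print(Counter(y_bal))   # class counts after USMOTE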

After balancing the data using USMOTE, four machine learning models are trained on the artificially modified data set. To show the advantages of USMOTE, we compare, in terms of recall score, the performance of each model trained on the original data set and on the USMOTEed one. The algorithms we use here are Logistic Regression, Decision Tree, Support Vector Machine and Random Forest.

4.4 Improve Logistic Regression by USMOTE

Logistic Regression is a basic machine learning method for binary classification. It features high speed, good mathematical properties and low


memory cost. However, Logistic Regression cannot handle imbalanced data cases very well, so we use the USMOTE method to improve the Logistic Regression algorithm. The steps work as follows.

Improvement of the Logistic Regression Algorithm
Setting: percentage of the minority class R; random undersampling rate u; SMOTE rate s

(1) Select the ranges of u and s according to R.
(2) Calculate the step sizes of u and s according to the number of decimal digits of R.
(3) Train Logistic Regression models for each (u, s) pair and evaluate them at the same time.
(4) Choose the best model using the recall standard.

Result: An improved Logistic Regression model based on the data-set sampling method.
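A minimal sketch of this selection procedure is given below (illustrative code added by the editor, not the USMOTE_LOGISTIC_REGRESSION function from the Appendix). It reuses the usmote helper sketched in Sect. 4.2; the candidate ranges for u and s are hypothetical, and the key point is that every candidate model is evaluated by recall on an untouched, original test split.

from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

def usmote_logistic_regression(X, y, s_grid=(0.05, 0.10, 0.20), u_grid=(0.3, 0.5, 0.7)):
    # Hold out an untouched test split from the ORIGINAL data for evaluation.
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                              stratify=y, random_state=42)
    best = (None, -1.0, None)
    for s in s_grid:
        for u in u_grid:
            X_bal, y_bal = usmote(X_tr, y_tr, s=s, u=u)   # resample the training split only
            clf = LogisticRegression(max_iter=1000).fit(X_bal, y_bal)
            rec = recall_score(y_te, clf.predict(X_te))   # recall on original test data
            if rec > best[1]:
                best = (clf, rec, (s, u))
    return best   # (best model, its recall, chosen (s, u))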

5 Results of the Experiments

5.1 Stop Evaluating Models on Transformed Data Set

Figure 7 shows the comparisons of the fake and real results.

Fig. 7. Comparison of fake and real results in two data sets

We can see that in the credit card data set the score dramatically decreases from 0.95 to 0.142 once the real data set is used for testing, and from 0.76 to 0.083 in the stroke data set. It is obvious that some supposedly great performances disappear when we use the real-world data set to test the models. So it is time to stop evaluating models on transformed (undersampled or oversampled) data sets and go back to the real one.

5.2 Imbalanced Machine Learning Using USMOTE

The comparison between the original performance and that after the USMOTE method is displayed in Table 2.


Table 2. Comparison of recall score performances

Model                     Recall Score (Original)    Recall Score (After USMOTE)
Logistic Regression       0.63                       0.83
Support Vector Machine    0.68                       0.82
Decision Tree             0.79                       0.81
Random Forest             0.81                       0.85

In the table above, we can see that the Logistic Regression and SVM models both improve dramatically after USMOTE, with increases of 0.20 and 0.14 in recall score, respectively. Although the method does not raise the scores of the Decision Tree and Random Forest very much, those models still perform better. As a result, we can conclude that when dealing with imbalanced data cases, models trained on the USMOTEed data set perform much better than those trained on the original one, especially for the Logistic Regression algorithm.

5.3 Logistic Regression with USMOTE

Based on the conclusion above and the USMOTE method, we design an algorithm that selects the optimal USMOTE parameters to improve the Logistic Regression model, following the method in Sect. 4.4. Figure 8 visualizes the comparison before and after optimization.

Fig. 8. Improvement in Logistic Regression

Although the number of false positive predictions increases, the number of true positive predictions improves dramatically. This trade-off is acceptable and satisfactory in imbalanced data cases. With a small cost in time complexity, this algorithm greatly improves the performance of Logistic Regression when dealing with massively imbalanced data problems, realizing model improvement based on a data preprocessing method. The novel function, called USMOTE_LOGISTIC_REGRESSION, is shown in the Appendix.


6 Future Work

In our work, although models trained on the USMOTEed data set perform better on the original samples, which addresses the problem raised by Ahmad B. Hassanat [13], this does not necessarily guarantee the robustness of the model. The reason is that this ideal result may come from the two factors below. (1) The testing set is randomly chosen, so the good result may just be a random event. (2) Although the testing set does not overlap with the training one, they still have similar distributions because they were collected by similar means, so the model may perform poorly on other real-world data sets. So in our future work, we will evaluate machine learning models across other real-world data sets when handling imbalanced cases, to verify their robustness.

References

1. Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P.: SMOTE: synthetic minority over-sampling technique. J. Artif. Intell. Res. 16, 321–357 (2002). https://arxiv.org/pdf/1106.1813.pdf
2. Dal Pozzolo, A., Caelen, O., Bontempi, G., Johnson, R.A.: Calibrating probability with undersampling for unbalanced classification (2015). https://www.researchgate.net/publication/283349138
3. DeRouin, E., Brown, J., Fausett, L., Schneider, M.: Neural network training on unequally represented classes. In: Intelligent Engineering Systems Through Artificial Neural Networks, pp. 135–141. ASME Press, New York (1991). https://dl.acm.org/doi/book/10.5555/1557404
4. Dev, S., Wang, H., Nwosu, C.S., Jain, N., Veeravalli, B., John, D.: A predictive analytics approach for stroke prediction using machine learning and neural networks. Healthc. Anal. 2, 100032 (2022). https://doi.org/10.1016/j.health.2022.100032
5. Domingos, P.: MetaCost: a general method for making classifiers cost-sensitive. In: Proceedings of the Fifth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 155–164. ACM Press, San Diego, CA (1999). https://dl.acm.org/doi/pdf/10.1145/312129.312220
6. Kubat, M., Matwin, S.: Addressing the curse of imbalanced training sets: one-sided selection. In: Proceedings of the Fourteenth International Conference on Machine Learning, pp. 179–186. Morgan Kaufmann, Nashville, Tennessee (1997). https://dblp.org/rec/conf/icml/KubatM97.html
7. Kulatilleke, G.K.: Challenges and complexities in machine learning based credit card fraud detection, pp. 1–17 (2022a). http://arxiv.org/abs/2208.10943
8. Kulatilleke, G.K.: Credit card fraud detection - classifier selection strategy, pp. 1–17 (2022b). http://arxiv.org/abs/2208.11900
9. Kulatilleke, G.K., Samarakoon, S.: Empirical study of machine learning classifier evaluation metrics behavior in massively imbalanced and noisy data (2022). http://arxiv.org/abs/2208.11904
10. Ling, C., Li, C.: Data mining for direct marketing: problems and solutions. In: Proceedings of the Fourth International Conference on Knowledge Discovery and Data Mining (KDD-1998). AAAI Press, New York, NY (1998). https://www.csd.uwo.ca/~xling/papers/kdd98


11. Rosadi, D., et al.: Improving machine learning prediction of peatlands fire occurrence for unbalanced data using SMOTE approach. In: 2021 International Conference on Data Science, Artificial Intelligence, and Business Analytics (DATABIA 2021) - Proceedings, pp. 160–163 (2021). https://doi.org/10.1109/DATABIA53375.2021.9650084
12. Sohony, I., Pratap, R., Nambiar, U.: Ensemble learning for credit card fraud detection. In: Proceedings of the ACM India Joint International Conference on Data Science and Management of Data, pp. 289–294 (2018). https://dl.acm.org/doi/abs/10.1145/3152494.3156815
13. Tarawneh, A.S., Hassanat, A.B., Altarawneh, G.A., Almuhaimeed, A.: Stop oversampling for class imbalance learning: a review. IEEE Access 10, 47643–47660 (2022). https://doi.org/10.1109/ACCESS.2022.3169512
14. Yousuf, B.B., Sulaiman, R.B., Nipun, M.S.: A novel approach to increase scalability while training machine learning algorithms using Bfloat-16 in credit card fraud detection (n.d.)

Design and Implementation of Mobile Terminal Network Broadcast Platform

Bing Wang(B)

Guangdong University of Science and Technology, Dongguan 523083, China
[email protected]

Abstract. With the popularization of mobile terminals, the network broadcast industry can no longer be confined to the PC, and corresponding apps need to be developed for people's mobile terminal devices. For this purpose, this paper carries out relevant research: it introduces the basic concepts and advantages of the mobile terminal network broadcast platform, designs the basic framework of the platform, and then discusses the implementation methods of the platform according to that framework. The research shows that the mobile terminal network broadcast platform has unique advantages; following the basic framework and methods, the platform can be realized, its advantages can be brought into play, and good economic benefits can be obtained. Keywords: Mobile end · Network broadcast platform · Platform design and implementation

1 Introduction

China's network broadcast industry started relatively early, but in the early days it was a niche industry, consisting mostly of audio transmission. Later, with the development of the network and the maturation of multimedia technology, the network broadcast industry realized video broadcasting, which greatly increased the richness of live content. It therefore attracted a large number of users and made the industry develop very rapidly in recent years. However, in the process of continuous development, many webcast enterprises have found from user behavior feedback data that the frequency of use of the PC terminal is decreasing while that of the mobile terminal is increasing, indicating that users increasingly favor the mobile terminal. Therefore, in order to keep pace with users, a large number of enterprises hope to develop mobile webcast platforms. The design and implementation of the mobile terminal platform has thus become a concern, and it is necessary to carry out relevant research on this problem.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 Z. Xu et al. (Eds.): CSIA 2023, LNDECT 173, pp. 565–573, 2023. https://doi.org/10.1007/978-3-031-31775-0_58


2 Basic Concepts and Advantages of Mobile Live Streaming Platforms

2.1 Basic Concepts

In essence, the mobile webcast platform is an app that provides live streaming services mainly through data stream transmission technology [1, 2]. After the broadcaster starts the live broadcast, the camera and recording equipment on the mobile terminal collect image and audio data. Once generated, the data is collected and sorted by the platform, forming a data stream. The data stream is sent to the network through the broadcaster's mobile terminal and then divided into several data streams by the splitter. Each data stream carries the image and audio data and is sent to the mobile terminal of the specified user, so that users can see the rendered image and audio on their own mobile terminals. In addition to data stream transmission technology, communication technology is also one of the core technologies of the mobile live streaming platform. Live streaming is not a service that uploads pre-recorded videos for people to watch; it emphasizes real-time transmission of the data stream, and a lack of real-time transmission easily causes a bad experience for users. In addition, during data stream transmission there is interactive communication between the splitter and the mobile terminal: the splitter distributes the data to the user, the user can feed information back to the splitter, and the splitter feeds it back to the platform [3–6]. Figure 1 shows the basic operation flow of the mobile webcast platform.

Fig. 1. Basic operation flow of mobile webcast platform


2.2 Advantages

Compared with similar network media services, mobile live streaming platforms have some unique advantages, as follows.

Flexible Usage. Flexible use is the most prominent advantage of the mobile live streaming platform; even the PC live streaming platform cannot match it. The mobile live streaming platform is flexible because of mobile devices. At present, smartphones are the dominant mobile terminal devices. After years of development, such devices cover an extremely wide range of people in China, from the elderly to children. Mobile webcast platforms run on such devices, so carrying one's device is equivalent to carrying a mobile webcast platform: people can watch live streams anytime and anywhere and use the various functions of the live streaming platform.

Rich Functions. Although the main service of the mobile terminal network broadcast platform is live video, it still has the characteristics of a network platform in the broad sense and can carry a variety of functions. At present, the main functions of mobile webcast platforms can be divided into four categories. The first is user services, aimed at the majority of users. Function keys are developed from the perspective of user entertainment, personalized settings and other needs; combined with the operation modes of mobile terminal devices, users can make a choice on the operation interface anytime and anywhere and then continue operating to enjoy the service. For example, if a user wants to play a mini-game while watching a live broadcast, he or she can generally press the screen to call up the mini-game interface and, after selecting a game, enter the in-game interface. This operation can be carried out regardless of whether the livestreamer has started the live broadcast [7]. The second is livestreamer services, designed for the various needs of livestreamers during livestreaming. There are a variety of such functions, most of which are livestream setting functions. For example, when broadcasting a multiplayer online game, broadcasting in real time may destroy the fairness of the game and lower the quality of the livestreaming content from the livestreamer's own perspective; the livestreamer can therefore set a delay time for the livestreaming data stream, so that the content seen by the user is offset from the time it was produced, thereby safeguarding the livestreaming content. The third is internal management, oriented to the platform's own administration. Common functions include banning live broadcasting permissions and back-end management of live broadcasts, which help the platform manage the live content on the platform, maintain healthy content, and adjust particular live content or the overall platform according to its own needs. The fourth is business functions, which generally refer to online payment, product design, pricing and similar functions; these support users' online tipping and payment and are the main source of revenue for the livestreaming platform. In addition, there are other types of functions, which


depend on the independent choice of the live broadcast platform. However, the above four categories alone are enough to show that the mobile terminal network live broadcast platform has very rich functions.

High Real-Time Performance. The real-time performance of the live broadcast service on the mobile terminal network broadcast platform is very high, which is the main reason it is called "live broadcast" and is a major feature and advantage of the platform. The real-time advantage of live broadcasting platforms has attracted attention for a long time: although early live broadcasting platforms mainly focused on audio transmission, they were already able to achieve synchronous audio transmission. Later, after the emergence of 4G communication technology, live broadcasting content was no longer limited to audio and video image transmission was basically realized. In the 5G era, video image transmission has become mature, and the time difference between the generation of live video content and its display on the client side has been reduced to the level of milliseconds. It is worth mentioning that millisecond-level video transmission can already meet the highest requirements of users [8–10]. Therefore, the development of real-time transmission on mobile webcast platforms is no longer limited to real-time performance itself; high-definition, ultra-high-definition and original picture quality real-time transmission have been further proposed and realized in recent years. On the premise of ensuring real-time transmission, the mobile network broadcast platform can therefore effectively guarantee picture clarity and bring a good experience to users.

Wide Coverage of Population. The mobile network broadcast platform covers a wide range of groups, for the following reasons. First of all, the mobile live streaming platform itself is only a data transmission platform, so it does not exclude any live streaming content. Therefore, there is a large amount of live streaming content on the platform, including various live shows, game broadcasting, education broadcasting, film and television broadcasting, etc., and each section attracts people from different fields. For example, there are thousands of games in game live streaming; each game has its own audience, and multiple broadcasters can broadcast the same game. Under this condition, the mobile terminal network broadcast platform can attract multiple user groups, and different user groups can choose their favorite and needed live broadcasts from the platform. Figure 2 shows the mechanism by which mobile webcast platforms cover different user groups. As shown in Fig. 2, the livestream platform concentrates the livestream content in a certain section and then pushes the content to the user group. If users are attracted, they continue to select the section and content, forming a cycle; otherwise, there is no action. At least for some time to come, the platform is designed to reach different people through this circular mechanism. In addition, in order for the platform to accurately disseminate the content to users in this circular mechanism, it needs to analyze users' needs. Here, most platforms choose a personalized recommendation algorithm, which calculates the matching degree between content labels and the feature data of users' needs. For details, see


Fig. 2. Mechanism of mobile webcast platforms covering different user groups

Formula (1).

dis(Xi, Cj) = √( Σ(t=1..m) (Xit − Cjt)² )   (1)

The above formula calculates the matching degree according to Euclidean distance, where X and C are the content label set and the user demand feature data set respectively, i and j index a particular content label and a particular user demand feature vector, m is the number of feature dimensions, and t indexes the dimensions.
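As an illustration (code added by the editor, not part of the original paper), the sketch below computes this Euclidean matching degree between a hypothetical content label vector and a hypothetical user demand feature vector; smaller distances would be read as better matches when ranking streams for recommendation.

import numpy as np

def matching_degree(content_label: np.ndarray, user_feature: np.ndarray) -> float:
    """Euclidean distance between a content label vector X_i and a user
    demand feature vector C_j, as in Formula (1); smaller means a closer match."""
    return float(np.sqrt(np.sum((content_label - user_feature) ** 2)))

# Hypothetical 4-dimensional vectors (e.g., weights for game / education / film / show content).
x_i = np.array([0.9, 0.1, 0.3, 0.2])   # labels of one live stream
c_j = np.array([0.8, 0.0, 0.4, 0.1])   # one user's demand features

print(matching_degree(x_i, c_j))       # used to rank candidate streams for this user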

3 Basic Framework Design and Implementation Methods

3.1 Frame Design

Figure 3 shows the basic framework of the mobile webcast platform. As can be seen from Fig. 3, the overall platform is designed in a hierarchical way, in which the platform can manage the livestreaming source, for example by authorizing livestreaming or proposing rectification requirements for a livestream. The live stream source is connected to the livestreamer's peripheral terminal, so that the livestreamer can use the live stream authority to start the live stream. During the process, livestreamers can adjust their own live content through the livestream setting functions, so as to establish a comfortable network livestreaming environment. After the live broadcast source outputs the digitized signal, the content carried by the signal enters the communication layer, is received by the communication network, and is then imported into the splitter, which decodes the signal and obtains the data stream. Splitting and transmission are then carried out. After splitting, the data stream is transmitted to the user layer and received by the user side, which renders the live broadcast picture for the user. At the same time, the user side offers various service functions for the user to choose from. In addition, if users want to express their wishes, they can communicate with the livestreaming source in reverse through the communication layer, and the livestreaming source then transmits the information to the livestreamer, who reacts accordingly.


Fig. 3. Basic framework of mobile webcast platform

3.2 Implementation Method

According to the framework in Fig. 3, the implementation method of the mobile webcast platform includes four parts: hierarchical development, functional design, page design and security guarantee mechanism design. Details are as follows.

Hierarchical Development. Although the mobile webcast platform is divided into four levels, the development methods of the three levels other than the communication layer are basically the same, so the implementation of the level development is discussed in two parts. First, the development of the non-communication layers is mainly implemented through programming. It is usually suggested to choose the Java language for programming, build the framework of each layer, arrange the layers according to the logic given in Fig. 3, and finally connect them using communication technology. Although the development methods of the non-communication layers are the same, their contents differ to


some extent. The platform-oriented management layer has the highest level of authority and needs to carry a number of management functions and data, so it needs corresponding internal functional modules and databases. The functional modules themselves can also be implemented through Java programming, but because of the large number of functional modules, they cannot all be hosted directly. It is recommended to choose a virtual server, whose capacity is effectively unlimited and which can carry a large number of functional modules; at the same time, for functional modules with relatively high security requirements, it is recommended that the platform use a traditional physical server to host them, otherwise intrusion by outsiders is likely to cause a devastating blow to the platform. In terms of the database, a cloud database is recommended for the same reason of unlimited capacity: the mobile webcast platform has wide user coverage, so massive amounts of data are generated every moment, an ordinary database's capacity is always limited and cannot store it all, and only a cloud database with unlimited capacity can solve the data storage problem. Second, the development of the communication layer is special: although the communication layer is a level in the system, it has a certain independence and belongs to the communication service provider. Therefore, the development of the communication layer requires cooperation between the platform and the communication service provider, so that the provider can help develop the communication layer on the basis of that cooperation. In terms of specific content, it is suggested to take mainstream 5G communication technology as the basis and, with the support of the communication service providers' base station facilities, let the mobile terminal network broadcast platform enter the network and then choose a very-large-traffic data package service to support the platform's live data stream. The construction of the 5G communication network at the communication layer does not belong to the scope of platform development, so it is not repeated here.

Function Design. Focusing on the four main functions of the mobile terminal webcast platform, the communication function of the communication layer does not need additional development as long as the platform is connected to the network, while the other three types of functions rely on the same development technology and can be designed through programming. However, in developing the other three types of functions, the platform must ensure that the final results correspond to user needs, otherwise the functions will have no value and will sit idle. Therefore, the platform should design the functions well in advance and then start development. The basic idea of functional design is as follows: the platform can treat itself as a data source, adopt data acquisition tools to obtain behavioral data from different users, and then analyze the data with algorithms to determine the individual needs of each user. According to these individual needs, users with the same or similar needs are divided into a group, and corresponding functions are developed according to the needs of each group. For example, suppose data analysis shows that user A often plays mini-game a while watching a live broadcast, while user B often plays mini-game b while watching. The essence of their needs is the same, and the difference is only in the specific mini-game. Therefore, the platform can treat users A and B as one group and develop a mini-game


section that brings together mini-games a and b, and provide the corresponding services to this user group. Following this idea, the functional design can be completed. It is worth mentioning that although the functional design requirements of different mobile webcast platforms differ, some basic functions are always necessary. See Table 1 for details.

Table 1. Basic functions required for mobile webcast platforms

Functional classification        Functional content
User settings                    Account information change; personalized setting; account status setting
Application service functions    Account login; problem consultation; service guide

Design of Security Guarantee Mechanism. After the above two design steps, the mobile network broadcast platform can basically be put into practical application. However, as a network platform it is bound to face network security risks, and because the platform involves a large number of users and livestreamers as well as the platform itself, a security incident can cause very serious consequences. For this reason, the security mechanism of the mobile network broadcast platform must be designed carefully. In terms of the sources of network risk, the main security risks faced by mobile webcast platforms are data stream transmission risk and user account theft risk. The former leads to the platform's data stream being diverted and copied to other platforms, which greatly affects both the platform and the livestreamers, while the latter causes the loss of user accounts and easily leads to property risks. In view of this, it is suggested that the platform adopt double encryption technology to protect the data transmission process; this technology has a high security level and is almost impossible to breach as long as the private key is not leaked. A firewall is then adopted to protect the user account database, so that at least on the storage side the user account information is protected from disclosure.
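The paper does not specify the exact scheme behind "double encryption"; one plausible reading is hybrid encryption, where a symmetric key encrypts the live data and an asymmetric key protects that symmetric key. The sketch below (illustrative only, assuming Python's cryptography package is available) shows this idea for a single, hypothetical data chunk.

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Layer 1: symmetric key that encrypts the (hypothetical) live data chunk.
data_key = Fernet.generate_key()
ciphertext = Fernet(data_key).encrypt(b"one chunk of live stream data")

# Layer 2: asymmetric key pair; the public key wraps the symmetric key,
# so only the holder of the private key can recover it.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = private_key.public_key().encrypt(data_key, oaep)

# Receiver side: unwrap the key with the private key, then decrypt the data.
recovered_key = private_key.decrypt(wrapped_key, oaep)
plaintext = Fernet(recovered_key).decrypt(ciphertext)
assert plaintext == b"one chunk of live stream data"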

4 Conclusion

To sum up, the design of a mobile webcast platform is difficult and its content is relatively complex. However, as long as the basic design ideas and implementation methods are mastered, the platform design can be completed step by step. The design and implementation of a mobile webcast platform should take user needs fully into consideration and develop the corresponding framework and functions according to internal business needs. At the same time, network security risks should not be ignored: risk prevention is important work for the platform to ensure that it can safely bring users the live broadcast experience.


References

1. Hou, T., Zhang, T., Huang, H.: Design of wine grape mobile terminal scheduling platform architecture based on IOT. J. Phys.: Conf. Ser. 1927(1), 012009 (2021)
2. Bu, L., Zhang, Y., Liu, H., et al.: An IIoT-driven and AI-enabled framework for smart manufacturing system based on three-terminal collaborative platform. Adv. Eng. Inform. 50, 101370 (2021)
3. Wang, J., Li, J., Wang, C.: Research of automobile fault lamp recognition algorithm based on mobile platform. In: ICMLC 2020: 2020 12th International Conference on Machine Learning and Computing (2020)
4. Yousuf, B.M., Khan, A.S., Noor, A.: Multi-agent tracking of non-holonomic mobile robots via non-singular terminal sliding mode control. Robotica 38(11), 1–17 (2019)
5. Chi, C., Liu, G.P., Hu, W.: Design and implementation of a mobile terminal cloud supervisory control platform for networked control systems. Trans. Inst. Meas. Control 44(5), 1070–1080 (2022)
6. Zhai, D., Li, H.W.: Design and analysis of intelligent home system for remote control of luminance by mobile terminal. J. Phys. Conf. Ser. 1574, 012138 (2020)
7. Xie, X., Ouyang, W., Xia, Y., et al.: Identity authentication system for mobile terminal equipment based on SDN network. Int. J. Inf. Commun. Technol. 17(3), 257 (2020)
8. Soh, P.J., et al.: Recent advancements in user effect mitigation for mobile terminal antennas: a review. IEEE Trans. Electromagn. Compat. 61(1), 279–287 (2019)
9. Zhang, J.: Research on classroom teaching evaluation and instruction system based on GIS mobile terminal. Mob. Inf. Syst. 2021(11), 1–11 (2021)
10. Liu, G., Li, N., Deng, J., et al.: The SOLIDS 6G mobile network architecture: driving forces, features, and functional topology. Engineering 8, 42–59 (2022)

Monitoring of Tourist Attractions Based on Data Collection of Internet of Things

Yu Peng and Qingqing Geng(B)

Chongqing College of Architecture and Technology, Chongqing 401331, China
[email protected]

Abstract. Due to the wide area of tourist attractions and their complex regional division, traditional monitoring is difficult, and many problems are hard to detect and deal with promptly. In order to solve this problem, this paper carries out relevant research based on Internet of Things data collection: it introduces the basic concept of Internet of Things data collection and the advantages of applying it to monitoring in tourist attractions, then proposes a design scheme for an Internet of Things monitoring system for tourist attractions and discusses the implementation method of the system. Through this research, we have developed an Internet of Things system that fully meets the monitoring needs of tourist attractions. Using this system can improve the monitoring quality of tourist attractions and indirectly promote their management quality. Keywords: Internet of Things data collection · Monitoring of tourist attractions · Full range monitoring

1 Introduction

The monitoring of tourist attractions is an important part of their management. Management personnel need to understand the situation in each area of the scenic spot from the monitoring picture and then make management decisions. This provides security for tourists and also avoids the adverse impact of tourists' irregular behavior and other factors on the scenic spot. However, the current monitoring systems of many tourist attractions are rather traditional, and the actual effect is not as good as expected, so some problems in tourist attractions still cannot be completely controlled. In the face of this situation, the maturity and popularization of Internet of Things technology has brought development opportunities, and many scenic spots hope to reconstruct their monitoring systems based on the data collection function of the Internet of Things. In order to achieve this goal, it is necessary to carry out relevant research.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 Z. Xu et al. (Eds.): CSIA 2023, LNDECT 173, pp. 574–582, 2023. https://doi.org/10.1007/978-3-031-31775-0_59


2 Basic Concept of Internet of Things Data Collection and Application Advantages of Scenic Spot Monitoring

2.1 Basic Concepts

Data acquisition is a basic function of the Internet of Things and is mainly composed of three parts: on-site data acquisition equipment, the communication link, and the terminal. The function of the data acquisition equipment is to collect on-site data. There are many types of data, including images, temperature, sound, etc. [1–3]. From these data, accurate judgments can be made about the on-site situation, so the acquisition equipment can serve as monitoring equipment. When the data acquisition equipment obtains the relevant data, it sends the data over the communication link to the terminal. People can analyze the data at the terminal, understand the site situation, and then make management judgments and decisions based on the work needs [4, 5]. Figure 1 shows the basic framework of the data collection function of the Internet of Things.

Fig. 1. Basic framework of data collection function of the Internet of Things

2.2 Advantages of Monitoring Application in Tourist Attractions

Because the on-site data collection equipment used in Internet of Things data collection can serve as monitoring equipment, this function is gradually being integrated into the monitoring work of tourist attractions. Compared with the traditional monitoring system, the Internet of Things data collection and monitoring system has many application advantages, as follows.

Economic Advantages. Ideally, the monitoring of tourist attractions needs to be comprehensive and not miss any corner, so that problems can be found immediately and dealt with quickly. To reach this goal, traditional scenic spot monitoring must


install a large number of monitoring devices inside the scenic spot, such as cameras and temperature and humidity sensors, equip each device with a corresponding power supply, and also complete the construction of data transmission and communication. The huge number of devices and the supporting requirements greatly increase the construction cost of the scenic spot monitoring system and naturally weaken its economic benefits [6–8]. The data acquisition function of the Internet of Things is different: although it also requires installing various data acquisition devices on site and meeting power and communication needs, each of these devices can be controlled remotely and independently to achieve 360° monitoring without dead angles, so devices only need to be installed according to their monitoring range. This greatly reduces their number, so the construction cost is lower and the economic benefit is higher. It can be seen that the data collection function of the Internet of Things has economic advantages in the monitoring of tourist attractions.

Advantages of Human-Computer Interaction. In fact, the equipment in a traditional tourist attraction monitoring system can also cover 360° without blind spots, but it cannot be controlled remotely and independently. If the angle is to be adjusted, this usually has to be done manually in person, or through automatic control logic that moves the equipment to a specific angle at a specific time; this is obviously regular and easy to exploit. This shows that the traditional system lacks human-computer interaction. In contrast, with the data acquisition function of the Internet of Things, each data acquisition device is equipped with a control unit and a data receiving port. Therefore, when the administrator finds an anomaly at the terminal or has other requirements, he can send instructions directly and remotely to the corresponding device [9, 10]. The instructions travel along the communication link to the data receiving port of the device, the port passes the instructions to the control unit, and finally, under the control unit's coordination, the equipment performs the corresponding action; the whole process can be completed in a few seconds, indicating that the data acquisition function of the Internet of Things supports good human-computer interaction. In addition, the equipment of the Internet of Things data collection function is also compatible with automatic control logic, and the operator can intervene in the automatic control process at any time, which is another major advantage of this function in the key control of tourist attractions.

Response Speed Advantage. Many transient problems are encountered in the monitoring of tourist attractions. For example, some tourists rush into forbidden areas within a few seconds or even less, or unknowingly make dangerous movements. These problems occur very quickly; it is difficult for a traditional monitoring system to detect them immediately, and it is also difficult to deal with them in time. This is one reason why many safety accidents have occurred in tourist attractions in recent years. However, on the basis of the data collection function of the Internet of Things, the administrator can configure the camera view and set red zones according to the actual situation. Once another object touches a red zone, it means danger is imminent, so the problem can be found immediately and the administrator can also respond immediately, for example by issuing a broadcast warning telling tourists to leave the restricted zone,


or to stop the dangerous action. It can be seen that the data collection function of the Internet of Things can significantly improve the response speed of tourist attraction monitoring and management. Although it cannot completely eliminate problems, it can at least reduce their occurrence rate and also help to weaken their impact [1, 11, 12].

Technological Development Advantages. The basic framework of the traditional monitoring system in tourist attractions is "equipment + labor", and there is not much room for technological expansion, which limits the development of the monitoring system and the management of tourist attractions. In contrast, the tourist attraction monitoring system based on Internet of Things data collection has large room for technological expansion and can be connected with a variety of advanced technologies. The more representative ideas are the following. First, connect it with intelligent technology and develop intelligent terminals. An intelligent terminal has a human-like ability to identify data information, so it can replace the manual inspection of the various data obtained by the monitoring equipment and then make judgments. Without human intervention, it can also make management decisions based on the judgment results and control the field equipment after generating the corresponding instructions. For example, if the intelligent terminal finds a "black shadow" in the image data, it will directly control the data acquisition equipment to turn, continue to track the "black shadow" until it is recognized, and then produce the next instruction, and so on in a cycle. The intervention of an intelligent terminal can significantly reduce the manual workload and effectively improve monitoring quality, because the terminal can attend to all data information at the same time, which cannot be done manually in the face of huge amounts of data [13]. Second, connect it with anti-interference technology to ensure the normal operation of all monitoring equipment. The environment of tourist attractions is complex and contains many sources of signal interference. Affected by such factors, the communication signals between some monitoring devices and the terminal are intermittently interrupted; when this happens, the monitoring devices enter an offline operation state. Although this state does not cause the loss of monitoring data, it delays the data, so some problems may not be detected. Integration with anti-interference technology can shield most of the signal interference, avoid the equipment entering the offline state as much as possible, and help ensure the real-time nature of monitoring data [14]. Third, connect it with dynamic image capture technology, which enables the equipment to clearly photograph objects moving at high speed for easy identification and judgment. In the past, it was difficult for tourist attraction monitoring to capture fast-moving objects, which caused great trouble for follow-up management. Dynamic image capture technology can solve this problem: its frame rate is very high, so it can quickly freeze a dynamic picture and then complete the shot. The practical effect of dynamic image capture technology is very prominent and can be confirmed in other fields. For example, highway traffic cameras use this technology and can clearly capture the interior of a car at

578

Y. Peng and Q. Geng

high speed. Therefore, applying this technology to the monitoring of tourist attractions can have the same effect.
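To make the intelligent-terminal idea above concrete, the following minimal Python sketch shows the kind of decision loop described for tracking an unidentified object. It is an illustrative assumption rather than the system's actual implementation; the function names (`detect_unidentified_object`, `rotate_camera`, `terminal_cycle`) and the simple byte-string frames are hypothetical stand-ins.

```python
from typing import Optional

def detect_unidentified_object(frame: bytes) -> Optional[str]:
    """Hypothetical recognizer: returns a label, or None while the object is still an unidentified 'shadow'."""
    # A real terminal would run an image-recognition model here.
    return None if frame == b"shadow" else "tourist"

def rotate_camera(camera_id: str, degrees: int) -> None:
    # Stand-in for the instruction sent back to the acquisition device.
    print(f"{camera_id}: rotate {degrees} degrees and keep tracking")

def terminal_cycle(camera_id: str, frames: list) -> str:
    """One monitoring cycle: keep tracking until the object is recognized, then decide the next step."""
    for frame in frames:
        label = detect_unidentified_object(frame)
        if label is None:
            rotate_camera(camera_id, degrees=15)   # follow the unidentified object
            continue
        return f"{camera_id}: object recognized as '{label}', generate the next instruction"
    return f"{camera_id}: object left the field of view, resume default monitoring"

print(terminal_cycle("cam-03", [b"shadow", b"shadow", b"frame-with-person"]))
```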

3 The Design Scheme and Implementation Method of the Monitoring System for Internet of Things Tourist Attractions

3.1 Design Scheme
Combining the basic framework of Internet of Things data collection (Fig. 1) with the application advantages of this function discussed above, this paper designs a monitoring system for Internet of Things tourist attractions. The design scheme is shown in Fig. 2.

Fig. 2. Design method of tourism monitoring system for Internet of Things scenic spots

According to Fig. 2, the basic operation process of the monitoring system for Internet of Things tourist attractions is as follows. First, all data acquisition devices in the data acquisition layer, such as cameras and sensors, are switched on and collect data in their default state: the cameras collect image data at the default angle, and the sensors collect temperature, humidity, light intensity, smoke and other data with the default parameters. Second, the communication network of the communication layer is opened, so that all data collected by the acquisition equipment are summarized by the data acquisition layer program and sent onto the communication network over the bus [15]; the network then finds the communication link connected with the terminal layer and transmits the collected data to it. Third, the terminal layer equipment is switched on; it reads the summarized data from the communication link, first classifies the data and then identifies each item within every category. It then judges the actual situation from the correlation between the data and generates instructions according to the judgment results. The communication link sends these instructions back to the data acquisition layer, where they are received by the device control unit and drive the relevant acquisition equipment to act. The main instruction types in this process are shown in Table 1, and the main functions of the terminal server in the terminal layer are shown in Table 2; a minimal sketch of the resulting control loop follows the tables.

Table 1. Terminal layer control instruction types

Instruction type – Effect
Rotation angle – Control the angle of the camera equipment so that it monitors a designated area
Parameter adjustment – Adjust the parameters of the sensor equipment to ensure the accuracy of the collected data
Continuous tracking – Control the camera equipment to continuously track and monitor a dynamic target

Table 2. Main functions of the terminal server in the terminal layer

Function name – Effect
Database – Divided into a log base, which stores the monitoring logs, and a knowledge base, which stores the intelligent terminal's knowledge information; the administrator can query both at will from the terminal layer devices
Algorithm module – Carries the data feature extraction algorithm so that the terminal layer equipment can identify the actual situation from the data information; this system mainly uses the k-means algorithm, see formula (1) and the related discussion
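To connect the operation process in Fig. 2 with the instruction types of Table 1, the following minimal Python sketch shows how a terminal layer might turn summarized acquisition data into control instructions. It is an illustrative assumption only; the class `Reading`, the helper names and the thresholds are hypothetical and not taken from the paper.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Reading:
    device_id: str       # which acquisition device produced the value
    kind: str            # "image", "temperature", "humidity", "smoke", ...
    value: object        # raw payload

# Hypothetical instruction labels, mirroring Table 1.
ROTATE = "rotation angle"
ADJUST = "parameter adjustment"
TRACK = "continuous tracking"

def classify(readings: List[Reading]) -> dict:
    """Group the summarized data by category, as the terminal layer does first."""
    groups: dict = {}
    for r in readings:
        groups.setdefault(r.kind, []).append(r)
    return groups

def judge_and_instruct(groups: dict) -> List[tuple]:
    """Judge the situation from the grouped data and emit (device_id, instruction) pairs."""
    instructions = []
    for r in groups.get("image", []):
        if r.value == "unidentified object":      # e.g. a "black shadow" in the frame
            instructions.append((r.device_id, TRACK))
    for r in groups.get("smoke", []):
        if isinstance(r.value, (int, float)) and r.value > 0.8:   # threshold is assumed
            instructions.append((r.device_id, ADJUST))
    return instructions

# One cycle of the control loop: data in, instructions back to the acquisition layer.
readings = [Reading("cam-03", "image", "unidentified object"),
            Reading("sen-12", "smoke", 0.9)]
for device, instruction in judge_and_instruct(classify(readings)):
    print(f"send '{instruction}' to {device}")   # stands in for the reverse communication link
```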

The k-means algorithm is a clustering algorithm. Its basic principle is to choose k center points within the data range, assign every sample to its nearest center, and then update each center to the mean of the samples assigned to it; this assignment and update cycle repeats until the termination condition is reached, at which point the samples around each center form one cluster. The final mean of each cluster is the feature of the corresponding data, which is why the k-means algorithm can also serve as a feature extraction method. The quantity minimized during clustering is

E = \sum_{i=1}^{k} \sum_{x \in C_i} \| x - \mu_i \|_2^2    (1)

In formula (1), E is the total within-cluster sum of squared errors, k is the number of clusters, C_i is the i-th cluster, \mu_i is the mean (center) of C_i, and x is a data sample belonging to C_i.
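As a concrete illustration of how minimizing formula (1) yields per-cluster features, the following minimal Python/NumPy sketch runs the assignment and update cycle and reports the cluster centers together with E. It is a generic textbook implementation offered for clarity, not the paper's own algorithm module; the sample readings are invented.

```python
import numpy as np

def kmeans(data: np.ndarray, k: int, iters: int = 50, seed: int = 0):
    """Minimal k-means: returns (centers, labels, sse), where sse is E in formula (1)."""
    rng = np.random.default_rng(seed)
    centers = data[rng.choice(len(data), size=k, replace=False)]   # initial centers
    for _ in range(iters):
        # Assignment step: each sample joins the cluster of its nearest center.
        dists = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: each center becomes the mean of the samples assigned to it.
        new_centers = np.array([data[labels == i].mean(axis=0) if np.any(labels == i)
                                else centers[i] for i in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    sse = sum(((data[labels == i] - centers[i]) ** 2).sum() for i in range(k))  # E in (1)
    return centers, labels, sse

# Example: extract two "feature" centers from simulated (temperature, smoke) readings.
readings = np.array([[21.0, 0.40], [21.5, 0.42], [35.0, 0.90], [34.5, 0.88]])
centers, labels, sse = kmeans(readings, k=2)
print(centers, labels, sse)
```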


3.2 Implementation Method

According to the design scheme, the monitoring system for Internet of Things tourist attractions is implemented as follows.

Installation and Layout of Data Acquisition Equipment
The IoT data acquisition equipment works in the real environment, so it needs no special design, but its installation and layout deserve attention. For installation, all equipment must be kept in a good working environment so that it is not affected by environmental factors. Take the camera equipment as an example: during the monitoring of a tourist attraction it is exposed outdoors, where sand, water vapor and debris may blur the lens so that it can no longer shoot clearly, and the intentional or unintentional behavior of wildlife and tourists may damage it. To avoid such situations, it is recommended to install cameras at a high position, erect a protective fence below them, and lay the cables with buried-line techniques, which gives this kind of equipment the greatest possible protection. For layout, again taking the camera equipment as an example, the terminal layer can control each device so that it shoots through 360° without blind angles; therefore it is enough to confirm the maximum sight distance of each device and use that distance as the boundary of the layout. If the maximum sight distance of each camera is 100 m, the spacing between cameras should also be 100 m, so that monitoring ranges do not overlap (except where overlap is specifically required). This maximizes the economic advantage of the IoT data collection function; a minimal spacing calculation is sketched below.
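The spacing rule just described can be expressed as a small placement helper. The sketch below is illustrative only: the walkway length and the 100 m sight distance are assumed values, and the function name is hypothetical.

```python
def camera_positions(path_length_m: float, max_sight_distance_m: float = 100.0):
    """Place cameras along a walkway so that adjacent coverage areas meet but do not overlap.

    The spacing equals the maximum sight distance, mirroring the layout rule described above.
    """
    if max_sight_distance_m <= 0:
        raise ValueError("sight distance must be positive")
    count = int(path_length_m // max_sight_distance_m) + 1   # one camera at the start of every interval
    return [i * max_sight_distance_m for i in range(count)]

# A 520 m scenic walkway with a 100 m sight distance needs six cameras:
print(camera_positions(520.0))   # [0.0, 100.0, 200.0, 300.0, 400.0, 500.0]
```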


Communication Layer Networking and Link Planning
For the networking of the communication layer, the tourism environment contains many signal interference factors and a large number of communication devices, so the communication network itself must offer good anti-interference capability and communication efficiency. For this purpose it is recommended to build the network on Ethernet. Ethernet behaves like a dedicated local area network: it serves only the monitoring system, other devices cannot access its ports, it is not easily disturbed by electromagnetic interference, and its communication capacity is large, so it provides both strong anti-interference ability and smooth, efficient communication. Whichever network is chosen, however, the surrounding communication base stations must be placed at high positions and combined with anti-interference technology so that obstacles and other obstructive interference factors can be overcome. For link planning, all communication links are carried by wireless communication technology, but to make sure that every link connects to the correct port, the links must be planned according to the operating principle of the system. For example, if the intelligent terminal and the data acquisition layer interact with each other during system operation, there should be a two-way communication link or a circular communication link between them; the remaining links are planned in the same way.

Terminal Layer Server Selection and Function Module Design
The terminal layer carries many functions and therefore needs a large server as support. Traditional large servers are physical devices that are costly, occupy considerable physical space and are difficult to maintain, so it is recommended to choose a cloud server instead. Backed by elastic cloud resources, a cloud server can be scaled up into a very large server that fully meets the operating requirements of the terminal layer of the monitoring system; at the same time it is a virtual server, so it occupies no physical space on site and requires far less maintenance. On top of the cloud server, administrators can build the functional modules from cloud resources and cloud services. Taking the database module as an example, the administrator can gather a batch of cloud resources on the cloud server, convert them with a database development tool into two databases that serve as the log base and the knowledge base, and then protect them with a firewall and, if desired, an initial capacity limit. If the capacity later becomes insufficient, the firewall can be removed temporarily, additional cloud resources added to the databases to expand them, and the firewall restored afterwards. A minimal sketch of such a module is given below.
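The following minimal Python sketch illustrates the described log base/knowledge base module and its capacity-expansion step. It is an assumption for illustration; the `CloudDatabase` class, its record-count capacity model, and the firewall step reduced to a comment are hypothetical simplifications of the cloud-server workflow.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class CloudDatabase:
    """Illustrative stand-in for one terminal-layer database (log base or knowledge base)."""
    name: str
    capacity: int                      # maximum number of records before expansion is needed
    records: List[str] = field(default_factory=list)

    def store(self, record: str) -> bool:
        if len(self.records) >= self.capacity:
            return False               # capacity exhausted; expansion required
        self.records.append(record)
        return True

    def expand(self, extra_capacity: int) -> None:
        # In the described scheme this corresponds to temporarily lifting the firewall,
        # adding cloud resources to the database, and restoring the firewall afterwards.
        self.capacity += extra_capacity

log_base = CloudDatabase("log base", capacity=2)
knowledge_base = CloudDatabase("knowledge base", capacity=100)

for entry in ["cam-03 rotated 15 degrees", "sen-12 parameters adjusted", "cam-07 tracking target"]:
    if not log_base.store(entry):
        log_base.expand(extra_capacity=100)   # expand, then retry the write
        log_base.store(entry)
print(len(log_base.records), log_base.capacity)   # 3 102
```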

4 Conclusion

To sum up, a monitoring system for tourist attractions based on the data collection function of the Internet of Things has many advantages, so its application value is outstanding and worth promoting. In practical application, the system can replace manual work in controlling the monitoring equipment and analysing the monitoring data, while still accepting manual instructions at any time. This effectively reduces the manual workload, improves the quality of monitoring and of scenic spot management, and contributes to raising the overall level of maintenance and service in scenic spots.

References

1. Lin, Y.: Automatic recognition of image of abnormal situation in scenic spots based on Internet of Things. Image Vis. Comput. 96, 103908 (2020)
2. Li, G., Lan, J., Li, Q.: Online monitoring of self-elevating leveling ship based on edge computing. In: 2020 IEEE 3rd International Conference on Electronics Technology (ICET). IEEE (2020)
3. Zhou, Y., Lu, Y., Pei, Z.: Intelligent diagnosis of Alzheimer's disease based on Internet of Things monitoring system and deep learning classification method. Microprocess. Microsyst. 83(1), 104007 (2021)
4. Liu, S., Guo, L., Webb, H., et al.: Internet of Things monitoring system of modern eco-agriculture based on cloud computing. IEEE Access 7, 37050–37058 (2019)
5. Großmann, M., Illig, S., Matějka, C.L.: SensIoT: an extensible and general Internet of Things monitoring framework. Wirel. Commun. Mob. Comput. 2019, 1–15 (2019)
6. Niu, L., Wang, F., Li, J., et al.: Development of agricultural Internet of Things monitoring system combining cloud computing and WeChat technology. In: 2019 IEEE 8th Joint International Information Technology and Artificial Intelligence Conference (ITAIC). IEEE (2019)
7. Ye, Z., Wei, Y., Li, J., et al.: A distributed pavement monitoring system based on Internet of Things. J. Traffic Transp. Eng. (Engl. Edn.) 9(2), 305–317 (2022)
8. Li, X., Zhao, N., Jin, R., et al.: Internet of Things to network smart devices for ecosystem monitoring. Sci. Bull. 64(17), 1234–1245 (2019)
9. Maiti, A., Raza, A., Kang, B.H., et al.: Estimating service quality in industrial Internet-of-Things monitoring applications with blockchain. IEEE Access 7, 1 (2019)
10. Solano, F., Krause, S., Woellgens, C.: An Internet-of-Things enabled smart system for wastewater monitoring. IEEE Access 10(10), 4666–4685 (2022)
11. Cepeda-Pacheco, J.C., Domingo, M.C.: Deep learning and Internet of Things for tourist attraction recommendations in smart cities. Neural Comput. Appl. 34(10), 7691–7709 (2022)
12. Gao, H.: Big data development of tourism resources based on 5G network and Internet of Things system. Microprocess. Microsyst. 80, 103567 (2021)
13. Soegoto, E.S., Rafdhi, A.A., Oktafiani, D., Jumansyah, R.: Internet of Things based irrigation monitoring system. J. Eng. Sci. Technol. 17(4), 2720–2732 (2022)
14. Wei, H.: Integrated development of rural eco-tourism under the background of artificial intelligence applications and wireless Internet of Things. J. Ambient Intell. Hum. Comput. 13, 1–13 (2021)
15. Zhang, J., He, W., Li, K., Tian, F.: Research on the dynamic dispatch of guides in smart tourist attractions based on the RFID model under the Internet of Things. In: 2021 2nd Artificial Intelligence and Complex Systems Conference, pp. 207–211 (2021)

Author Index

B
Bhadoria, Ruby 247

C
Cao, Xiaohong 165
Chai, Junhui 195
Chen, Jiangping 165
Chen, Jiaxin 87
Chen, Li 545
Chen, Zhenrui 267
Cheng, Haimin 449
Chu, Yongling 400
Cui, Di 389
Cui, Xiwen 360

D
Dang, Xiaojuan 165
Davoodi, Apeksha 227
Dong, Xiaodong 195
Duan, Yuanyuan 237

F
Feng, Jiaqi 360
Fu, Guiqin 449
Fu, Jiawei 137

G
Gao, Shan 478
Gao, Tianjian 11
Gao, Zhenping 341, 350
Geng, Qingqing 495, 574
Gong, Hailin 370
Guan, Wei 329
Guo, Suqin 319

H
Hu, Yihan 155

J
Ji, Dan 310
Jiang, Xiangdong 370
Jin, Xin 459
Jin, Yuanchang 145
Ju, Yige 216

K
Khan, Muhammad 411
Kong, Siran 523

L
Li, Bentu 258
Li, Jiachun 137
Li, Jiaxun 535
Li, Ping 185
Li, Shaochun 400
Li, Wenqiang 431
Li, Yufeng 145
Liang, Meng 514
Liang, Yiduo 329
Liu, Hengwang 459
Liu, Li 290
Liu, Tengjiao 227
Liu, Yushan 449
Lv, Zhongjie 195

M
Ma, Hongtao 300
Mi, Xin 431
Miao, Qian 59
Mo, Ruifeng 431
Mo, Yi 176
Mostafa, Gagan 545

N
Niu, Dongxiao 360


P
Pan, Hong 165
Peng, Yu 495, 574

Q
Qi, Shiwei 59
Qiu, Guihua 279

R
Rajasekaran, Sundar 495

S
Sai, Yanyan 400
Shen, Jianmin 195
Shi, Jinmao 421
Shi, Shengyao 59
Song, Shaohua 370
Su, Caiyu 176
Su, Ying 370
Sun, Dehui 411
Sun, Xiaoyu 204
Sun, Xin 459
Sun, Yaping 247

T
Tang, Zhirui 279

W
Wang, Bing 565
Wang, Haihua 107
Wang, Junyi 554
Wang, Linlin 545
Wang, Min 267
Wang, Wen 459
Wang, Yan 50, 380
Wang, Yuwei 59
Wang, Ze 204
Wang, Zhihong 504
Wei, Jinri 176
Wright, Justin 137
Wu, Bin 370

X
Xia, Yunjian 469
Xian, Min 117
Xie, Tingting 127
Xie, Zhengjun 30
Xu, Bo 195

Y
Yang, Baojuan 487
Yang, Jing 469
Yang, Lijun 504
Yang, Tao 155
Yang, Yongzhi 11
Ye, Xiaoqin 117
Yin, Xin 431
You, You 290
Yu, Peng 370
Yu, Shuangmei 459
Yuan, Jingchao 97
Yuan, Yong 431

Z
Zhang, Jinhai 267
Zhang, Shuran 69
Zhang, Xiaoli 21
Zhang, Xiaolong 195
Zhang, Xinfeng 1
Zhang, Xinwei 441
Zhang, Yu 319
Zhang, Zhipeng 59
Zhang, Zijian 195
Zhao, Dongxia 504
Zhao, Xiaoxia 319
Zheng, Xiang 117
Zhou, Yanjun 40
Zhuang, Yueping 78