Advanced Intelligent Technologies for Industry: Proceedings of 2nd International Conference on Advanced Intelligent Technologies (ICAIT 2021) (Smart Innovation, Systems and Technologies, 285) [1st ed. 2022] 9811697345, 9789811697340

The book includes new research results of scholars from the Second International Conference on Advanced Intelligent Technologies (ICAIT 2021).

English Pages 615 [585] Year 2022


Table of contents :
ICAIT-2021 Organization
Preface
Contents
About the Editors
Part I Invited Papers
1 Utilization of the Open-Source Arduino Platform to Control Based on Logic Eτ
1.1 Introduction
1.2 Theoretical Reference
1.2.1 Irrigation System
1.2.2 Irrigation Methods and Systems
1.2.3 Localized Irrigation
1.2.4 Paraconsistent Logic
1.2.5 Paraconsistent Annotated Evidential Logic Eτ
1.2.6 Para-Analyser Algorithm
1.2.7 Arduino
1.3 Methodology
1.4 Results
1.5 Conclusion
References
2 Ontology Implementation of OPC UA and AutomationML: A Review
2.1 Introduction
2.2 Research Method
2.3 Results Analysis
2.4 Discussions
2.5 Conclusions
References
Part II Regular Papers
3 Airport Cab Scheduling Model Based on Queuing Theory
3.1 Problem Restatement
3.1.1 Problem Background
3.1.2 Problem Requirements
3.2 Model Assumptions
3.3 Symbol Description
3.4 Problem Analysis
3.4.1 Problem One Analysis
3.4.2 Problem Two Analysis
3.4.3 Problem Three Analysis
3.4.4 Problem Four Analysis
3.5 Model Building and Solving
3.5.1 Give the Taxi Driver's Decision Model in the Two Choices of A and B
3.5.2 Analyze the Rationality of the Model Through the Relevant Data of a Domestic Airport and Its City Taxi
3.5.3 How to Set the “Boarding Point” to Maximize the Efficiency of the Ride
3.5.4 A Theoretical Analysis of the Scale of “Boarding Points” and Lane Sides
3.5.5 According to the Difference in the Mileage of Different Taxis, Develop a “Priority” Arrangement Plan to Maximize Time and Revenue
References
4 Machine Reading Comprehension Model Based on Multi-head Attention Mechanism
4.1 Introduction
4.2 Related Work
4.2.1 DuReader Dataset
4.2.2 DuReader Baseline Model
4.2.3 Multi-headed Attention Mechanism
4.3 Multi-head Attention Mechanism
4.3.1 Multi-head Attention Overall Structure
4.3.2 Scaled Dot-Product Attention
4.4 Model Structure
4.4.1 Task Definition
4.4.2 Embedded Layer
4.4.3 Coding Layer
4.4.4 Matching Layer
4.4.5 Modeling Layer
4.4.6 Output Layer
4.5 Experiments and Results
4.5.1 Evaluation Criteria
4.5.2 Dataset
4.5.3 Experiments and Results
4.6 Conclusion
References
5 Design of Military Physical Fitness Evaluation System Based on Big Data Clustering Algorithm
5.1 Introduction
5.2 Hardware Unit Design of Military Physical Fitness Evaluation System
5.3 Software Module Design of Military Physical Fitness Evaluation System
5.3.1 Construction Module of Military Physical Fitness Evaluation Index System
5.3.2 Application Module of Big Data Clustering Algorithm
5.3.3 Database Building Module
5.4 Experiment and Result Analysis
5.4.1 Construction of Experimental Environment
5.4.2 Experimental Data Analysis
5.4.3 Acquisition of Experimental Results
5.5 Conclusion
References
6 ResNet-Based Multiscale U-Net for Human Parsing
6.1 Introduction
6.2 Related Work
6.3 Our Proposed Method
6.3.1 Encoder
6.3.2 Decoder
6.3.3 Multiscale Feature Fusion
6.4 Experiments
6.4.1 Datasets
6.4.2 Evaluation Metrics
6.4.3 Results
6.5 Conclusion
References
7 Short-Term Electricity Price Forecast Based on SSA-SVM Model
7.1 The Introduction
7.2 SSA
7.3 SVM
7.4 The SSA-SVM Model
7.5 Example Analysis
7.5.1 Data Processing
7.5.2 Data Normalization
7.5.3 Evaluation Indicators
7.5.4 Analysis of Prediction Results
7.6 Conclusion
References
8 Research on Mobile Advertising Click-Through Rate Estimation Based on Neural Network
8.1 Pricing and Placement of Advertisements
8.2 The AUC Indicators
8.3 Estimating Click-Through Rates of Mobile Ads Based on Neural Networks
8.3.1 Model
8.3.2 Objective Function and Algorithm
8.3.3 Key Points of Optimization
8.3.4 Result Analysis
8.4 Conclusion
References
9 Research on Macroeconomic Prediction Technology Based on Wavelet Neural Network
9.1 Prediction Technology
9.2 Optimized Wavelet Neural Network Model
9.3 Case Study
9.3.1 Specific Introduction
9.3.2 Training of Learning Samples
9.3.3 Analyze the Macroeconomic Forecast Results
9.4 Conclusion
References
10 Design Research on Financial Risk Control Model Based on CNN
10.1 Risk Control Model in Unbalanced Data Environment
10.1.1 Features of Financial Data
10.1.2 Pretreatment
10.1.3 Risk Control Model
10.2 Convolutional Neural Network (CNN) is Used to Build a Financial Risk Control Model
10.2.1 Design Network Structure
10.2.2 Result Analysis
10.2.3 Innovative Analysis
10.3 Conclusion
References
11 Application of Multi-objective Optimization Algorithm in Financial Market Portfolio
11.1 Genetic Algorithm Analysis of Multi-objective Programming
11.2 Analysis of Portfolio Optimization Model Based on Genetic Algorithm
11.2.1 Model Framework
11.2.2 Factor Analysis
11.2.3 Risk Indicators
11.3 Application Results
11.3.1 Data Processing
11.3.2 Optimal Solution of Investment Portfolio
11.3.3 Comparative Analysis
11.4 Conclusion
References
12 Research on Success Factors of Long Rental Apartment Based on SEM-ANN
12.1 Introduction
12.2 Methods
12.2.1 SEM Model
12.2.2 ANN Model
12.2.3 Research Model Based on SEM-ANN
12.3 Result Analysis
12.4 Conclusion
References
13 T-AGP: A Kind of Music-Researching Method Based on Data-Driven Comprehensive System of Network Analysis
13.1 Introduction
13.2 Structure and General Description
13.3 The Influencer Network and Influence Analysis
13.3.1 Influence Measures
13.3.2 Subnetwork Analysis
13.4 Similarity Measure
13.4.1 The Exploration of Core Features
13.4.2 Micro-Similarity Between Two Pieces of Music
13.4.3 Macro-Similarity Between Two Genres
13.5 Similarity and Difference Between Genres
13.5.1 Genre Similarity Analysis
13.5.2 Genre Influence Analysis
13.5.3 What Distinguishes a Genre
13.5.4 Relationships Between Genres
13.5.5 Genres’ Changes Over Time
13.6 The Mechanism of Influence from Influencer to Follower in Music
13.7 Dynamic Analysis Over Time and Identification of Major Leaps in Music
13.7.1 Dynamic Analysis of the Network
13.7.2 Music Characteristics’ Evolution Over Time
13.8 Cultural Influence on Music
13.8.1 Influence from Technology
13.8.2 Influence from Social and Political Revolution
13.9 Conclusions and Prospects
13.9.1 Conclusions
13.9.2 Weakness
13.9.3 Prospects
References
14 Application of Improved Algorithm of BP Neural Network
14.1 Model Analysis of BP Neural Network
14.2 Improve the Algorithm
14.3 Specific Application
14.3.1 PI Parameter Tuning of Valve Positioner
14.3.2 Shipboard Navigation System
14.4 Conclusion
References
15 Application of Improved Harmony Search Algorithm in Photovoltaic MPPT Under Partial Shadow Conditions
15.1 Introduction
15.2 Analysis of Output Characteristics of Photovoltaic Cells Array in Partial Shadow
15.3 Harmony Search (HS) Algorithm
15.3.1 Traditional Harmony Search Algorithm
15.3.2 Improvement of Traditional HS Algorithm
15.3.3 MPPT Control Method Based on IHS Algorithm
15.4 Simulation Experiment Analysis
15.5 Conclusion
References
16 A Traditional Butterfly Optimization Algorithm MPPT Method
16.1 Introduction
16.2 Multi-peak Output Characteristics of Photovoltaic Cells Under Local Shade
16.2.1 Mathematical Models of Multiple Photovoltaic Cells
16.2.2 Multi-peak Output Characteristics
16.3 MPPT Flow Chart Based on BOA
16.4 Simulation and Result Analysis
16.4.1 Simulation
16.4.2 Result Analysis
16.5 The Conclusion
References
17 Crops Plantation Problem in Uncertain Environment
17.1 Introduction
17.1.1 Motivation
17.1.2 Literature Review
17.1.3 Proposed Approaches
17.2 Basic Principles
17.3 Uncertain Crops Plantation Problem
17.4 Method
17.5 Conclusion
References
18 Simplified Full-Newton Step Method with a Kernel Function
18.1 Introduction
18.2 Preliminaries
18.2.1 The Central Path of the Perturbed Problems
18.2.2 Search Directions
18.2.3 The Iterative Process of the Algorithm
18.3 The Properties of Simple Function and the Proximity Function
18.4 Algorithm Analysis
18.4.1 The Feasibility Step
18.4.2 Upper Bounds for ‖dx^f‖
18.4.3 Values of θ and τ
18.4.4 Centering Steps
18.4.5 Total Number of Main Iterations
18.5 Conclusion
References
19 Marine Ship Identification Algorithm Based on Object Detection and Fine-Grained Recognition
19.1 Introduction
19.2 Overall Process
19.3 Design Scheme
19.3.1 Construction of Data Sets
19.3.2 Target Detection Network
19.3.3 Fine-Grained Identification Network
19.4 Experimental Verification
19.5 Conclusions and Prospects
References
20 Automatic Detection of Safety Helmet Based on Improved YOLO Deep Model
20.1 Introduction
20.2 YOLOv5 Network Model
20.2.1 Input
20.2.2 Backbone
20.2.3 Neck
20.2.4 Output
20.3 Improvements
20.3.1 Anchor Boxes and Detection Scale
20.3.2 Loss Function
20.4 Experimental Result and Analysis
20.4.1 Dataset
20.4.2 Training
20.4.3 Results
20.5 Conclusion
References
21 Opinion Mining and DENFIS Approaches for Modelling Variational Consumer Preferences Based on Online Comments
21.1 Introduction
21.2 Related Works
21.3 Proposed Methodology
21.3.1 Sentiment Analysis from Online Customer Comments
21.3.2 DENFIS Method
21.4 Implementation and Validation
21.5 Conclusions
References
22 Pysurveillance: A Novel Tool for Supporting Researchers in the Systematic Literature Review Process
22.1 Introduction
22.2 State of the Art
22.2.1 Performance Analysis Tools
22.2.2 Science Mapping Tools
22.3 Design and Development of the Solution
22.3.1 Front-End
22.3.2 Back-End
22.3.3 General Architecture of the System
22.4 Discussion
22.5 Conclusions
References
23 Research on Intelligent Control System of Corn Ear Vertical Drying Bin
23.1 Introduction
23.2 Parameter Optimization of Drying Device
23.3 Overall System Design
23.4 Design of the Control System
23.5 Monitoring System Module
23.6 Conclusions
References
24 The Design of Tracking Car Based on Single Chip Computer
24.1 Introduction
24.2 System Design
24.3 Hardware Circuit Design
24.4 Results Display and Analysis
24.5 Conclusion
References
25 Research Status and Trend Analysis of Coal Mine Electro-Mechanical Equipment Maintenance Under the Background of Smart Mine Construction
25.1 Introduction
25.2 Research Status
25.2.1 Fault Mechanism Study
25.2.2 Equipment Status Monitoring Research
25.2.3 Signal Analysis and Processing Study
25.2.4 Diagnosis and Prediction Algorithm Research
25.3 Problems and Challenges
25.3.1 Insufficient Study on the Failure Mechanism of Equipment
25.3.2 The Unavailable Failure Data at the Coal Mine Site
25.3.3 Diagnosis and Prediction of Low-Level Intelligence
25.4 Development Trend Analysis
25.4.1 Advance the Equipment Fault Identification Time
25.4.2 Development of Multi-fault Coupled Diagnosis and Predictive Maintenance
25.4.3 Strengthen the Guidance of Simulation and Experiment Data on Application Field Site
25.5 Conclusion
References
26 A Novel Conditioning Circuit for Testing Tank Fire Control Systems
26.1 Introduction
26.2 The Functional Requirements and Composition of the Conditioning Circuit
26.2.1 Analysis of the Functional Requirements of the Conditioning Circuit
26.2.2 Composition and Function of the Conditioning Circuit
26.3 The Working Principle of the Conditioning Circuit
26.3.1 Principle of I/O Control Signal Circuit
26.3.2 Principle of the DC Signal Acquisition Circuit
26.3.3 Principle of Conditioning Circuit
26.4 Experimental Testing
26.5 Conclusion
References
27 Intelligent Auxiliary Fault Diagnosis for Aircraft Using Knowledge Graph
27.1 Introduction
27.2 The Framework of Auxiliary Fault Diagnosis Technology Based on Knowledge Graph
27.3 Construction of Aircraft Fault Knowledge Graph
27.4 Auxiliary Diagnosis Using Fault Knowledge Graph
27.5 Conclusion
References
28 Study on the Relationship Between Ship Motion Attitude and Wave Resistance Increase Based on Numerical Simulation
28.1 Introduction
28.2 Numerical Wave Pool and Hull Motion
28.2.1 Numerical Wave Pool
28.2.2 Hull Motion
28.3 Analysis of Simulation Results
28.4 Conclusion
References
29 Simulation of Automatic Control System of Self-Balancing Robot Based on MATLAB
29.1 Case Study
29.2 Controller and System Modeling and Simulation Analysis
29.2.1 PID
29.2.2 LQR
29.3 Modeling and Simulation
29.4 Result Analysis
29.5 Conclusion
References
30 Design of Bionic Knee Joint Structure Based on the Dynamics of Double Rocker Mechanism
30.1 Dynamic Analysis of Double Rocker Mechanism of Intelligent Prosthesis Knee Joint
30.2 Optimal Design of Prosthetic Knee Joint Mechanism
30.2.1 Objective Function
30.2.2 Determination of Optimization Parameters
30.2.3 Constraints
30.3 Virtual Simulation Analysis of Prosthetic Knee Joint Mechanism
30.4 Conclusion
References
31 Research on Temperature Decoupling Control System of PET Bottle Blowing Machine Based on the Improved Single Neuron
31.1 Introduction
31.2 Modeling of the Temperature Control System of Bottle Blowing Machine
31.3 PID Decoupling Controller Based on Single Neuron
31.4 Simulation Study on Temperature Decoupling Control System of Bottle Blowing Machine
31.5 Actual Debugging Results
31.6 Conclusions
References
32 Research on the Simulation Method of the Drift Trajectory of Persons Overboard Under the Sea Search Mode of Unmanned Surface Vehicle (USV)
32.1 Introduction
32.2 Force Analysis of Drifting Motion at Sea
32.3 Drift Velocity Model
32.4 Computer Simulation
32.5 Conclusions
References
33 Design of Infrared Remote Control Obstacle Avoidance Car
33.1 Introduction
33.2 Systematic Design
33.3 Hardware Circuit Design
33.4 Results Display and Analysis
33.5 Conclusion
References
34 Parameter Adaptive Control of Virtual Synchronous Generator Based on Ant Colony Optimization Fuzzy
34.1 Introduction
34.2 Basic Principle of Virtual Synchronous Generator
34.3 Design and Optimization of Fuzzy Controller
34.3.1 Fuzzy Variable Relation
34.3.2 Fuzzy Controller Design
34.3.3 Ant Colony Optimization Fuzzy Controller
34.4 Simulation Result Analysis
34.5 Conclusion
References
35 Attitude Calculation of Quadrotor UAV Based on Gradient Descent Fusion Algorithm
35.1 Introduction
35.2 Attitude Calculation and Algorithm Implementation
35.2.1 Profile Modeling
35.2.2 Basic Concepts of Quaternion Method
35.2.3 Attitude Solution of Mahony Complementary Filtering Algorithm
35.2.4 Attitude Solution Based on Improved Gradient Descent Fusion Algorithm
35.3 Experimental Test and Result Analysis
35.4 Conclusion
References
36 A Mixed Reality Application of Francis Turbine Based on HoloLens 2
36.1 Introduction
36.2 MR Provides a New Learning Environment for Francis Turbine
36.2.1 HoloLens 2 is More Comfortable for Users to Wear Compared with HoloLens
36.2.2 HoloLens 2 is More Immersive Than HoloLens
36.2.3 Eye-Tracking
36.2.4 The Gesture Operation of HoloLens 2 is More Powerful Than that of HoloLens
36.2.5 Integrated Advanced AI Technology
36.2.6 HoloLens is Used in Various Education Fields
36.3 A Framework of Francis Turbine Based on HoloLens 2
36.3.1 Structure Module of Francis Turbine
36.3.2 Principle Module of Francis Turbine
36.3.3 Parts Recognition Module of Francis Turbine
36.3.4 Assembly Exam Module of Francis Turbine
36.4 Application Development of Francis Turbine Based on HoloLens 2
36.4.1 Design of Instructional Content
36.4.2 Choices of Hardware and Software
36.4.3 Implementation and Testing of the Application
36.5 Conclusions
References
37 Design of Intelligent Lighting Controller Based on Fuzzy Control
37.1 Introduction
37.2 Overall System Architecture
37.2.1 Functional Requirement
37.2.2 Workflow
37.3 Intelligent Lighting Controller Design
37.3.1 Structure of Fuzzy Controller
37.3.2 Identify Variables and Membership Functions
37.3.3 Establish Fuzzy Rules
37.4 Matlab Simulation and Result Analysis
37.5 Comparison of Simulink Modeling
37.6 Conclusion
References
38 Remote Sensing Monitoring of Cyanobacteria Blooms in Dianchi Lake Based on FAI Method and Meteorological Impact Analysis
38.1 Introduction
38.2 Data Sources and Research Methods
38.2.1 Overview of the Study Area
38.2.2 The Data Source
38.3 Research Method
38.4 Results and Analysis
38.4.1 Time and Space Distribution of Cyanobacteria Blooms in Dianchi Lake
38.4.2 Mechanism Analysis of Meteorological Factors Influencing the Occurrence and Distribution of Cyanobacteria Blooms in Dianchi Lake
38.5 Discussion
38.6 Conclusion
References
39 Situation Analysis of Land Combat Multi-platform Direct Aim Confrontation with Non-combat Loss
39.1 Introduction
39.2 Time Domain Situation Model of Land Combat Multi-platform Direct Aim Confrontation with Non-combat Loss
39.3 Situation of Land Combat Multi-platform Direct Aim Confrontation with Non-combat Loss
39.4 Situation Simulation Analysis of Land Combat Multi-platform Direct Aim Confrontation with Non-combat Loss
39.5 Conclusion
References
40 Automotive Steering Wheel Angle Sensor Based on S32K144 and KMZ60
40.1 Introduction
40.2 Analysis of the Principle of Magnetoresistance and Mechanical Structure
40.3 Mechanical Structure Design
40.4 Hardware Design
40.5 Software Algorithm
40.5.1 Zero Mark
40.5.2 Angle Calculation
40.6 Conclusion
References
41 Research on Vehicle Routing Problem of Urban Emergency Material Delivery with Time Window
41.1 Introduction
41.2 Model Building
41.2.1 Problem Description
41.2.2 Mathematical Model
41.3 An Improved Ant Colony Algorithm for MOVRPTW Problem
41.3.1 Ant Colony Algorithm
41.3.2 Selection Strategy
41.3.3 Pheromone Updating
41.3.4 Algorithm Steps
41.4 Case Analysis
41.5 Conclusion
References
42 UAV Trajectory Planning Based on PH Curve Improved by Particle Swarm Optimization and Quasi-Newton Method
42.1 Introduction
42.2 Problem Description
42.3 UAV Trajectory Planning Model
42.3.1 Trajectory Evaluation Model
42.3.2 The PH Curve
42.4 Trajectory Planning Algorithm
42.5 Experimental Simulation
42.6 Conclusion
References
43 Maintenance Data Collection and Analysis of Road Transport Vehicles’ Safety Components
43.1 Introduction
43.2 Key Safety Components of Road Transport Vehicles
43.3 Maintenance Data Collection of Road Transport Vehicles’ Safety Components
43.4 Fault Analysis of Safety Components of Road Transport Vehicles
43.5 Conclusion
References
44 Effectiveness Evaluation Method of Marine Environmental Weapons and Equipment Based on Ensemble Learning
44.1 Introduction
44.2 Evaluation Method of Ensemble Learning Efficiency Based on AHP Modeling
44.2.1 Efficiency Evaluation Modeling Idea Based on AHP Modeling
44.2.2 Build a Performance Evaluation System
44.2.3 Evaluation Model Intelligent Learning Optimization Method Based on Ensemble Learning
44.3 Application Cases and Advantages of Ensemble Learning Assessment Model
44.3.1 Integrated Learning Application Case
44.3.2 Advantage Analysis of Ensemble Learning Assessment Model
44.4 Conclusion
References
45 A Dynamic Load Distribution Method for Multi-Robot
45.1 Introduction
45.2 Kinematics and Force Analysis of Multi-Robot Grasping System
45.2.1 Kinematics of Multi-Robot Grasping System
45.2.2 Force Analysis of Multi-Robot Handling System
45.2.3 Dynamic Manipulability of Multi-Robot
45.3 Dynamic Load Distribution of Multi-Robot Grasping System
45.3.1 Virtual Mass and Virtual Inertia
45.3.2 Load Distribution
45.4 Simulation
45.5 Conclusion
References
46 Research on CNN-Based Image Denoising Methods
46.1 Introduction
46.2 Related Work
46.2.1 Spatial Domain Approach
46.2.2 Transformation Field Method
46.3 Method
46.3.1 Convolutional Neural Networks (CNN)
46.3.2 Convolutional Neural Networks for Image Denoising
46.4 Experiment
46.4.1 Experimental Setting
46.4.2 Experimental Results
46.5 Conclusion
References
47 Sheep Posture Recognition Based on SVM
47.1 The Significance of Sheep Posture Recognition
47.2 Methods and Status of Animal Posture Recognition
47.3 An Algorithm to Segment the Head, Body and Legs of Sheep
47.3.1 Definition of Sheep Posture Classification
47.3.2 Establish Posture Image Data Set
47.3.3 The Sheep Segmentation Algorithm
47.3.4 The Feature Extraction
47.3.5 The Normalization
47.4 The Posture Classification Based on SVM Method
47.5 Conclusion
References
48 Research on Application of Information Security Protection Technology in Power Internet of Things
48.1 Introduction
48.2 The Structure and Security of Internet of Things
48.2.1 Perception Layer
48.2.2 Network Layer
48.2.3 Application Layer
48.2.4 Platform Layer
48.3 Information Security Protection Technology of Power Internet of Things
48.3.1 Network Security Vulnerability Scanning Technology
48.3.2 VPN Technology
48.3.3 Security Audit Technology
48.3.4 Lightweight Security Authentication Technology
48.3.5 Biometric Technology
48.3.6 The New Cryptographic Technology
48.4 Conclusion
References
49 Research on User Conversational Sentiment Analysis Based on Deep Learning
49.1 Introduction
49.2 Sentiment Analysis Model Based on Deep Learning
49.2.1 The Generation of Word Vectors
49.2.2 The Construction of Emotional Model
49.3 Experiment and Analysis
49.4 Conclusion
References
50 Research and Application of Edge Computing and Power Data Interaction Mechanism Based on Cloud-Edge Collaboration
50.1 Introduction
50.2 Key Technologies of Edge Intelligent Agent
50.2.1 Cloud Edge Collaborative Architecture
50.2.2 Edge Computing
50.3 Power IoT Data Interaction Mechanism
50.3.1 Data Fusion Solution Architecture
50.3.2 Data Interaction Mechanism
50.4 Data Management and Application
50.4.1 Data Management
50.4.2 Power Business IOT Management Application
50.5 Summary
References
51 Research and Application of Edge Resource Allocation Algorithm of Power Internet of Things Based on SDN
51.1 Introduction
51.2 System Model and Load Model
51.2.1 System Model
51.2.2 Network Load Model
51.3 Simulation Experiment
51.4 Summary
References
52 Research and Application of Key Technologies of Multi-round Dialogue in Intelligent Customer Service
52.1 Introduction
52.2 Key Technology Methods of Multi-round Dialogue Based on Deep Learning
52.2.1 Multi-round Dialogue Preprocessing
52.2.2 Multiple Rounds of Dialogue
52.2.3 Multi-round Dialogue Key Technology Matching
52.2.4 Key Feature Extraction
52.2.5 Evaluation
52.3 Research and Application of Multi-round Dialogue
52.3.1 Environment Setup
52.3.2 Modeling of Multiple Rounds of Dialogue
52.3.3 Model Definition
52.4 Summary
References
53 Rolling Bearing Fault Diagnosis Method Based on SSA-ELM
53.1 Preface
53.2 Sparrow Search Algorithm
53.3 Optimized ELM Based on Sparrow Search Algorithm
53.3.1 ELM
53.3.2 Sparrow Search Algorithm to Optimize ELM Fault Diagnosis Process
53.4 Case Analysis of Rolling Bearing Fault Diagnosis
53.4.1 The Experimental Data
53.4.2 Rolling Bearing State Recognition Experiment
53.5 Conclusion
References
54 OneNet-Based Smart Classroom Design for Effective Teaching Management
54.1 Introduction
54.2 Related Work
54.2.1 Smart Classroom
54.2.2 Indoor Positioning
54.2.3 Data Visualization
54.3 Proposed Methodology
54.3.1 The Control System of Classroom Hardware Equipment and Teaching Equipment
54.3.2 Smart Security Module
54.3.3 Attendance Management Module
54.3.4 Data Visualization Module
54.4 System Implementation
54.4.1 Perception Layer
54.4.2 Transport Layer
54.4.3 Control Layer
54.4.4 Client Application
54.4.5 Cloud Application
54.5 Conclusion
References
55 Research on Image Denoising Method in Spatial Domain by Using MATLAB
55.1 Introduction
55.2 Noise Model and Principle of Spatial Domain Denoising
55.2.1 Models of Noise
55.2.2 Principles of Spatial Domain Denoising
55.3 Three Methods of Spatial Domain Denoising Based on MATLAB
55.3.1 Median Filter
55.3.2 Mean Value Filtering
55.3.3 Wiener Filter
55.4 Experimental Procedures, Results and Analysis
55.4.1 Experiment 1: Apply Median, Mean, and Wiener Filters to Salt-and-Pepper Noise, Respectively
55.4.2 Experiment 2: Apply Median, Mean, and Wiener Filters to Poisson Noise, Respectively
55.5 Conclusion
References
56 An Improved Low-Light Image Enhancement Algorithm Based on Deep Learning
56.1 Introduction
56.1.1 Reasons for Research
56.1.2 Traditional and Modern Methods
56.1.3 Research Content
56.2 Basic Network for Improvement
56.2.1 EnlightenGAN
56.2.2 Wgan
56.3 The Proposed Improved Method
56.3.1 Structural Similarity Loss
56.3.2 Energy-Based GAN
56.4 Experiments
56.4.1 Test Data
56.4.2 Experimental Results
56.4.3 Conclusion and Outlook
References
57 Image Recognition of Wind Turbines Blade Surface Defects Based on Mask-RCNN
57.1 Introduction
57.2 Blade Surface Defect Recognition Model Based on Mask-RCNN
57.2.1 Region Proposal Network
57.2.2 Feature Pyramid Network
57.2.3 ROIAlign Operation
57.2.4 Multi-task Loss
57.3 Overall Process of Blade Surface Defect Recognition Based on Mask-RCNN
57.4 Simulation Case
57.4.1 Environmental Configuration and Deep Learning Framework
57.4.2 Blade Image Data Set Construction and Model Pre-training
57.4.3 Experiment and Analysis
57.5 Conclusion
References
58 TCAS System Fault Research and Troubleshooting Process
58.1 Introduction
58.2 Introduction of TCAS System Structure
58.3 Research on Function Module of TCAS Processor
58.4 TCAS Processor Fault Test Flow
58.4.1 Test System Design Scheme
58.4.2 Troubleshooting Process of TCAS Processor
58.5 Conclusions
References
Author Index


Smart Innovation, Systems and Technologies 285

Kazumi Nakamatsu · Roumen Kountchev · Srikanta Patnaik · Jair M. Abe · Andrey Tyugashev   Editors

Advanced Intelligent Technologies for Industry Proceedings of 2nd International Conference on Advanced Intelligent Technologies (ICAIT 2021)


Smart Innovation, Systems and Technologies Volume 285

Series Editors Robert J. Howlett, Bournemouth University and KES International, Shoreham-by-Sea, UK Lakhmi C. Jain, KES International, Shoreham-by-Sea, UK

The Smart Innovation, Systems and Technologies book series encompasses the topics of knowledge, intelligence, innovation and sustainability. The aim of the series is to make available a platform for the publication of books on all aspects of single and multi-disciplinary research on these themes in order to make the latest results available in a readily-accessible form. Volumes on interdisciplinary research combining two or more of these areas is particularly sought. The series covers systems and paradigms that employ knowledge and intelligence in a broad sense. Its scope is systems having embedded knowledge and intelligence, which may be applied to the solution of world problems in industry, the environment and the community. It also focusses on the knowledge-transfer methodologies and innovation strategies employed to make this happen effectively. The combination of intelligent systems tools and a broad range of applications introduces a need for a synergy of disciplines from science, technology, business and the humanities. The series will include conference proceedings, edited collections, monographs, handbooks, reference books, and other relevant types of book in areas of science and technology where smart systems and technologies can offer innovative solutions. High quality content is an essential feature for all book proposals accepted for the series. It is expected that editors of all accepted volumes will ensure that contributions are subjected to an appropriate level of reviewing process and adhere to KES quality principles. Indexed by SCOPUS, EI Compendex, INSPEC, WTI Frankfurt eG, zbMATH, Japanese Science and Technology Agency (JST), SCImago, DBLP. All books published in the series are submitted for consideration in Web of Science.

More information about this series at https://link.springer.com/bookseries/8767

Kazumi Nakamatsu · Roumen Kountchev · Srikanta Patnaik · Jair M. Abe · Andrey Tyugashev Editors

Advanced Intelligent Technologies for Industry Proceedings of 2nd International Conference on Advanced Intelligent Technologies (ICAIT 2021)

Editors Kazumi Nakamatsu University of Hyogo Kobe, Japan

Roumen Kountchev Technical University of Sofia Sofia, Bulgaria

Srikanta Patnaik SOA University Bhubaneswar, India

Jair M. Abe Paulista University São Paulo, Brazil

Andrey Tyugashev Samara State Technical University Samara, Russia

ISSN 2190-3018 ISSN 2190-3026 (electronic) Smart Innovation, Systems and Technologies ISBN 978-981-16-9734-0 ISBN 978-981-16-9735-7 (eBook) https://doi.org/10.1007/978-981-16-9735-7 © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd. The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore

ICAIT-2021 Organization

Honorary Chair Prof. Lakhmi C. Jain, KES International, UK

General Co-Chairs Prof. Kazumi Nakamatsu, University of Hyogo, Japan Assoc. Prof. Ari Aharari, Sojo University, Japan

Conference Co-Chairs Prof. Jair M. Abe, Paulista University, Brazil Prof. Andrey A. Tyugashev, Samara State Technical University, Russia

International Advisory Board Chair Prof. Srikanta Patnaik, SOA University, India

Program Chair Prof. Roumen Kountchev, Technical University of Sofia, Bulgaria


Organizing Co-Chairs Mr. Silai Zhou, IRNet International Communication Center, China Mr. Bin Hu, IRNet International Communication Center, China

ICAIT-2021 International Advisory Board and Program Committee Srikanta Patnaik, India Roumen Kountchev, Bulgaria Jair M. Abe, Brazil Ari Aharari, Japan Nashwa El-Bendary, Egypt Aboul Ela Hassanian, Egypt Tulay Yildirim, Turkey Altas Ismail, Turkey Valentina E. Balas, Romania Margarita Favorskaya, Russia Andrei Tyugashev, Russia Ajith Abraham, USA Vincenzo Piuri, Italy Morgado Dias, Portugal Nicolae Paraschiv, Romania Sachio Hirokawa, Japan Siddhartha Bhattacharyya, India Ladjel Bellatreche, France Juan D. Velasquez Silva, Chile Hossam A. Gaber, Canada Ngoc Thanh Nguyen, Poland Mario Divan, Argentina Petia Koprinkova, Bulgaria Jawad K. Ali, Iraq Sufyan T. Faraj Al-Janabi, Iraq Hercules A. Prado, Brazil Lorna Uden, UK Sunil Kumar Khatri, India Mabrouk Omrani, USA Ding Li Ya, Japan Won Seok Yang, Japan Marius Olteanu, Romania Veska Georgieva, Bulgaria Rouminia Kountchev, Bulgaria


Toshifumi Kimura, Japan Minh Le Nguyen, Japan Aslina Baharum, Malaysia Ivo Draganov, Bulgaria Kalin Dimitrov, Bulgaria Zhenfeng Xu, China JinYi Wang, China Tengyue Mao, China Zhijun Wang, China Xingbao Liu, China


Preface

The first conference in the ICAIT (International Conference on Advanced Intelligent Technologies) series was launched as the 1st International Conference on Agriculture and IT/IoT/ICT (ICAIT2019) in Xi'an, China, during November 22–24, 2019, although the topics of ICAIT2019 were mainly agriculture and intelligent network technologies. The purpose of the ICAIT conference series is to provide a platform for exchanging ideas and discussing a wide range of topics in the most advanced intelligent technologies among academic researchers, graduate students studying the theories and applications of intelligent technologies, engineers, and practitioners concerned with intelligent technologies applied to industry. We have therefore broadened the scope of the ICAIT conference series, and the second conference was held as the 2nd International Conference on Advanced Intelligent Technologies (ICAIT2021) during 23–24, 2021, subtitled Intelligent Technologies and Industries. ICAIT2021 was originally planned to be held in Xi'an, China; however, with the world still fighting the COVID-19 pandemic, there is no doubt that the safety and well-being of our participants come first. Considering the health and safety of everyone, we made the difficult decision to convert ICAIT2021 into a fully online conference via the Internet. The topics of ICAIT2021 include all aspects of intelligent technologies applicable to various parts of industry and its infrastructure, as follows: Artificial Intelligence, Artificial Life, Automated Reasoning, Automated Manufacturing, Automatic Car Driving, Autonomous Robot, Bayesian Networks, Bee-colony Optimization, Bio-inspired Systems, Business Process Modelling, Cloud Computing, Clustering, Communication Network Systems, Complex Systems, Computational Intelligence, Data Analysis, Data Mining, Data Processing Systems, Deep Learning, Distributed Systems, E-business, E-commerce, E-learning, Embedded Systems, Emotion Recognition, Energy Technologies, Enterprise Network Management, Expert Systems, Evolutionary Computing,


Factory Automation, Farm Management, Forecasting Systems, Fuzzy Computing, Fuzzy Controller, Fuzzy Logic, Fuzzy Set, Fuzzy Systems, Genetic Algorithm, Hardware Design, Hardware Emulation, Heuristics, High Performance Computing, Hybrid Intelligent Systems, Image Processing, Information Security, Intelligent Agent Technologies, Intelligent Control Systems, Intelligent Information Systems, Intelligent Logistics for Industry, Intelligent Monitoring Systems, Intelligent Network Routing, Intelligent Numerical Control, Ontology Techniques, Process Control, Intelligent Production Systems, Intelligent Safety Verification, Intelligent Teaching Systems, Intelligent Transportation Systems, Internet of Things, Internet Security, Machine Learning, Mathematical Foundation of Intelligence, Multi-Agent Systems, Natural Language Processing, Neural Networks, Neuro Computing, Optimization Techniques, Pattern Recognition, Quantum Computing, Risk Management Systems, Signal Processing, Virtual Reality, etc.

We accepted 2 invited and 56 regular papers for ICAIT2021 from the 89 papers submitted from China, India, Japan, Malaysia, Argentina, Spain, Brazil, the UK, and other countries. This volume presents all the accepted papers of ICAIT2021, categorized into four parts: intelligent computing, intelligent control, robotics, and others in intelligent technologies.

Lastly, we wish to express our sincere appreciation to all individuals and the program committee for their review of all the submissions, which was vital to the success of ICAIT2021, and also to the members of the organizing committee who dedicated their time and effort to planning, promoting, organizing, and running the conference. Special appreciation is extended to our keynote speaker, Prof. Dr. Lakhmi C. Jain, KES International, UK, who gave a very beneficial speech for the conference audience titled Research Direction, Research Projects, and Publication, and to Prof. Jair M. Abe, Paulista University, Sao Paulo, Brazil, and Prof. Mario Divan, National University of La Pampa, Santa Rosa, Argentina, who contributed the invited papers titled Utilization of the Open-Source Arduino Platform to Control Based on Logic Eτ and Ontology Implementation of OPC UA and AutomationML: A Review, respectively.

Kazumi Nakamatsu, Kobe, Japan
Roumen Kountchev, Sofia, Bulgaria
Srikanta Patnaik, Bhubaneswar, India
Jair M. Abe, São Paulo, Brazil
Andrey Tyugashev, Samara, Russia

October 2021

Contents

Part I Invited Papers

1 Utilization of the Open-Source Arduino Platform to Control Based on Logic Eτ . . . . . . . 3
Jonatas Santos de Souza, Jair M. Abe, Nilson Amado de Souza, Luiz Antônio de Lima, Flávio Amadeu Bernardini, Liliam Sayuri Sakamoto, Angel Antônio Gonzalez Martinez, and Kazumi Nakamatsu

2 Ontology Implementation of OPC UA and AutomationML: A Review . . . . . . . 17
Johnny Alvarado Domínguez, Rachel Pairol Fuentes, Marcela Vegetti, Luciana Roldán, Silvio Gonnet, and Mario José Diván

Part II Regular Papers

3 Airport Cab Scheduling Model Based on Queuing Theory . . . . . . . 29
Yuanfei Ma and Liyao Tang

4 Machine Reading Comprehension Model Based on Multi-head Attention Mechanism . . . . . . . 45
Yong Xue

5 Design of Military Physical Fitness Evaluation System Based on Big Data Clustering Algorithm . . . . . . . 59
Dong Xia, Rui Ma, Ying Wu, and Ying Ma

6 ResNet-Based Multiscale U-Net for Human Parsing . . . . . . . 71
Luping Fan and Peng Yang

7 Short-Term Electricity Price Forecast Based on SSA-SVM Model . . . . . . . 79
Zhenyu Duan and Tianyu Liu

8 Research on Mobile Advertising Click-Through Rate Estimation Based on Neural Network . . . . . . . 89
Songjiang Liu and Songxian Liu

9 Research on Macroeconomic Prediction Technology Based on Wavelet Neural Network . . . . . . . 95
Tao Wang, Yuxuan Du, and Zheming Cui

10 Design Research on Financial Risk Control Model Based on CNN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105 Zhuoran Lu 11 Application of Multi-objective Optimization Algorithm in Financial Market Portfolio . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115 Yue Wu 12 Research on Success Factors of Long Rental Apartment Based on SEM-ANN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125 Zhentao Yu 13 T-AGP: A Kind of Music-Researching Method Based on Data-Driven Comprehensive System of Network Analysis . . . . . . 135 Xiaole Duan, Yuqiao Yan, Yunhan Yu, and Furong Jia 14 Application of Improved Algorithm of BP Neural Network . . . . . . . . 163 Qingzi Shi, Zhicheng Zeng, and Jiaxuan Tang 15 Application of Improved Harmony Search Algorithm in Photovoltaic MPPT Under Partial Shadow Conditions . . . . . . . . . 169 Zhichao Liang, Mengda Li, Xubin Zheng, and Linping Yao 16 A Traditional Butterfly Optimization Algorithm MPPT Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179 Xubin Zheng, Mengda Li, Zhichao Liang, and Linping Yao 17 Crops Plantation Problem in Uncertain Environment . . . . . . . . . . . . . 187 Haitao Zhong, Fangchi Liang, Mingfa Zheng, and Lisheng Zhang 18 Simplified Full-Newton Step Method with a Kernel Function . . . . . . 195 Hongmei Bi, Fengyin Gao, and Xuejun Zhao 19 Marine Ship Identification Algorithm Based on Object Detection and Fine-Grained Recognition . . . . . . . . . . . . . . . . . . . . . . . . 207 Xingyue Du, Jianjun Wang, Yiqing Li, and Bingling Tang 20 Automatic Detection of Safety Helmet Based on Improved YOLO Deep Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217 Nan Ni and Chao Hu


21 Opinion Mining and DENFIS Approaches for Modelling Variational Consumer Preferences Based on Online Comments . . . . 229 Huimin Jiang, Gaicong Guo, and Farzad Sabetzadeh 22 Pysurveillance: A Novel Tool for Supporting Researchers in the Systematic Literature Review Process . . . . . . . . . . . . . . . . . . . . . 239 Julen Cestero, David Velásquez, Elizabeth Suescún, Mikel Maiza, and Marco Quartulli 23 Research on Intelligent Control System of Corn Ear Vertical Drying Bin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249 Qiang Fei, BiaoJin, Lijing Yan, Lian Yao Tang, and Rong Chen 24 The Design of Tracking Car Based on Single Chip Computer . . . . . . 255 Hanhong Tan, Wenze Lan, and Zhimin Huang 25 Research Status and Trend Analysis of Coal Mine Electro-Mechanical Equipment Maintenance Under the Background of Smart Mine Construction . . . . . . . . . . . . . . . . . . . . 263 Libing Zhou 26 A Novel Conditioning Circuit for Testing Tank Fire Control Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273 Yang Cao, Hongtian Liu, Chao Song, Hai Lin, Dongjun Wang, and Hongwei Wu 27 Intelligent Auxiliary Fault Diagnosis for Aircraft Using Knowledge Graph . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 283 Xilang Tang, Bin Hu, Jianhao Wang, Chuang Wu, and Sohail M. Noman 28 Study on the Relationship Between Ship Motion Attitude and Wave Resistance Increase Based on Numerical Simulation . . . . 289 Yuhao Cao and Yuliang Liufu 29 Simulation of Automatic Control System of Self-Balancing Robot Based on MATLAB . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 299 Zihan Wang and Minxing Fan 30 Design of Bionic Knee Joint Structure Based on the Dynamics of Double Rocker Mechanism . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 307 Tianshuo Xiao 31 Research on Temperature Decoupling Control System of PET Bottle Blowing Machine Based on the Improved Single Neuron . . . . 313 Yuanwei Li


32 Research on the Simulation Method of the Drift Trajectory of Persons Overboard Under the Sea Search Mode of Unmanned Surface Vehicle (USV) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321 Ying Chang, Yani Cui, and Jia Ren 33 Design of Infrared Remote Control Obstacle Avoidance Car . . . . . . . 331 Hanhong Tan and Ziying Qi 34 Parameter Adaptive Control of Virtual Synchronous Generator Based on Ant Colony Optimization Fuzzy . . . . . . . . . . . . . 339 Yao Linping, Li Mengda, Liang Zhichao, and Zheng Xubin 35 Attitude Calculation of Quadrotor UAV Based on Gradient Descent Fusion Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 351 Li Dengpan, Ren Xiaoming, Gu Shuang, Chen Dongdong, and Wang Jinqiu 36 A Mixed Reality Application of Francis Turbine Based on HoloLens 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 361 Wenqing Wu and Chibing Gong 37 Design of Intelligent Lighting Controller Based on Fuzzy Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 369 Ting Zheng, Yan Peng, Yang Yang, and Yimao Liu 38 Remote Sensing Monitoring of Cyanobacteria Blooms in Dianchi Lake Based on FAI Method and Meteorological Impact Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 381 Wenrui Han 39 Situation Analysis of Land Combat Multi-platform Direct Aim Confrontation with Non-combat Loss . . . . . . . . . . . . . . . . . . . . . . . 395 Guosheng Wang, Bing Liang, and Kehu Xu 40 Automotive Steering Wheel Angle Sensor Based on S32K144 and KMZ60 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 403 Zhiqiang Zhou, Yu Gao, Long Qian, Tianyu Li, and Wenhao Peng 41 Research on Vehicle Routing Problem of Urban Emergency Material Delivery with Time Window . . . . . . . . . . . . . . . . . . . . . . . . . . . 411 Yabin Wang, Jinguo Wang, and Shuai Wang 42 UAV Trajectory Planning Based on PH Curve Improved by Particle Swarm Optimization and Quasi-Newton Method . . . . . . 423 Aoyu Zheng, Bingjie Li, Mingfa Zheng, and Lisheng Zhang 43 Maintenance Data Collection and Analysis of Road Transport Vehicles’ Safety Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 435 Fujia Liu, Xiaojuan Yang, Shuquan Xv, and Guofang Wu


44 Effectiveness Evaluation Method of Marine Environmental Weapons and Equipment Based on Ensemble Leaning . . . . . . . . . . . . 451 Honghao Zheng, Qingzuo Chen, Bingling Tang, Ming Zhao, Gang Lu, Jianjun Wang, and Guangzhao Song 45 A Dynamic Load Distribution Method for Multi-Robot . . . . . . . . . . . 463 Yang Li, Lin Geng, Jinxi Han, Jianguang Jia, and Jing Li 46 Research on CNN-Based Image Denoising Methods . . . . . . . . . . . . . . 475 Wei Liu, Chao Zhang, and Yonghang Tai 47 Sheep Posture Recognition Based on SVM . . . . . . . . . . . . . . . . . . . . . . . 483 YaJuan Yao, Han Tan, Juan Yao, Cheng Zhang, and Fang Tian 48 Research on Application of Information Security Protection Technology in Power Internet of Things . . . . . . . . . . . . . . . . . . . . . . . . . 491 Dong Li, Yingxian Chang, Hao Yu, Qingquan Dong, and Yuhang Chen 49 Research on User Conversational Sentiment Analysis Based on Deep Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 499 Yongbo Ma, Xiuchun Wang, Juan Liu, Zhen Sun, and Bo Peng 50 Research and Application of Edge Computing and Power Data Interaction Mechanism Based on Cloud-Edge Collaboration . . . . . . 507 Bing Tian, Zhen Huang, Shengya Han, Qilin Yin, and Qingquan Dong 51 Research and Application of Edge Resource Allocation Algorithm of Power Internet of Things Based on SDN . . . . . . . . . . . . 515 Dong Li, Shuangshuang Guo, Yue Zhang, Hao Xu, and Qingquan Dong 52 Research and Application of Key Technologies of Multi-round Dialogue in Intelligent Customer Service . . . . . . . . . . . . . . . . . . . . . . . . 523 Xiubin Huang, Lingli Zeng, Xiaoyi Wang, Jun Yu, and Ziqian Li 53 Rolling Bearing Fault Diagnosis Method Based on SSA-ELM . . . . . 531 Long Qian, Zhigang Wang, Zhiqiang Zhou, Dong Li, Xinjie Peng, Binbin Li, and Bin Jiao 54 OneNet-Based Smart Classroom Design for Effective Teaching Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 539 Junlin Shang, Yuhao Liu, and Yuxin Lei 55 Research on Image Denoising Method in Spatial Domain by Using MATLAB . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 555 Shuying Li and Wei Liu


56 An Improved Low-Light Image Enhancement Algorithm Based on Deep Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 563 Wen Chen and Chao Hu 57 Image Recognition of Wind Turbines Blade Surface Defects Based on Mask-RCNN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 573 Dong Wang, Yanfeng Zhang, and Xiyun Yang 58 TCAS System Fault Research and Troubleshooting Process . . . . . . . 585 Xiaomin Xie, Renwei Dou, Kun Hu, Jianghuai Du, and Yueqin Wang Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 593

About the Editors

Prof. Kazumi Nakamatsu received the Ms. Eng. and Dr. Sci. from Shizuoka University and Kyushu University, Japan, respectively. His research interests encompass various kinds of logic and their applications to computer science, especially paraconsistent annotated logic programs and their applications. He has developed some paraconsistent annotated logic programs called Annotated Logic Program with Strong Negation (ALPSN), Vector ALPSN (VALPSN), Extended VALPSN (EVALPSN), and before-after EVALPSN (bf-EVALPSN) recently, and applied them to various intelligent systems such as a safety verification-based railway interlocking control system and process order control. He is an author of over 150 papers and 20 chapters, and 10 edited books published by prominent publishers. He has chaired various international conferences, workshops, and invited sessions, and he has been a member of numerous international program committees of workshops and conferences in the area of computer science. He has served as the editor-in-chief of the International Journal of Reasoning-based Intelligent Systems (IJRIS); he is now the founding editor of IJRIS, and an editorial board member of many international journals. He has contributed numerous invited lectures at international workshops, conferences, and academic organizations. He also is a recipient of numerous research paper awards. He is a member of ACM. Prof. Roumen Kountchev Ph.D., D.Sc., is a professor at the Faculty of Telecommunications, Department of Radio Communications and Video Technologies, Technical University of Sofia, Bulgaria. His areas of interest include: digital signal and image processing, image compression, multimedia watermarking, video communications, pattern recognition, and neural networks. He has 350 papers published in magazines and proceedings of conferences; 20 books; 47 chapters; and 21 patents. He had been principle investigator of 38 research projects. At present, he is a member of Euro Mediterranean Academy of Arts and Sciences and president of Bulgarian Association for Pattern Recognition (member of Intern. Association for Pattern Recognition). He is an editorial board member of: Intern. J. of Reasoning-based Intelligent Systems; Intern. J. Broad Research in Artificial Intelligence and Neuroscience; KES Focus Group on Intelligent Decision Technologies; Egyptian Computer xvii


Science J.; Intern. J. of Bio-Medical Informatics and e-Health, and Intern. J. Intelligent Decision Technologies. He has been a plenary speaker at: WSEAS Intern. Conf. on Signal Processing, 2009, Istanbul, Turkey; WSEAS Intern. Conf. on Signal Processing, Robotics and Automation, University of Cambridge 2010, UK; WSEAS Intern. Conf. on Signal Processing, Computational Geometry and Artificial Vision 2012, Istanbul, Turkey; Intern. Workshop on Bioinformatics, Medical Informatics and e-Health 2013, Ain Shams University, Cairo, Egypt; Workshop SCCIBOV 2015, Djillali Liabes University, Sidi Bel Abbes, Algeria; Intern. Conf. on Information Technology 2015 and 2017, Al-Zaytoonah University, Amman, Jordan; WSEAS European Conf. of Computer Science 2016, Rome, Italy; The 9th Intern. Conf. on Circuits, Systems and Signals, London, UK, 2017; IEEE Intern. Conf. on High Technology for Sustainable Development 2018 and 2019, Sofia, Bulgaria; The 8th Intern. Congress of Information and Communication Technology, Xiamen, China, 2018; General Chair of the Intern. Workshop New Approaches for Multidimensional Signal Processing, July 2020, Sofia, Bulgaria. Prof. Srikanta Patnaik is presently working as the director of International Relation and Publication of SOA University. He is a full professor in the Department of Computer Science and Engineering, SOA University, Bhubaneswar, India. He has received his Ph.D. (Engineering) on Computational Intelligence from Jadavpur University, India, in 1999. He has supervised more than 25 Ph.D. theses and 60 master’s theses in the area of computational intelligence, machine learning, soft computing applications, and re-engineering. He has published around 100 research papers in international journals and conference proceedings. He is the author of 2 text books, 52 edited volumes, and few invited chapters, published by leading international publisher like Springer-Verlag, Kluwer Academic. He is the editorin-chief of International Journal of Information and Communication Technology and International Journal of Computational Vision and Robotics published from Inderscience Publishing House, England, and International Journal of Computational Intelligence in Control, published by MUK Publication, editor of Journal of Information and Communication Convergence Engineering, and associate editor of Journal of Intelligent and Fuzzy Systems (JIFS), which are all Scopus Index journals. He is also editor-in-chief of Book Series on “Modeling and Optimization in Science and Technology” published from Springer, Germany, and Advances in Computer and Electrical Engineering (ACEE) and Advances in Medical Technologies and Clinical Practice (AMTCP), published by IGI Global, USA. He has travelled more than 20 countries across the globe to deliver invited talks and keynote address at various places. He is also the visiting professor to some of the universities in China, South Korea, and Malaysia.


Prof. Jair M. Abe received B.A. and M.Sc. in Pure Mathematics, University of Sao Paulo, Brazil, and also received the doctor degree and Livre Docente title from the same university. He was the coordinator of Logic Area of Institute of Advanced Studies, University of Sao Paulo, Brazil, 1987-2019 and full professor at Paulista University, Brazil. His research interest topics include paraconsistent annotated logics and AI, ANN in biomedicine and automation, among others. He is a senior member of IEEE. He is studious of a family of paraconsistent annotated logic which is used to solve many complex problems in engineering. He has authored/edited books on paraconsistent and related logic published by Springer, Germany, and other reputed publishers. He is the recipient of many awards including medals for his academic performance and also received many best paper awards. He is the editor-in-chief of International Journal of Reasoning-based Intelligent Systems. Presently, he serves as an associate editor and a member of the Editorial Board of some journals related to the intelligent systems and applications. He has supervised a number of Ph.D. candidates successfully and presented a number of keynote addresses. He has authored/co-authored around 300+ publications including books, research papers, research reports, etc. His research interests include system design using conventional and artificial intelligence techniques, paraconsistent annotated logic, human factors in aviation, intelligent decision making, teaching and learning practices, and cognitive studies. Prof. D.Sc. Andrey Tyugashev is a professor at the Comp. Syst. Dept. of the Samara State Technical University. He is a member of IEEE. He received a Ph.D. in computer science (CASE technologies) from the Samara State Aerospace University, Russia, in 1997 and has been titled as a habilitated professor since 2008 after receiving the Doctor of Science Degree. In 2014–2020, he headed Applied Mathematics & Computer Science Department of the Samara State Transport University. He has more than 180 publications, including 9 books. He is an associate editor of KES Scientific Journal. He is many times winner of Samara National Research University and Samara Region Grants for the best young postdoctoral researches, winner (under the leadership of A.A. Kalentyev) of the Samara Region Scientific Grant, winner of President of Russian Federation’s Grant for young Doctors of Sciences Author of “Programming Languages” handbook published by central Russian Piter Publishing and other books devoted to Artificial Intelligence, Computer-Aided Lifecycle Support in Aerospace Industry, Visual Programming, etc, awarded the sign of the Governor of the Samara region “for achievements in higher education and research” (2019), and awarded by a diploma from European Council of International Society of electronic devices manufacturers. He is an official member of Scientific and Technical Expert Registry of Russian Federation; areas of expertise include software engineering, programming languages, software verification, real-time control systems, etc. He leads research projects in Aerospace Industry since 1991. He is a member of Roskosmos’s Spacecraft Software Standardization Working Group. He was invited plenary speaker at several international conferences (TMPA’2014


Russia, ICEST2018 Bulgaria, ICAIT’2019 China, SCM’2018 Saint Petersburg, etc.). And he has served as a member of Program Committees for several international conferences (SYRCoSE-17, 18, 19, Russia, ICMA2018 Portugal, INISTA2018 Greece, IDC’2019 Saint Petersburg, etc.). He helps as a reviewer for several scientific journals published by Elsevier, Horizon Publishing, etc.

Part I

Invited Papers

Chapter 1

Utilization of the Open-Source Arduino Platform to Control Based on Logic Eτ Jonatas Santos de Souza, Jair M. Abe, Nilson Amado de Souza, Luiz Antônio de Lima, Flávio Amadeu Bernardini, Liliam Sayuri Sakamoto, Angel Antônio Gonzalez Martinez, and Kazumi Nakamatsu

Abstract The introduction of disruptive technologies in agriculture to facilitate the management and increase crop productivity is called "Precision Agriculture". This work sought to develop a prototype that uses the Open-Source Arduino platform interconnected to a soil moisture sensor, using the Paraconsistent Annotated Evidential Logic Eτ (Logic Eτ) for decision-making on the irrigation system. This work proposed to research and understand the concepts of Irrigation System, Arduino Platform, and the use of sensors for monitoring a Home Vegetable Garden using Logic Eτ as a support for decision-making in the irrigation system. The methodology used was design science research, which allowed contributing new knowledge for the construction and materialization of the project. As a result, artefacts were generated that corroborated the creation of a prototype that met the objectives of this work, and it was possible to present an alternative with a low-cost system, easy to learn, accessible to small producers, with low energy consumption, and that helps in the conscious use of water.

1.1 Introduction The use of information technology is increasingly present in the agricultural sector, and studies are currently being conducted to facilitate the management and increase the productivity of crops. The term used to name this phenomenon of technological implementation in the field is "Precision Agriculture" [31].

J. S. de Souza · J. M. Abe (B) · N. A. de Souza · L. A. de Lima · F. A. Bernardini · L. S. Sakamoto · A. A. G. Martinez Graduate Program in Production Engineering, Paulista University – UNIP, São Paulo, Brazil e-mail: [email protected] K. Nakamatsu School of Human Science and Environment, University of Hyogo, Kobe, Japan © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 K. Nakamatsu et al. (eds.), Advanced Intelligent Technologies for Industry, Smart Innovation, Systems and Technologies 285, https://doi.org/10.1007/978-981-16-9735-7_1


According to Carvalho [15], the irrigation process is defined as the artificial application of water to the soil, in adequate amounts, to provide the moisture necessary for the development of the plants grown there and to compensate for the lack or poor distribution of rainfall. Irrigation should not be considered only as the collection, conveyance, and supply of water. The proper use of irrigation requires knowledge of the relationships that exist between the soil, the water, the plant, and the climate [12, 20]. Modern irrigation is quite advanced and has several types of automation; however, the small farmer does not always have full access to these technologies, either because of financial problems or lack of knowledge [25]. With the constant population increase in Brazil, especially since the 1960s and intensifying in recent decades, the country now ranks fifth among the most populous countries on the planet, behind only China, India, the United States, and Indonesia. According to the Brazilian Institute of Geography and Statistics (IBGE), on April 08 the Brazilian population passed the mark of 212,925,930 inhabitants [27], a figure that is updated on average every 20 s. Due to this population increase, urban regions grow in perimeter and reduce the space available for planting; the consequences are an increase in the consumption of water, energy, and food. It is therefore necessary to rethink the use of urban or domestic space in terms of planting. With this population growth, many people are turning to planting at home as an alternative, to produce herbs, vegetables, and other foods that are easy to grow in small spaces. Paraconsistent Logic is included among the so-called Non-Classical Logics because some of the fundamental principles of classical logic [22], such as the principle of contradiction [26], are not valid in this category of logic. The Paraconsistent Annotated Evidential Logic Eτ (Logic Eτ) [2] is a class of Paraconsistent Logic, which will be employed in the decision-making of irrigation in the home garden, solving questions regarding the state of the soil, such as: is the state of the soil half dry or half wet? How should the irrigation system act in this case? Arduino is a free hardware prototyping platform [9]: a single-board microcontroller (SoC, System-on-a-Chip), with built-in input and output support and a specific programming language, that aims to help create prototypes that are easily accessible, low cost, and easy to use for professionals and beginners who do not have access to more sophisticated microcontrollers [10]. The paper is composed of five sections. Section 1.2 presents the theoretical framework about the concepts of the Open-Source Arduino Platform, Irrigation System, and Logic Eτ; Sect. 1.3 describes the methodology; Sect. 1.4 presents the results obtained; and Sect. 1.5 brings the conclusion. The purpose of this work is to propose a model of an irrigation system that can be controlled and automated and that allows monitoring inputs and controlling outputs with the help of the Paraconsistent Annotated Evidential Logic Eτ with the use of the


Open-Source Arduino Platform, aiming at saving and controlling water, keeping the system low cost, and allowing the production of one's own food in the available planting space.

1.2 Theoretical Reference This section presents the concepts used in the development of the research, which contribute to the understanding of the operation of the irrigation system, the concepts of Logic Eτ, and the Arduino platform.

1.2.1 Irrigation System According to the glossary of terms used in agricultural, forestry, and environmental sciences [35], an irrigation system is defined as a practice that consists of providing water to the soil in an artificial and controlled manner to make the soil suitable for agricultural cultivation. The development of several ancient civilizations can be traced back to the success of irrigation. Ancient irrigation had two major impacts: food supply and population increase [32]. In Modern Agriculture, which emerged with the advent of the Industrial Revolution at the end of the eighteenth century, manual labour was replaced by steam machinery. Because of this substitution, there was an increase in the production of agricultural products. However, demand did not follow the same pace, and there was a decrease in manual labour. The population that worked in agriculture left the rural areas and went to the capital in search of employment, giving rise to the phenomenon called the Rural Exodus, which caused an increase in water consumption.

1.2.2 Irrigation Methods and Systems The term method can be defined as the way of acting or doing things [41], and the term system refers to a set composed of objects, parts, or elements that interact to perform a particular function [35]. As time has passed, new types of irrigation systems have emerged in the modern era. Four main methods of irrigation can be defined: Surface, Sprinkler, Localized, and Subsurface [41].


Fig. 1.1 Localized drip irrigation systems [13]

1.2.3 Localized Irrigation The localized water distribution characterizes this irrigation method, i.e., water is applied in small volumes with a high frequency close to the plant. Unlike the other methods mentioned above, each plant has an individual distribution. In localized drip irrigation systems, water is applied by a dripper that releases water in the form of drops around the plant (Fig. 1.1). The drip irrigation system was chosen because it allows the plant to absorb nutrients from the water better, gives greater control for conscious water use, ease of management, and less time spent on operations, and has less water loss through evaporation from the soil [41].

1.2.4 Paraconsistent Logic Roughly speaking, Paraconsistent Logic is a logic that can serve as the underlying logic of inconsistent but non-trivial theories. Thus, such a logic derogates the law of non-contradiction [2, 3]. In the mid-1950s, the Polish logician S. Jaśkowski and the Brazilian logician N. C. A. da Costa presented the first paraconsistent logics, and, together with others, they are considered the founders of Paraconsistent Logic.


With the help of Paraconsistent Logic, one can handle inconsistent, but not trivial information [38], which is useful in many applications, particularly in this paper.

1.2.5 Paraconsistent Annotated Evidential Logic Eτ The Paraconsistent Annotated Evidential Logic Eτ is a type of Paraconsistent Logic that has in its language atomic propositions of type p(μ, λ), where μ represents the value of the favourable degree of evidence and λ represents the value of the degree of contrary evidence, both limited to real values between 0 and 1 [1, 4, 6, 33]. The pair (μ, λ) is called the annotation constant. p(μ, λ) can be intuitively read: "It is assumed that p's favourable evidence is μ and unfavourable evidence is λ" [4, 28]. After obtaining the values of μ and λ, the following formulas are used to obtain the values of the Degree of Certainty (Gcer) and the Degree of Uncertainty (Gunc):

Gcer(μ, λ) = μ − λ (1.1)

Gunc(μ, λ) = μ + λ − 1 (1.2)

After obtaining the Degree of Certainty and the Degree of Uncertainty, the Unit Square of the Cartesian Plane [16], known as the decision-making lattice (τ) [4], was developed, which illustrates a representation of the resulting logical states (Fig. 1.2). Table 1.1 describes the meaning of the symbols of the logical states that belong to the lattice (τ) seen in Fig. 1.2.

Fig. 1.2 Illustration of the lattice (τ) [4]

Table 1.1 Resultant logical states [4]

Logical states | Symbol
True           | V
False          | F
Inconsistent   | T
Paracomplete   | ⊥

Fig. 1.3 Representation of the para-analyzer algorithm [17, 38]

1.2.6 Para-Analyser Algorithm The Para-Analyser Algorithm is a set of instructions that allows the analysis of propositions to obtain a resulting state (Fig. 1.3). With the Para-Analyser Algorithm, it has been possible to develop studies in Robotics [42], Neural Networks [39], and Health [5].
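As an illustration of how Eqs. (1.1) and (1.2) feed the decision lattice, the sketch below implements a minimal para-analyser in Python. The 0.5 requirement level, the collapse of the non-extreme lattice regions into a single "boundary" result, the example evidence pairs, and the function name are illustrative assumptions, not the implementation used in the prototype.

```python
def para_analyser(mu: float, lam: float, level: float = 0.5) -> str:
    """Minimal para-analyser sketch: classify a proposition annotated with
    favourable evidence mu and contrary evidence lam (both in [0, 1])."""
    g_cer = mu - lam          # Degree of Certainty, Eq. (1.1)
    g_unc = mu + lam - 1.0    # Degree of Uncertainty, Eq. (1.2)

    if g_cer >= level:
        return "V"            # True: enough favourable evidence
    if g_cer <= -level:
        return "F"            # False: enough contrary evidence
    if g_unc >= level:
        return "T"            # Inconsistent: both evidences are high
    if g_unc <= -level:
        return "⊥"            # Paracomplete: both evidences are low
    return "boundary"         # non-extreme region: no firm decision yet

# Example annotations (mu, lam), e.g. evidence that "the soil is dry" coming
# from a moisture sensor and from a second, independent source (values are
# illustrative only).
for mu, lam in [(0.9, 0.1), (0.2, 0.85), (0.9, 0.8), (0.1, 0.05), (0.6, 0.4)]:
    print((mu, lam), "->", para_analyser(mu, lam))
```

Running the sketch prints V, F, T, ⊥, and boundary for the five example pairs, matching the regions of the lattice in Fig. 1.2.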

1.2.7 Arduino Arduino is a platform for prototyping electronic projects with input and output support using an Atmel AVR microcontroller on a single board (Fig. 1.4) [9]. The Arduino project was created by Massimo Banzi, David Cuartielles, Tom Igoe, Gianluca Martino, and David Mellis in 2005 [14, 23] in Ivrea, Italy, to create tools that are accessible, low cost, flexible, and easy to use for beginners and professionals, or for small producers who would not be able to afford the more sophisticated controllers and more complicated tools [14].


Fig. 1.4 Arduino Uno Rev3 [9]

Arduino is an open-source platform that is divided into two parts: the hardware and the software. The hardware, the Arduino board, is open hardware that allows sensors to be connected to it in order to monitor incoming data and control the output through one or more actions. The term open source is used when the developer makes information about the project available; in the software, the source code of the program is made available [34] so that changes can be made in the code, such as applying updates, removing bugs, optimizing it, or customizing the layout of the development or user interface [8, 40]. Arduino is being used in several areas, such as Robotics [18], the Environment [7, 29], Public Transportation [37], and Education [30], among other areas of knowledge.

1.3 Methodology The principal methodology used was the process model of Design Science Research of Vaishnavi, Kuechler, and Petter [44], which allows contributing new knowledge through the construction of artefacts that materialize the project. There are five stages of the Knowledge Utilization Process (Fig. 1.5): Awareness of the Problem, Suggestion, Development, Evaluation, and Conclusion. These five stages have already been applied in other projects [17] and have demonstrated their effectiveness for efficiently developing the prototype. Another methodology used in this work meets one of the aspects of Exploratory Research [43], in which the objective is to familiarize oneself with the subject: the researcher searched for tutorials and handouts about Arduino in order to be able


Fig. 1.5 General framework of strategy in design science research [44]

to propose a low-cost, accessible prototype that is easy to learn and that will help save water and electricity and support food production on a small scale. Arduino-based projects require little or no programming skills or knowledge of electronics theory, and, for the most part, this practicality is simply acquired along the way. Moreover, documentation and tutorials are already freely available on the Internet to make learning easier. The Arduino platform is related to the concept of experiential learning [19], which argues that for there to be integration between reality and learning, five primary conditions are required [11]: learning by doing, conscious reconstruction of the experience, learning by association, diversified learning, and learning integrated with life and our reality. In turn, this corroborates the term DIY (Do It Yourself) [36], which is used to refer to the method of building, modifying, and repairing things [24] without direct help from professionals.

1.4 Results In the Suggestion stage (Fig. 1.6), the concepts of Design Thinking [45] were applied, generated from the studies around the concepts addressed by De Souza et al. [17], as part of the process to meet the main objective. In the Development stage (Fig. 1.6), the Open-Source program Fritzing [21] was used to draw the schematic of the prototype components (Fig. 1.7), thus showing the components that are being used and assisting in the performance of functionality testing.


Fig. 1.6 Design of the proposed model

Fig. 1.7 Schematic of the prototype’s electronic components


Fig. 1.8 Log screen with garden status

Regarding the functionality of the irrigation system, a test was performed using the mint herb, because it is used as a medicinal herb and to make tea and requires care with the amount of water; mint can be planted at any time of year, being resistant to cold and heat if the soil is at an adequate humidity. Regarding the energy consumption of the project, it is sufficient to calculate as follows: the Arduino's operating voltage (5 V) times the maximum operating current in amperes (50 mA + 40 mA = 90 mA, or 0.09 A) gives the power (0.45 W here; it can reach 5 W at maximum current); after this calculation, multiply by the time of use in seconds. The total value of this project was 291.30 BRL (Brazilian Real); this is the initial value for the acquisition of tools, and from the moment that one already has some of the equipment, only the maintenance cost per piece remains. The initial test results were obtained (Fig. 1.8), showing the viability of the proposed system, which can provide multiple and adaptive results.
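As a worked example of the energy calculation just described, the short Python sketch below converts the stated voltage and current, plus a usage time, into energy and an approximate monthly cost; the daily usage time and the electricity tariff are assumptions for illustration only.

```python
# Energy-consumption sketch for the prototype (illustrative values).
VOLTAGE_V = 5.0            # Arduino operating voltage, as stated in the text
CURRENT_A = 0.090          # 50 mA + 40 mA of maximum operating current
HOURS_PER_DAY = 24.0       # assumed: the prototype monitors the garden all day
TARIFF_BRL_PER_KWH = 0.75  # assumed electricity tariff, BRL per kWh

power_w = VOLTAGE_V * CURRENT_A                 # 5 V * 0.09 A = 0.45 W
energy_wh_day = power_w * HOURS_PER_DAY         # watt-hours per day
energy_kwh_month = energy_wh_day * 30 / 1000.0  # kWh over a 30-day month
cost_month = energy_kwh_month * TARIFF_BRL_PER_KWH

print(f"Power: {power_w:.2f} W")
print(f"Energy: {energy_kwh_month:.3f} kWh/month")
print(f"Estimated cost: {cost_month:.2f} BRL/month")
```

With these assumed values, the prototype consumes roughly 0.32 kWh per month, which illustrates the low energy consumption claimed for the system.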

1.5 Conclusion Arduino makes it possible to develop several systems in the most diverse areas of study. With the use of Arduino, benefits can be obtained in energy savings and equipment costs. The differential of this work is the use of Logic Eτ to assist in the decision-making for the irrigation system and to allow the prototype's duplication in other scenarios. The use of Arduino as a receiver of the data read by the sensors and as a controller, along with the Para-Analyser Algorithm as an information processor for decision-making about drip irrigation, proved to be satisfactory in terms of cost (acquisition and maintenance), energy consumption, and the control of the hydric system. The objective of the prototype was to minimize the use of water by an irrigation system in the cultivation of one's own food. The prototype was built using low-cost parts, making it accessible to anyone. There are no strict construction guidelines; its


expansion requires little effort, as the hardware is designed to scale quickly by simply adding more sensors. After the accomplishment of this work and the fulfilment of its purposes, the importance and the possibility of using the proposed prototype, based on Paraconsistent Logic, in the management of water resources and the efficient use of water became clear, employing low-cost tools accessible to small producers. The contribution of this research is the implementation of a multidisciplinary project that connects information technology with agriculture. Acknowledgements This study was funded in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001, Process No. 23038.013648/2018-51.

References 1. Abe, J.M.: Fundamentos da lógica anotada [Doutorado, Universidade de São Paulo] (1992). http://dedalus.usp.br/F/?func=direct&doc_number=000736471 2. Abe, J.M. (Ed.).: Paraconsistent intelligent-based systems: new trends in the applications of paraconsistency, vol. 94. Springer International Publishing (2015). https://doi.org/10.1007/ 978-3-319-19722-7 3. Abe, J.M. (Ed.).: Tópicos de Sistemas Inteligentes Baseados em Lógicas Não Clássicas. Instituto de Estudos Avançados (2016). http://www.iea.usp.br/publicacoes/ebooks/topicos-de-sis temas-inteligentes-baseados-em-logicas-nao-classicas 4. Abe, J.M., Akama, S., Nakamatsu, K.: Introduction to Annotated Logics, vol. 88. Springer International Publishing (2015). https://doi.org/10.1007/978-3-319-17912-4 5. Abe, J.M., Da Silva Lopes, H.F., Anghinah, R.: Paraconsistent neurocomputing and biological signals analysis. In: Abe, J.M. (Ed.) Paraconsistent Intelligent-Based Systems, vol. 94, pp. 273– 306. Springer International Publishing (2015). https://doi.org/10.1007/978-3-319-19722-7_11 6. Akama, S. (Ed.).: Towards Paraconsistent Engineering, vol. 110. Springer International Publishing (2016). https://doi.org/10.1007/978-3-319-40418-9 7. Ali, A.S., Zanzinger, Z., Debose, D., Stephens, B.: Open-source building science sensors (OSBSS): a low-cost Arduino-based platform for long-term indoor environmental data collection. Build. Environ. 100, 114–126 (2016). https://doi.org/10.1016/j.buildenv.2016. 02.010 8. Androutsellis-Theotokis, S., Spinellis, D., Kechagia, M., Gousios, G.: Open-source software: a survey from 10,000 feet. Found. Trends® Technol. Inf. Oper. Manag. 4(3–4), 187–347 (2010). https://doi.org/10.1561/0200000026 9. ARDUINO UNO REV3: Arduino Official Store (2020). https://store.arduino.cc/usa/arduinouno-rev3. Accessed 08 Apr 2021 10. ARDUINO: [Arduino] (2018). https://www.arduino.cc/en/Guide/Introduction. Accessed 08 Apr 2021 11. Beard, C.: The Experiential Learning Toolkit: Blending Practice with Concepts. Kogan Page (2010) 12. Bernardo, S., Mantovani, E.C., Silva, D.D.D.A, Soares, A.A.: Manual de Irrigação (9ª ed.). Editora UFV (2019) 13. BOAS PRÁTICAS AGRONÔMICAS: Irrigação, uma prática que aumenta a produtividade no campo. Boas Práticas Agronômicas (2019). https://boaspraticasagronomicas.com.br/boas-pra ticas/irrigacao/


14. Calvo, R., Alejos, R.: Arduino: The Documentary [Documentary]. LABoral Centro de Arte y Creación Industrial (2010). https://vimeo.com/18539129 15. Carvalho, D.F.: ENGENHARIA DE ÁGUA E SOLO. Universidade Federal Rural Do Rio De Janeiro (2010) 16. Costa, N.C.A., Abe, J.M., J.I.F., Murolo, A.C., LeitE, C.F.S.: Logica Paraconsistente Aplicada. ATLAS (1999) 17. De Souza, J.S., Abe, J.M., De Lima, L.A., Nakamatsu, K.: A purpose of a smart vegetable garden model based on paraconsistent annotated evidential logic Eτ. In: Nakamatsu, K., Kountchev, R., Aharari, A., El-Bendary, N., Hu, B. (Eds.) New Developments of IT, IoT and ICT Applied to Agriculture, vol. 183, pp. 11–18. Springer Singapore (2021). https://doi.org/10.1007/978981-15-5073-7_2 18. Di Tore, S., Todino, M., Sibilio, M.: Disuffo: design, prototyping and development of an opensource educational robot. Form@re Open Journal per la formazione in rete 19(1), 106–116 (2019). https://doi.org/10.13128/FORMARE-24446 19. Felicia, P. (Ed.).: Handbook of Research on Improving Learning and Motivation Through Educational Games: Multidisciplinary Approaches. Information Science Reference (2011) 20. Ferrarezi, R.S., TestezlaF, R.: Performance of wick irrigation system using self-compensating troughs with substrates for lettuce production. J. Plant Nutr. 39(1), 147–161 (2016). https:// doi.org/10.1080/01904167.2014.983127 21. FRITZING: Fritzing (2019). http://fritzing.org/. Accessed 08 Apr 2021 22. Gamut, L.T.F.: Logic, Language, and Meaning. University of Chicago Press (1991) 23. Geddes, M.: Arduino Project Handbook. No Starch Press (2016) 24. Gelber, S.M.: Do-it-yourself: constructing, repairing and maintaining domestic masculinity. Am. Q. 49(1), 66–112 (1997). https://doi.org/10.1353/aq.1997.0007 25. Guimarães, V.G.: Automação e monitoramento remoto de sistemas de irrigação visando agricultura familiar [Universidade de Brasília] (2011). https://bdm.unb.br/handle/10483/15621 26. Hamilton, A. G.: Logic for Mathematicians. Cambridge University Press (1978) 27. IBGE: Instituto Brasileiro de Geografia e Estatística. Projeção da população - 2021. Disponível em (2021). https://www.ibge.gov.br/apps/populacao/projecao/index.html. Accessed 08 Apr 2021 28. Lima, L.A., AbE, J.M., Martinez, A.A.G., Santos, J., Albertini, G., Nakamatsu, K.: The productivity gains achieved in applicability of the prototype AITOD with paraconsistent logic in support in decision-making in project remeasurement. Procedia Comput. Sci. 154, 347–353 (2019). https://doi.org/10.1016/j.procs.2019.06.050 29. Lockridge, G., Dzwonkowski, B., Nelson, R., Powers, S.: Development of a low-cost Arduinobased Sonde for coastal applications. Sensors 16(4), 528 (2016). https://doi.org/10.3390/s16 040528 30. Lopez-iturri, P., Celaya-EcharrI, M., Azpilicueta, L., Aguirre, E., Astrain, J., Villadangos, J., Falcone, F.: Integration of autonomous wireless sensor networks in academic school gardens. Sensors 18(11), 3621 (2018). https://doi.org/10.3390/s18113621 31. MAPA: Ministério da Agricultura, Pecuária e Abastecimento. Agricultura de precisão/Ministério da Agricultura, Pecuária e Abastecimento. Secretaria de Desenvolvimento Agropecuário e Cooperativismo. – Brasília: Mapa/ACS, 2013. 36 pp. Brazil. ISBN 978-85-99851-90-6 32. Mazoyer, M., Roudart, L.: História das agriculturas no mundo: Do neolítico à crise contemporânea. Ed. UNESP: NEAD (2009) 33. Nakamatsu, K., Abe, J.M.: The development of paraconsistent annotated logic programs. Int. J. Reason. Based Intell. Syst. 1(1/2), 92 (2009). 
https://doi.org/10.1504/IJRIS.2009.026721 34. OPEN-SOURCE INITIATIVE: The open-source definition (Annotated). Open-Source Initiative (2007). https://opensource.org/docs/definition.php 35. Ormond, J.G.P. (Comp.): Glossário de termos usados em atividades agropecuárias, florestais e ciências ambientais. BNDES (2006) 36. Penzenstadler, B., Plojo, J., Sanchez, M., Marin, R., Tran, L., Khakurel, J.: The afordable DIY resilient smart garden kit. In: Proceedings of the 2018 Workshop on Computing within Limits, pp. 1–10 (2018). https://doi.org/10.1145/3232617.3232619


37. Rodrigues, S.G., Shimoishi, J.M.: Aplicação do Método Paraconsistente de Decisão na Seleção de Tecnologias de Transporte Público Urbano. J. Transp. Lit. 9(3), 20–24 (2015). https://doi. org/10.1590/2238-1031.jtl.v9n3a4 38. Silva Filho, J.I., Torres, G.L., Abe, J.M.: Uncertainty Treatment Using Paraconsistent Logic: Introducing Paraconsistent Artificial Neural Networks. IOS Press (2010) 39. Souza, S., Abe, J.M.: Paraconsistent artificial neural networks and aspects of pattern recognition. In: Abe, J.M. (Ed.): Paraconsistent Intelligent-Based Systems, vol. 94, pp. 207–231. Springer International Publishing (2015). https://doi.org/10.1007/978-3-319-19722-7_9 40. Stallman, R.: Why open source misses the point of free software (2020). https://www.gnu.org/ philosophy/open-source-misses-the-point.en.html 41. Testezlaf, R.: Irrigação: Métodos, sistemas e aplicações. Unicamp/FEAGRI (2017) 42. Torres, C.R., Reis, R.: The new hardware structure of the Emmy II Robot. In: Abe, J.M. (Ed.): Paraconsistent Intelligent-Based Systems, vol. 94, pp. 87–103. Springer International Publishing (2015). https://doi.org/10.1007/978-3-319-19722-7_4 43. Tozoni-reis, M.F.C.: Metodologia da Pesquisa Cientifica (2ª ed.). IESDE Brazil S.A (2009) 44. Vaishnavi, V., Kuechler, W., Pette, R.S. (Eds.): Design science research in information systems (created in 2004 and updated until 2015 by Vaishnavi and Kuechler). Last updated by Vaishnavi and Petter 2019. Disponível em. http://www.desrist.org/design-research-in-informationsystems/. Accessed 09 Apr 2021 45. Wenzel, M., Meinel, C.: Prototyper: a virtual remote prototyping space. In: Meinel, C., Leifer, L. (Eds.) Design Thinking Research, pp. 171–184. Springer International Publishing (2020). https://doi.org/10.1007/978-3-030-28960-7_11

Chapter 2

Ontology Implementation of OPC UA and AutomationML: A Review Johnny Alvarado Domínguez, Rachel Pairol Fuentes, Marcela Vegetti, Luciana Roldán, Silvio Gonnet, and Mario José Diván

Abstract Industry 4.0 requires standardized information models for heterogeneous platform integration. Two standards for interoperability are IEC 62541, a machine-to-machine communication protocol expressed in the OPC Unified Architecture (OPC UA) format, and IEC 62714, which describes production plants or plant components expressed in the AutomationML format. Despite their wide adoption, these languages lack formal semantics for automatic data interpretation. This work describes a Systematic Mapping Study focused on recent studies on the transformation between the OPC UA and AutomationML languages and OWL or RDF. Articles from conferences and journals from 2015 onwards are queried and analyzed from the Scopus, IEEE, and ACM databases. The study is limited to articles written in the English language. From 43 documents, 16 are retained following the inclusion and exclusion criteria. As a result of this study, the main challenges facing Industry 4.0 are identified: the lack of support for the ontology engineering tasks performed by the knowledge engineer, the representation of the semantic digital twin, data duplication, and communication overhead. Germany accounts for 87% of the retained articles on the subject.

J. A. Domínguez · R. P. Fuentes · M. Vegetti · L. Roldán · S. Gonnet INGAR (UTN - CONICET), 3000 Santa Fe, Argentina e-mail: [email protected] R. P. Fuentes e-mail: [email protected] M. Vegetti e-mail: [email protected] L. Roldán e-mail: [email protected] S. Gonnet e-mail: [email protected] M. J. Diván (B) National University of La Pampa, 6300 Santa Rosa, Argentina e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 K. Nakamatsu et al. (eds.), Advanced Intelligent Technologies for Industry, Smart Innovation, Systems and Technologies 285, https://doi.org/10.1007/978-981-16-9735-7_2


2.1 Introduction Industry 4.0 (I4.0) is an industrial environment composed of cyber-physical systems jointly with digital representations of the physical world. These systems monitor and orchestrate tangible processes [3]. To perform such activities, I4.0 requires standardized information models for heterogeneous platform integration. Therefore, an essential aspect of I4.0 is interoperability. In 2015, a general reference architecture [19] was proposed to create a common understanding of I4.0 concepts. It is known as the Reference Architecture Model of Industry 4.0 (RAMI 4.0). One of the main concerns of RAMI 4.0 is semantic information modeling [13]. In this context, different standards are employed to increase interoperability. On the one hand, IEC 62714 enables the specification of production plants or plant components expressed in the Automation Mark-up Language (AutomationML/AML). This standard supports data exchange between different disciplines during the engineering phase. On the other hand, IEC 62541 defines a machine-to-machine communication protocol expressed in the OLE for Process Control Unified Architecture (OPC UA) language. This standard is employed in the operational phase [2]. Despite their wide adoption, the AutomationML and OPC UA languages lack formal semantics for automatic data interpretation. Ontologies can provide such formal semantics [21]. Therefore, AutomationML and OPC UA can be enriched from the semantic point of view using Semantic Web languages, such as the Resource Description Framework (RDF) or the Web Ontology Language (OWL). These formalized models will enhance the data processing capabilities of the physical components [13]. This is an essential aspect, mainly when considering how data are collected. Supposing that all the data sources are adequately calibrated, each single, heterogeneous, and distributed device provides raw data that need to be adequately interpreted and analyzed in context to support a decision-making process in real time. Moreover, once the decision is obtained, a recommendation or course of action needs to be provided, helping the production processes reach their aims. There are different secondary studies available discussing I4.0. A good example is the work of Hofer [4], which introduces a systematic mapping study (SMS) that identifies architecture and design issues in software engineering in the context of cyber-physical systems for I4.0. However, there are no reviews or secondary studies about the ontology implementation of standards in the context of I4.0. Therefore, as its main contribution, this work describes the principal applications of ontology implementation in I4.0 environments following an SMS methodology. The purpose and benefits of ontology implementation are synthesized jointly with the main challenges. This proposal represents a reference of interest for practitioners, academics, researchers, and students in the field. This work is composed of five sections. Section 2.2 explains the methodology involved in the searching, filtering, and processing of the articles. After that, Sect. 2.3 outlines the analysis of the results. Section 2.4 introduces some discussions of particular interest in this area. Finally, some conclusions and final remarks are provided.


2.2 Research Method This contribution introduces the results of an SMS that has been carried out to understand the degree of formalization of the I4.0 standards OPC UA and AutomationML by RDF and OWL ontologies. The SMS has been executed following the guidelines of Petersen et al. [16]. Primary studies from conferences and journals from 2015 onwards are queried and analyzed from the Scopus, IEEE, and ACM databases. The objectives of this study were to: (i) identify the main contributions about the ontology implementation of the AutomationML and OPC UA standards and (ii) compare the different implementations and points of view. The results of this SMS are used for identifying the main challenges of the formalization of I4.0 standards. Therefore, the research questions are the following:
• (RQ1) What are the main contributions associated with the primary studies?
• (RQ2) Which research methods have been employed to reach the contributions?
• (RQ3) Which Semantic Web languages have been employed for implementing the proposed ontologies?
• (RQ4) What are the essential or critical roles that ontologies play in the formalization of I4.0 standards?
• (RQ5) What are the benefits of using ontologies for the formalization of I4.0 standards?
• (RQ6) What are the challenges of using ontologies in the formalization of I4.0 standards?
The main concern is associated with the ontology implementation and the target languages for such an aim. For such a reason, the RDF and OWL keywords were incorporated in the search string. Furthermore, OPC UA and AutomationML were added. Therefore, the resulting keywords have been combined using Boolean operators to generate the following search string: ((RDF OR OWL) AND ("OPC UA" OR AutomationML)). The search string was applied to query the IEEE, Scopus, and ACM digital libraries. Queries were run on August 5, 2021, and returned 53 elements. A synthesis of the results is described in Table 2.1. Ten records were removed from the original result list because they were duplicated. Thus, a list containing 43 contributions remained for selection. After that, we established the following inclusion criteria: (a) articles need to be written in the English language; (b) the publishing time is bounded to 2015 onwards. 2015 has been chosen as the starting year because in that year RAMI 4.0 was presented for the first time in Germany as one of the reference architecture models for I4.0. The exclusion criteria are associated with those articles that do not fit well with the aim behind the main research question. For such a reason, surveys and reviews are removed from the query result. After the application of the inclusion and exclusion criteria, 30 elements are retained. Thus, a detailed reading of every article is carried out to analyze the alignment between an article and the research questions. This is important because some articles mention the terms even though they are not

Table 2.1 Search strings and contributions retrieved by each repository

Repository | Query terms                                                   | Contributions retrieved
IEEE       | ((RDF OR OWL) AND ("OPC UA" OR AutomationML))                 | 18
ACM        | ((RDF OR OWL) AND ("OPC UA" OR AutomationML))                 | 18
Scopus     | TITLE-ABS-KEY (((RDF OR OWL) AND ("OPC UA" OR AutomationML))) | 17
Total      |                                                               | 53

associated with the subject specifically. As a result, a total of 16 contributions are selected. They compose the final list under analysis in this work.

2.3 Results Analysis Figure 2.1 shows the distribution of selected contributions by years: one contribution in 2015 about AutomationML ([12]); four contributions in 2018 ([2, 5, 8, 11]); ten contributions in 2019 ([1, 6, 7, 9, 13–15, 17, 18, 20], most of them about OPC UA); and one contribution about OPC UA in 2020 ([10]). Furthermore, the figure illustrates the selected contributions by publication type. As it is seen from the figure, conference papers hold most of the contributions with 81.25%. 9 of 13 conference papers were published in “IEEE International Conference on Emerging Technologies and Factory Automation” (ETFA), an Annual Conference of the “IEEE Industrial Electronics Society”. Furthermore, 2 of 3 retrieved journals are journals that publish conference papers. The authors of 14 selected studies are researchers from Germany

Fig. 2.1 Distribution of contributions included in the SMS (2015: [12]; 2018: [2, 5, 8, 11]; 2019: [1, 6, 7, 9, 13–15, 17, 18, 20]; 2020: [10])

Table 2.2 Retained articles

Contributors           | Standard             | Citations | Research method
Kovalenko et al. [12]  | AutomationML         | 16        | Experience report
Katti et al. [11]      | OPC UA               | 13        | Solution
Perzylo et al. [15]    | OPC UA               | 10        | Solution
Bunte et al. [2]       | AutomationML, OPC UA | 7         | Evaluation
Schiekofer et al. [18] | OPC UA               | 7         | Solution
Schiekofer et al. [17] | OPC UA               | 5         | Solution
Bakakeu et al. [1]     | OPC UA               | 5         | Solution
Hua and Hein [8]       | AutomationML         | 4         | Solution
Steindl et al. [20]    | OPC UA               | 3         | Solution
Patzer et al. [14]     | OPC UA               | 3         | Evaluation
Hua and Hein [7]       | AutomationML         | 2         | Solution
Majumder et al. [13]   | OPC UA               | 1         | Evaluation
Katti et al. [9]       | OPC UA               | 1         | Solution
Hua and Hein [5]       | AutomationML         | 1         | Solution
Katti et al. [10]      | OPC UA               | 0         | Solution
Hua and Hein [6]       | AutomationML         | 0         | Solution

(87%) and 2 are from Austria (13%). This is expected to a certain extent, since I4.0 is a German initiative. Table 2.2 describes the 16 retained articles, ordered in descending order of citations. In the following paragraphs, we share the findings of our review. It is interesting to observe in Fig. 2.1 how consolidated research gradually gains participation in journals; however, the dominance of partial results in conferences is still observed. OPC UA has been gaining attention since 2018, in contrast to AutomationML. Table 2.2 offers an interesting view from the citation perspective: the most cited articles correspond to the OPC UA standard, followed by AutomationML or a combination of the two. To answer RQ1, the list of retained articles is synthesized. We grouped these articles according to the AutomationML and OPC UA standards. Ontology implementation using AutomationML-based information models. Kovalenko et al. [12] propose a representation of AutomationML by an


ontology. They compare the transformation process with a model-driven engineering process. The obtained OWL ontology specifies all concepts from a given AutomationML definition and enables reasoning- and querying-supported activities, such as consistency checking. Hua and Hein [7] introduce a bidirectional translation between AutomationML and OWL. They identify a subset of OWL building blocks that can be specified in AutomationML. This bidirectional translation enables domain experts to visualize OWL complex classes as AutomationML concept models. A complementary approach is presented in Hua and Hein [5, 8]. These authors apply a concept learning approach, using Inductive Logic Programming (ILP) and the DL-Learner framework, to support the derivation of the intended meaning of AutomationML concepts. The proposals use an OWL ontology as background knowledge. These proposals are integrated in [6], where a semi-automated learning system of engineering concepts is proposed. These concepts are learned from AutomationML data. Ontology implementation of OPC UA information models. Perzylo et al. [15] use OWL to specify a semantic description language for OPC UA NodeSets for the creation of digital twins of manufacturing resources. Majumder et al. [13] provide a mapping between the modeling elements of OPC UA and the building blocks of the RDF and OWL languages. Bunte et al. [2] propose a representation of OWL ontologies in OPC UA and AutomationML with the purpose of employing data through the whole industry life cycle. Therefore, this proposal tries to integrate AutomationML and OPC UA. In [14], an OWL ontology to represent the OPC UA information model is also proposed. A TBox is generated by mapping each OPC UA element of an OPC UA schema to an OWL concept, such as a class, object property, or data property. This common strategy is complemented with the manual introduction of context information. Furthermore, the ABox is generated by the execution of a Python program that populates the proposed ontology. The base ontology is updated because the authors are optimizing their ontology for security analysis. Schiekofer and Weyrich [18] present how the OPC UA model can be queried. The proposal is based on the ontological representation of OPC UA specified in [17], and SPARQL is used as the querying language. In [17], a formal transformation of OPC UA models into OWL is proposed. A similar approach is introduced in Bakakeu et al. [1]. Therefore, existing reasoners can be used to infer new knowledge. The previous approaches have the disadvantage of data redundancy and communication overhead. Thus, Steindl et al. [20] have implemented a linked data adapter that extends a SPARQL query engine and provides data from an OPC UA server as RDF triples. Therefore, data access is on demand, reducing data redundancy. Katti et al. [11] propose the specification of orchestration plans in a flexible way. The work describes a service ontology based on OWL-S (the semantic specification of web services). This service ontology is composed of the "profile" ontology, the "process model" ontology, and the "grounding" ontology. The first two ontologies specify the capabilities of the OPC UA methods, and the grounding ontology specifies method invocation details. This contribution is complemented in [9], where SAWSDL is used to describe the semantics of web services. In [10], Katti et al. automate the


translation of other manufacturing resources, that are not specified in OPC UA, into an OWL ontology. RQ2. Research methods are employed to reach contributions. The research types were taken from [16]. The contributions [2, 13, 14] were classified as evaluation research, in which existing semantic web techniques were evaluated to represent the studied standards. Article [12] is the only contribution that was classified as an experience report. The remaining contributions were solution proposals (Table 2.2, Research method column). RQ3. Ontology implementation language. OWL was used as the ontology implementation language in all contributions (Fig. 2.2a). Katti et al. [9, 11] used OWL-S and SAWSDL for specifying services. The articles [9, 10, 12] used SWRL for specifying business rules (19% of contributions). Finally, 6 proposals (38% of retained articles) [1, 12, 13, 15, 18, 20] employed SPARQL for retrieving knowledge from the data sources. RQ4. Main roles that ontologies play in the formalization of I4.0 standards. Most of the contributions (94%) focused on knowledge representation (Fig. 2.2b). It is expected in a certain way due to ontologies serving this purpose. The contribution [2] evaluates the use of ontology as smart service integration and AutomationML and OPC UA data integration. Katti et al. [9, 11] specify a set of methods of OPC

Fig. 2.2 a Languages, b role, c benefits, and d challenges of the contributions included in the SMS


UA with the purpose of synthesizing flexible orchestration plans. Schiekofer and Weyrich [18] analyze the use of an ontology as a query language and present it as an alternative to formulating native OPC UA queries (knowledge retrieval in Fig. 2.2b). RQ5. Benefits obtained by using ontologies. Most of the contributions expect to use the ontology as a knowledge base that can be queried and can support analytical data tasks (i.e., querying and reasoning in Fig. 2.2c, 38% of retrieved articles). Another common expected benefit of its use is consistency checking (Fig. 2.2c, 25% of selected contributions). Katti et al. [9, 11] aimed to obtain a flexible framework for process orchestration (13% of articles). Perzylo et al. [15] aim to reach a semantic digital twin as a semantic specification of the hardware and software components of manufacturing resources. Patzer et al. [14] propose to use the obtained ontology for security analysis; however, the expected benefit of this approach is not validated in the proposal. The contributions [5–8] have employed OWL ontologies in a significantly different way than the rest of the studies we analyzed: they have used OWL ontologies of AutomationML as background knowledge for the interactive learning of main concepts (19% of retrieved contributions). RQ6. Challenges of employing ontologies. Most of the studies did not report challenges or lessons learned regarding their ontology implementation (Not Specified category in Fig. 2.2d). Steindl et al. [20] is the only contribution that emphasizes the disadvantages of data redundancy (Duplicating data category in Fig. 2.2d); therefore, they propose an extension of the SPARQL query engine that enables data access on demand. The study [9] reports some problems related to communication overhead in the previous proposal [11] and compares it with a hybrid specification of the services. Hua and Hein [5, 8] report some improvements in learning declarative class definitions of engineering objects (Accuracy of derived concepts category in Fig. 2.2d).
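To make the knowledge-representation and knowledge-retrieval roles discussed above more concrete, the sketch below uses Python's rdflib to express a single OPC UA variable node as RDF triples and query it with SPARQL. The namespace, class and property names, and node values are illustrative assumptions and do not reproduce any particular mapping from the retained studies.

```python
from rdflib import Graph, Literal, Namespace, RDF, RDFS
from rdflib.namespace import XSD

# Hypothetical namespace for an OPC UA-to-RDF mapping (not a standard vocabulary).
OPCUA = Namespace("http://example.org/opcua#")

g = Graph()
g.bind("opcua", OPCUA)

# Represent one OPC UA variable node of a (fictional) drilling machine as ABox triples.
node = OPCUA.Drill01_SpindleSpeed
g.add((node, RDF.type, OPCUA.UAVariable))
g.add((node, RDFS.label, Literal("SpindleSpeed")))
g.add((node, OPCUA.browseName, Literal("2:SpindleSpeed")))
g.add((node, OPCUA.dataType, OPCUA.Double))
g.add((node, OPCUA.hasValue, Literal(1450.0, datatype=XSD.double)))

# Knowledge retrieval: a SPARQL query over the mapped information model.
query = """
PREFIX opcua: <http://example.org/opcua#>
SELECT ?node ?value WHERE {
    ?node a opcua:UAVariable ;
          opcua:hasValue ?value .
}
"""
for row in g.query(query):
    print(row.node, "=", row.value)
```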

2.4 Discussions In this SMS, a total of 16 articles were retained and analyzed in detail. The results analysis performed and described in Sect. 2.3 shows that the academic and industrial community has taken an interest in the specification and implementation of OWL ontologies in the I4.0 context for the past 5 years. The retrieved contributions have transformed the AutomationML and OPC UA standards into RDF and OWL ontologies with the aim of expanding the usefulness and scope of the information specified with these standards. In the terminological layer (TBox), AutomationML and OPC UA concepts have been transformed into OWL class taxonomies, which specify the semantics of the standards. In the assertional layer (ABox), specific AutomationML and OPC UA elements are transformed into OWL individuals. The knowledge representation available on AutomationML and OPC UA through RDF and OWL ontologies enables the application of different tools available for the Semantic Web languages, such as querying and analytical data tasks


[14]. The implementation of the main constructs of AutomationML and OPC UA is stable. Recent contributions are integrating these concepts with other manufacturing resources. Due to the complexity of the ontology engineering task, some proposals are employing ILP to assist the knowledge engineer. In most of the proposals, a prototype has been implemented. Usually, the implemented prototypes use the Apache Jena Framework to handle the OWL data. These data are stored in a triplestore database, and they are queried and retrieved using a SPARQL web interface provided by an Apache Jena Fuseki server. It is important to note that none of the studies analyzed in this SMS have been evaluated in a real-world scenario. Some of them evaluate and/or exemplify the proposal’s applicability through a proof of concept or a simple case study. Therefore, there is a need for empirical evaluation about the applicability and scalability of the proposals in real-world scenarios, such as (i) level of redundancy data, (ii) communication overhead, (iii) size of the knowledge base to support reasoning.
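As the discussion notes, the reviewed prototypes typically expose their triplestore through an Apache Jena Fuseki SPARQL endpoint. The sketch below shows how such an endpoint could be queried from Python using SPARQLWrapper; the endpoint URL, dataset name, and query are assumptions for illustration, not taken from any of the retained studies.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Assumed local Fuseki dataset exposing the transformed OPC UA/AutomationML model.
ENDPOINT = "http://localhost:3030/i40model/sparql"

sparql = SPARQLWrapper(ENDPOINT)
sparql.setReturnFormat(JSON)
sparql.setQuery("""
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?subject ?label WHERE {
        ?subject rdfs:label ?label .
    } LIMIT 10
""")

# Only the rows matching the query are transferred to the client.
results = sparql.query().convert()
for binding in results["results"]["bindings"]:
    print(binding["subject"]["value"], "->", binding["label"]["value"])
```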

2.5 Conclusions This work described the main applications of ontology engineering to implement the AutomationML and OPC UA standards following an SMS methodology. The purpose and benefits of ontology implementation are synthesized jointly with the main challenges. The main challenges are associated with supporting the knowledge engineer in the ontology engineering tasks, the representation of the semantic digital twin, the duplication of data, and communication overhead. Furthermore, the SMS shows a need for further empirical evidence on the benefits of implementing ontologies in the I4.0 context.

References 1. Bakakeu, J., et al.: Automated reasoning and knowledge inference on OPC UA information models. In: Proceedings of 2019 IEEE International Conference on Industrial Cyber Physical Systems, ICPS 2019, pp. 53–60 (2019). https://doi.org/10.1109/ICPHYS.2019.8780114 2. Bunte, A., et al.: Integrating OWL ontologies for smart services into AutomationML and OPC UA. In: IEEE International Conference on Emerging Technologies and Factory Automation, ETFA, pp. 1383–1390 (2018). https://doi.org/10.1109/ETFA.2018.8502593 3. Givehchi, O., et al.: Interoperability for industrial cyber-physical systems: an approach for legacy systems. IEEE Trans. Ind. Inform. 13(6), 3370–3378 (2017). https://doi.org/10.1109/ TII.2017.2740434 4. Hofer, F.: Architecture, technologies and challenges for cyber-physical systems in industry 4.0: a systematic mapping study. In: International Symposium on Empirical Software Engineering and Measurement (2018). https://doi.org/10.1145/3239235.3239242 5. Hua, Y., Hein, B.: Concept learning in engineering based on refinement operator. In: CEUR Workshop Proceedings, 2206, 688117, pp. 76–83 (2018)


6. Hua, Y., Hein, B.: Interactive learning engineering concepts in AutomationML. In: 2019 24th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA), pp. 1248–1251. IEEE (2019). https://doi.org/10.1109/ETFA.2019.8869182 7. Hua, Y., Hein, B.: Interpreting OWL complex classes in AutomationML based on bidirectional translation. In: IEEE International Conference on Emerging Technologies and Factory Automation, ETFA, pp. 79–86 (2019). https://doi.org/10.1109/ETFA.2019.8869456 8. Hua, Y., Hein, B.B.: Concept learning in AutomationML with formal semantics and inductive logic programming. In: IEEE International Conference on Automation Science and Engineering, pp. 1542–1547 (2018). https://doi.org/10.1109/COASE.2018.8560541 9. Katti, B., et al.: A jumpstart framework for semantically enhanced OPC-UA. KI Kunstl. Intelligenz. 33(2), 131–140 (2019). https://doi.org/10.1007/s13218-019-00579-0 10. Katti, B., et al.: Bidirectional transformation of MES source code and ontologies. Procedia Manuf. 42, 197–204 (2020). https://doi.org/10.1016/j.promfg.2020.02.070 11. Katti, B., et al.: SemOPC-UA: introducing semantics to OPC-UA application specific methods. IFAC-PapersOnLine 51(11), 1230–1236 (2018). https://doi.org/10.1016/j.ifacol.2018.08.422 12. Kovalenko, O., et al.: Modeling AutomationML: semantic web technologies vs. model-driven engineering. In: 2015 IEEE 20th Conference on Emerging Technologies & Factory Automation (ETFA), pp. 1–4. IEEE (2015). https://doi.org/10.1109/ETFA.2015.7301643 13. Majumder, M., et al.: A comparison of OPC UA & semantic web languages for the purpose of industrial automation applications. In: IEEE International Conference on Emerging Technologies and Factory Automation, ETFA, pp. 1297–1300 (2019). https://doi.org/10.1109/ETFA. 2019.8869113 14. Patzer, F., et al.: The industrie 4.0 asset administration shell as information source for security analysis. In: IEEE International Conference on Emerging Technologies and Factory Automation, ETFA, pp. 420–427. https://doi.org/10.1109/ETFA.2019.8869059 15. Perzylo, A., et al.: OPC UA NodeSet ontologies as a pillar of representing semantic digital twins of manufacturing resources. In: IEEE International Conference on Emerging Technologies and Factory Automation, ETFA, pp. 1085–1092 (2019). https://doi.org/10.1109/ETFA.2019.886 8954 16. Petersen, K., et al.: Guidelines for conducting systematic mapping studies in software engineering: an update. Inf. Softw. Technol. 64, 1–18 (2015). https://doi.org/10.1016/j.infsof.2015. 03.007 17. Schiekofer, R., et al.: A formal mapping between OPC UA and the semantic web. In: IEEE International Conference on Industrial Informatics, pp. 33–40 (2019). https://doi.org/10.1109/ INDIN41052.2019.8972102 18. Schiekofer, R., Weyrich, M.: Querying OPC UA information models with SPARQL. IEEE International Conference on Emerging Technologies and Factory Automation, ETFA, pp. 208– 215 (2019). https://doi.org/10.1109/ETFA.2019.8868246 19. Schleipen, M., et al.: OPC UA & industrie 4.0—enabling technology with high diversity and variability. Procedia CIRP 57, 315–320 (2016). https://doi.org/10.1016/j.procir.2016.11.055 20. Steindl, G., et al.: Ontology-based OPC UA data access via custom property functions. In: 2019 24th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA), pp. 95–101. IEEE. https://doi.org/10.1109/ETFA.2019.8869436 21. Steindl, G., Kastner, W.: Transforming OPC UA information models into domain-specific ontologies. 
In: 2021 4th IEEE International Conference on Industrial Cyber-Physical Systems (ICPS), pp. 191–196 (2021). https://doi.org/10.1109/ICPS49255.2021.9468254

Part II

Regular Papers

Chapter 3

Airport Cab Scheduling Model Based on Queuing Theory Yuanfei Ma and Liyao Tang

Abstract With the development of society and the economy, airplanes have become an indispensable mode of transportation, and the passenger flow in airports has also shown an upward trend year by year. As a large-scale transportation hub, the airport has a non-negligible influence on the travel and return of passengers. As the number of passengers changes, the demand for taxis also changes accordingly. If the supply of taxis falls short of demand, it will seriously affect the travel of passengers; on the contrary, if the supply of taxis exceeds demand, it will cause traffic congestion. Therefore, reasonable allocation and dispatch of taxis can improve the efficiency of passengers' trips and can also bring more benefits to taxi drivers. Based on the situation in the topic, this article combines theory with practice, using a variety of mathematical models and algorithms, such as queuing theory, linear programming (return and risk of investment), and different vehicle path models, plus certain reasoning and calculations, to design a set of "win–win" strategies for taxi drivers and passengers.

3.1 Problem Restatement 3.1.1 Problem Background After most passengers get off the plane, they mainly take taxis to the city (or surrounding) destinations. In most domestic airports, the channels for drop-off (departure) and pick-up (arrival) are separate, single channels. Taxis that send passengers to the airport face two choices: (A)

Go to the arrival area and wait in line to carry passengers back to the city. Taxis must wait for passengers at the designated “car storage pool”. The queuing method is based on the “first-come-first-served” approach. The waiting time

Y. Ma (B) · L. Tang Capital University of Economics and Business, Beijing, China e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 K. Nakamatsu et al. (eds.), Advanced Intelligent Technologies for Industry, Smart Innovation, Systems and Technologies 285, https://doi.org/10.1007/978-981-16-9735-7_3


depends on the number of taxis and passengers (and other influencing factors) in the queue, and it has a time cost.

(B)

After the driver sends the passengers to the airport, instead of waiting in the "car storage pool", they return directly to the city to solicit passengers. The driver will pay no-load costs and may lose potential passenger revenue.

The number of flights parked in a certain period of time and the number of taxis already queued in the "car storage pool" are information that the driver can observe. Drivers usually make decisions based on experience, such as the influence of seasons and time periods on the number of flights and waiting passengers. If passengers want to take a taxi after getting off the plane, they need to queue in the designated "ride area" and board in the order of "first come, first served". Airport taxi management personnel are responsible for "quantitatively" letting taxis into the "riding area" and arranging a certain number of passengers to board. There are many deterministic and uncertain factors that affect the driver's decision to varying degrees.

3.1.2 Problem Requirements Based on the background of the above topic and the data collected by yourself, the topic requires a mathematical model to discuss the following issues:
1. Analyze and study the relevant factors that affect the decision-making of taxi drivers, comprehensively consider the changing law of the number of passengers at the airport and the driver's income, establish a taxi driver's selection decision model, and give the driver's selection strategy.
2. By collecting data, give the driver's choice at a certain airport, and analyze the rationality of the model and its dependence on related factors.
3. There are two parallel lanes in the "ride area" of an airport. How can the management department set up the "boarding points" and arrange taxis and passengers reasonably so that, while ensuring the safety of passengers and drivers, the overall efficiency of the ride is highest?
4. The passenger-carrying income of a taxi at the airport is related to the mileage traveled with passengers, and the distances traveled by different passengers differ. Drivers cannot choose passengers or refuse to travel; they can only pick up passengers in the order of queuing. Therefore, some drivers who carry passengers on short trips return and carry passengers multiple times. The management department intends to give a certain "priority" to such drivers who carry passengers on multiple return trips, so that the revenue of airport taxis is as balanced as possible; try to give a feasible priority arrangement.


3.2 Model Assumptions

1. Assume that the hourly passenger capacity of the airport reaches 1,312 people and that the ride waiting area handles 300 people/hour.
2. Assume that the nonlinear fluctuation of the airport passenger throughput is small, so a linear prediction algorithm can accurately analyze and predict the change in the number of airport passengers.
3. Suppose the time-quantity model of arriving passengers is as shown in Fig. 3.1 and the taxi-number model is as shown in Fig. 3.2.
4. Assume that relevant data such as the numbers of passengers and taxis are counted once every hour, so a day is divided into 24 time periods: 1:00-2:00, 2:00-3:00, ..., 24:00-1:00.
5. Assume that the probability that a passenger arriving at the airport in any time period travels by taxi is always 80%.
6. Assume that the interval between two adjacent taxi departures is the same.
7. Whether at the airport or in the city, assume that whenever a taxi carries passengers it is fully loaded (4 people seated, excluding the driver).
8. Ignore the impact of large luggage on taxi pick-up and drop-off time.
9. Suppose the city is a square area with side length a.

3.3 Symbol Description See Table 3.1.

3.4 Problem Analysis

3.4.1 Problem One Analysis

Question one requires us to comprehensively consider the factors that affect a taxi driver's choice, as well as the changing law of the number of airport passengers and the driver's income, and give the driver's selection strategy [1]. We can use the queuing theory model to roughly describe how the number of airport passengers changes over time, then use mathematical methods and software to express the driver's final earnings under choices A and B as two functions, and compare the two functions and their maxima. When the gain of plan A is greater than that of plan B, or its loss is smaller than that of plan B, plan A should be chosen; this yields the time periods in which plan A is appropriate and those in which plan B is appropriate.


Table 3.1 Symbol description

Symbol   Symbol meaning
i        The code of the time period (in "hours")
t        Driver waiting time
s_i      Number of taxis in time period i
v_i      Number of passengers in time period i
m_i      Number of planes landing in time period i
u_0      Passenger demand
t_0      Time between departures of two adjacent taxis
t_2      The time the driver drives from the airport to the city (equal to the city to the airport)
W_0      The consumption of each passenger in the taxi
W_1      No-load cost
W        The final profit of the taxi driver
M        Number of cars at the pick-up point
L        Lane side length
Q        Design hour traffic
l        Length of the lane occupied by each vehicle
m        Average drop-off time

3.4.2 Problem Two Analysis

Question two requires us to start from real life and verify the rationality of the hypothetical model above and its dependence on related factors by collecting relevant data on taxis at a domestic airport and its city, together with the decisions of taxi drivers. This is a question of model verification and optimization; the goal is to verify that the previously assumed model is reliable (not much different from the real values) and to optimize it further. We have data on taxi services and many other indicators at Shanghai Pudong International Airport. Using these data, SPSS software is used to visualize them; from the resulting images we analyze which option the driver should choose at which time to obtain more benefit, and then compare this with the drivers' actual choices.

3.4.3 Problem Three Analysis Question 3 requires us to think about how to set up airport taxi “pick-up points” to optimize the pick-up and drop-off time.


We can take another airport in our country as an example: through statistical analysis of the existing relevant data, combined with the pick-up point scale formula and the lane-side formula, and using the ideal values, we can calculate how many "pick-up points" should be set, as well as the distance between the "boarding point" and the sidewalk.

3.4.4 Problem Four Analysis Question 4 requires us to formulate a feasible itinerary plan for taxis based on the difference in mileage of different taxis, under the condition that short-distance taxis can carry passengers back and forth. This problem can be analyzed by using the “dynamic vehicle routing problem queuing model”, using ideal assumptions and scientific calculation methods to find out how to minimize the system time and maximize the driver’s revenue under different circumstances.

3.5 Model Building and Solving

3.5.1 Give the Taxi Driver's Decision Model in the Two Choices of A and B

• Build a quantitative model of taxis

Passengers who choose to take a taxi line up in the ride area to wait for a taxi, and taxis arriving at the airport wait in the airport storage pool. The trading area is the area where passengers board the taxis [2].

Fig. 3.1 Model of passengers arriving at the airport


The general status of taxi pick-up includes: the normal transaction period, the no-passenger period, the no-car period, and the no-car-and-no-passenger period [3]. The "no-passenger period" and "no-car period" have a negative impact on the effective passenger-carrying capacity of taxis. Through the analysis of the above connecting-transportation model of arriving passengers, especially the modeling of the taxi connection process, it can be concluded that most of the data can be obtained through real-time data collection, such as flight information and the corresponding number of passengers, the number of taxis in the waiting areas at different times, and the number of passengers in the corresponding taxi waiting area. However, the time distribution f(t) of passengers arriving at the airport exit, the passenger-flow sharing ratio of each connecting transportation mode, and the supplementary capacity of taxis cannot be obtained directly or estimated from experience [4]. The time distribution of passengers arriving at the airport exit has an important influence on the decision-making of taxi drivers.

• Explore the characteristics of the number of airport passengers over time.

We use the queuing theory model assumptions and MATLAB to plot the general distribution of the number of people arriving at the airport and the number of passengers successfully picked up in each time period (1:00-2:00, 2:00-3:00, ..., 24:00-1:00). The difference between the two ordinates reflects passengers' demand for taxis at that moment: the larger the difference, the more people arrive at the airport relative to those successfully picked up, meaning taxis are "in short supply"; the smaller the difference, the closer taxis are to a "supply exceeds demand" situation.

Fig. 3.2 Model of arrangement of passengers taking taxis
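As a concrete illustration of this demand indicator (not the authors' MATLAB plot), the short Python sketch below uses entirely hypothetical hourly counts and reports the hour-by-hour gap between arrivals and successful pick-ups; a large positive gap marks the "short supply" periods.

```python
# Hypothetical hourly data standing in for the MATLAB plot described above:
# arrivals[i] = people reaching the airport in hour i, picked_up[i] = people
# who successfully boarded a taxi. The gap is the "short supply" indicator.
arrivals  = [120, 80, 60, 50, 70, 150, 400, 700, 900, 1100, 1200, 1250,
             1200, 1150, 1100, 1000, 950, 900, 800, 700, 600, 450, 300, 180]
picked_up = [110, 78, 60, 50, 68, 140, 360, 600, 760, 900, 950, 980,
             960, 940, 920, 870, 840, 820, 750, 670, 580, 440, 295, 175]

for hour, (a, p) in enumerate(zip(arrivals, picked_up), start=1):
    gap = a - p
    status = "short supply" if gap > 100 else "supply roughly meets demand"
    print(f"{hour:02d}:00-{hour % 24 + 1:02d}:00  gap = {gap:4d}  ({status})")
```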


Fig. 3.3 The relation between the number of passengers being sent and arriving at the airport

• Use a functional model to express the driver's benefit under choices A and B (Fig. 3.3).

Next, we use a linear programming (return and risk of investment) model and mathematical calculation to analyze the benefit at each moment under option A and under option B, so as to provide taxi drivers with the higher-yield option [5].

Option A: go to the arrival area and wait in line to carry passengers back to the city. It is assumed here that there are 24 time periods (i = 1, 2, 3, ..., 24), representing 1:00-2:00, 2:00-3:00, ..., 24:00-1:00, respectively. The taxi driver's income depends on the waiting time t, and the waiting time is related to the number of taxis s_i and the number of passengers v_i in that time period. Assuming that the number of planes landing in a given period is m_i and that each plane carries 200 passengers, then v_i = 200 m_i. In addition, the constant u_0 is introduced to represent passenger demand, because not every passenger who leaves the airport needs to take a taxi. Here we assume u_0 = 0.8, which means that 80% of passengers need a taxi after arriving at the airport [6]. In time period i, the number of people who need a taxi is about v_i · u_0. If each taxi carries 4 people, then v_i · u_0 / 4 taxis are needed in this time period.


The departure interval between two adjacent taxis is t_0, so the waiting time t for the last car is (v_i · u_0 / 4 - 1) · t_0 [7]. If, when a taxi arrives at the airport, the number of taxis queuing in front of it exceeds v_i · u_0 / 4, then after the taxis in front pick up the passengers of period i it must wait for the next batch of passengers (the v_{i+1} · u_0 passengers of period i+1); if it is still not its turn [8], it must wait for the passengers arriving in period i+2, and so on. The waiting time can therefore be expressed as

t = (v_i · u_0 / 4 - 1) · t_0 + (v_{i+1} · u_0 / 4 - 1) · t_0 + (v_{i+2} · u_0 / 4 - 1) · t_0 + ...

That is, t = t_0 · (v_i · u_0 / 4 + v_{i+1} · u_0 / 4 + v_{i+2} · u_0 / 4 - 3), i.e. t = t_0 · [u_0/4 · (v_i + v_{i+1} + v_{i+2}) - 3].

Because v_i = 200 m_i,

t = t_0 · [u_0/4 · (200 m_i + 200 m_{i+1} + 200 m_{i+2}) - i]   (1 ≤ i ≤ 24)

(i means that i batches of passengers are picked up; the time interval is in hours.)

The lost part is what could have been earned during this waiting time t if the driver had instead returned empty to the city to solicit customers [9]. Assuming that the time it takes a taxi to drive from the airport to the urban area (and back) is t_2 hours, the driver can make t / (2 t_2) round trips in time t, and each passenger pays W_0 yuan for a taxi ride. If fully loaded, the earnings would be t / (2 t_2) × 4 W_0 yuan. Hence the loss of option A is

(t_0 / t_2) · 2 W_0 · [u_0/4 · (200 m_i + 200 m_{i+1} + 200 m_{i+2}) - i]   (1 ≤ i ≤ 24).

The gain of option A is the fare for one load of passengers from the airport, i.e. 4 W_0 yuan. Therefore, if option A is chosen, the final profit is

W = 4 W_0 - (t_0 / t_2) · 2 W_0 · [u_0/4 · (200 m_i + 200 m_{i+1} + 200 m_{i+2}) - i]   (1 ≤ i ≤ 24) yuan.

Option B: Directly return to the city to solicit customers. The main loss factors are as follows: No-load cost W1 ; Potential passenger income may be lost. If the return to the city to solicit passengers once can offset the loss of one less passenger load from the airport, the final loss will only be the no-load cost [10]. To sum up, the main loss of plan A is time cost, while plan B is no-load charge. Now, we need to compare at each moment (i takes all integers from 1 to 24) whether Plan A is profitable or the amount of loss is less than Plan B. If so, choose Plan A, and vice versa.


Now first use MATLAB to find the maximum value of option A subject to the following constraints [11]:

W = 4 W_0 - (t_0 / t_2) · 2 W_0 · [u_0/4 · (200 m_i + 200 m_{i+1} + 200 m_{i+2}) - i]
W_0 ≥ 15
t_0 ≥ 0
t_2 ≥ 0
m_i > 0, m_{i+1} > 0, m_{i+2} > 0
1 ≤ i ≤ 24

Then compare when the loss of A is less than that of B. Here the "difference method" is used:

Δ = (t_0 / t_2) · 2 W_0 · [u_0/4 · (200 m_i + 200 m_{i+1} + 200 m_{i+2}) - i] - W_1

Combining this with the function images drawn with SPSS and MATLAB, the conditional inequality can be solved.
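As a rough illustration of this comparison (not the authors' MATLAB/SPSS computation), the sketch below evaluates W for option A and the difference Δ for each hour under assumed parameter values and hypothetical hourly flight counts, and then applies the decision rule as interpreted here (choose A when it is profitable or its loss is smaller than B's).

```python
# Minimal sketch of the hourly A-vs-B comparison, with illustrative
# (hypothetical) parameter values; not the authors' MATLAB program.
t0, t2 = 0.005, 0.5     # departure interval and airport-city travel time (hours), assumed
W0, W1 = 30.0, 20.0     # fare per passenger and no-load cost (yuan), assumed
u0 = 0.8                # share of arriving passengers who take a taxi
m = [2, 1, 1, 1, 2, 3, 5, 8, 9, 10, 11, 12, 12, 11, 10, 9, 9, 8, 7, 6, 5, 4, 3, 2]  # flights/hour, assumed

def profit_A(i):
    """Final profit W of option A in time period i (1-based), per the model above."""
    batches = sum(200 * m[(i - 1 + k) % 24] for k in range(3))
    return 4 * W0 - (t0 / t2) * 2 * W0 * (u0 / 4 * batches - i)

def delta(i):
    """Difference between the time-cost loss of A and the no-load cost of B."""
    batches = sum(200 * m[(i - 1 + k) % 24] for k in range(3))
    return (t0 / t2) * 2 * W0 * (u0 / 4 * batches - i) - W1

for i in range(1, 25):
    choice = "A" if (profit_A(i) > 0 or delta(i) < 0) else "B"
    print(f"period {i:2d}: W_A = {profit_A(i):8.2f}, delta = {delta(i):8.2f} -> choose {choice}")
```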

3.5.2 Analyze the Rationality of the Model Through the Relevant Data of a Domestic Airport and Its City Taxi

• Actual data collection

We collected relevant data for Shanghai Pudong International Airport (the number of taxis in the storage pool per unit time, the number of people arriving at the airport, the number of people leaving the airport [12], and the number of people who need to travel by taxi), and compared, hour by hour, the passenger flow in the airport and the change in the number of taxis. By analyzing whether taxis at each moment are in a "demand exceeds supply" or "supply exceeds demand" state, we can formulate a more reasonable and more profitable driving plan for Pudong Airport taxi drivers, and combine it with the actual choices of Pudong Airport taxi drivers to verify the rationality of the previous model and its dependence on related factors.

• Process and analyze the above data (Fig. 3.4).

Using SPSS to draw the data as a scatter plot, we obtain the following: when passengers' demand for taxis is large and the number of taxis in the storage pool is small, choosing option A obviously yields more benefit than option B.


Fig. 3.4 The character of the change of passenger number with the time

According to the drawn image, it can be seen intuitively that the number of taxis queued in the storage pool changes over time, but not particularly dramatically [13]. Therefore, the dominant factor here is passenger demand for taxis, i.e., the number of people who need a taxi. The scatter diagram shows that the difference between the ordinates of the green dots (the number of people who need taxis) and the purple dots (the number of taxis queued in the pool) is larger from 10:00 to 17:00 than at other times, which means that during this period passengers have greater demand for taxis. Therefore, during this period it is wiser and more reasonable for taxi drivers to choose plan A. According to the actual choices of taxi drivers at Pudong Airport, during the peak period of taxi demand (10:00-15:00) the number of taxis in the storage pool rises, indicating that more taxi drivers choose plan A in this period. This agrees with the previous assumption and shows that the above model assumptions are reasonable.

3.5.3 How to Set the “Boarding Point” to Maximize the Efficiency of the Ride • First, analyze by giving practical examples Here, we take another international airport in Shanghai—Shanghai Hongqiao International Airport as an example, and give a method for forecasting the scale of the airport supporting facilities for taxi pick-up and drop-off points, and provide a reference for airport traffic planning, design, and operation management.


By consulting relevant data and information, surveys have been conducted at drop-off points (drop-off time, number of passengers, baggage carried) and pick-up points (boarding time, number of passengers, waiting time in line, and departure efficiency). The survey plan is "the number of passengers at the taxi drop-off point and the drop-off time". Combined with other relevant data, the recommended values for the average number of passengers carried by taxis at the departure level are: Hongqiao International Airport T1, 1.25 people/vehicle; Hongqiao International Airport T2, 1.50 people/vehicle.

• Taxi passenger drop-off time

The drop-off time refers to the period from when the vehicle stops and prepares to drop off passengers to when it starts and leaves after the drop-off. It also includes the time the passenger takes to pay [14].

(1) Value of taxi drop-off time

The survey plan is "the number of passengers at the taxi drop-off point and the drop-off time". Combined with other relevant data, the ideal values of the taxi drop-off time at the departure level are: Hongqiao Airport T2, 39.8 s; Hongqiao Railway Station T1, 34.7 s.

(2) Analysis of factors affecting taxi drop-off time

The number of passengers and whether large luggage is carried are the two most important factors affecting the drop-off time. Following the idea of controlled variables, and to avoid other variables affecting the dependent variable, the two factors should be analyzed separately [15]. Here we ignore the impact of carrying large luggage and only analyze the impact of the number of passengers on pick-up and drop-off time, so as to better solve the problem of the "boarding point" and "drop-off point". The analysis shows that the number of passengers has a certain impact on the boarding time: regardless of large luggage, the more passengers carried, the longer the boarding time.

• Determination and evaluation of the scale of taxi pick-up and drop-off points

The scale of the pick-up point is measured by its traffic capacity, which is measured by the number of vehicles per hour. The calculation method of traffic capacity differs according to the nature of the vehicle and the place where the traffic demand occurs. In addition, the calculation requires a series of parameters to be measured or assumed.


3.5.4 A Theoretical Analysis of the Scale of "Boarding Points" and Lane Sides

The formulas for the pick-up point scale are

M = Q × t / 3600

L = M × l

where M is the number of vehicles at the pick-up point (units); L is the length of the lane (m); Q is the design hourly traffic volume, taken as the 30th highest hourly volume of the year (30 HV), in veh/h; l is the lane length occupied by each vehicle (m); and t is the average drop-off time (s).

The scale of the taxi pick-up point in Terminal T2 of Hongqiao Airport:

M_1 = Q_1 · t_1 / 3600 = 1174 × 40 / 3600 ≈ 14

L_1 = M_1 × l_1 = 14 × 8 = 112 m

The scale of the taxi pick-up point in Terminal T1 of the airport:

M_2 = Q_2 · t_2 / 3600 = 3442 × 35 / 3600 ≈ 34

L_2 = M_2 × l_2 = 34 × 8 = 272 m

Since the taxi pick-up and drop-off times at Hongqiao Airport are reasonable and within the scientifically recommended (ideal) values, they can be used as a reference sample. There are 14 pick-up points in the T2 terminal of Hongqiao Airport, and the pick-up point is about 112 m from the sidewalk; there are 34 pick-up points in the T1 terminal, and the pick-up point is about 272 m from the sidewalk. Other cities can follow the example of Shanghai Hongqiao International Airport: according to the scale of the airport, measure the current airport taxi capacity, airport passenger flow, average pick-up and drop-off time, and other parameters, apply the formula M = Q · t / 3600 with each parameter at the ideal value above, and obtain reasonable values for the number of cars at the pick-up point (M) and the lane length (L).
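A minimal sketch of this scale calculation, using the Hongqiao figures quoted above as illustrative inputs (the worked examples above appear to round the vehicle count up, so a ceiling is used here):

```python
# Pick-up point scale: M = Q*t/3600 vehicles, L = M*l metres of lane.
import math

def pickup_point_scale(Q, t, l):
    """Q: design hour traffic (veh/h); t: average drop-off time (s);
    l: lane length occupied per vehicle (m). Returns (M, L)."""
    M = math.ceil(Q * t / 3600)   # rounded up, as in the worked examples above
    return M, M * l

print(pickup_point_scale(1174, 40, 8))  # T2 terminal: (14, 112)
print(pickup_point_scale(3442, 35, 8))  # T1 terminal: (34, 272)
```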


3.5.5 According to the Difference in the Mileage of Different Taxis, Develop a "Priority" Arrangement Plan to Maximize Time and Revenue

• Queuing model for the dynamic vehicle routing problem

According to mileage, taxis with shorter journeys can carry passengers back and forth after dropping off, while taxis with longer journeys need not return to the airport to pick up passengers again, thereby optimizing revenue and time. Here we use the dynamic vehicle routing problem queuing model for the analysis. Assuming that passenger-destination demand arrives as a Poisson flow and that the pick-up time follows a general distribution, there are several possibilities regarding the actual demand and the taxi driver's plan. These are examined one by one below, using the queuing theory model, geometric probability, and related knowledge to derive the system time in each situation.

Assume that all taxis travel within the city, which is modelled as a square area A with side length a. Let the airport z_1(x_1, y_1) and the other point z_2(x_2, y_2) be random points (anywhere in the square), independent of each other. Then

E[|z_1 z_2|^2] = ∬_A [(x_2 - x_1)^2 + (y_2 - y_1)^2] (1/a^2) dx_2 dy_2
             = ∫_{-a/2}^{a/2} dx_2 ∫_{-a/2}^{a/2} [(x_2 - x_1)^2 + (y_2 - y_1)^2] (1/a^2) dy_2
             = a^2/6 + x_1^2 + y_1^2 = a^2/3 (after averaging over z_1),

E(|z_1 z_2|) ≈ 0.52a,

Var(|z_1 z_2|) = E[|z_1 z_2|^2] - [E(|z_1 z_2|)]^2 ≈ 0.06a^2.
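The constants 0.52a and 0.06a^2 can be checked numerically; the small Monte Carlo sketch below (an illustration, not part of the original derivation) samples pairs of uniformly distributed points in a unit square.

```python
# Quick Monte Carlo check: for two independent points uniformly distributed in
# an a x a square, E|z1 z2| ~ 0.52a and Var|z1 z2| ~ 0.06a^2 (here a = 1).
import random, math

def sample_distance():
    x1, y1 = random.random(), random.random()
    x2, y2 = random.random(), random.random()
    return math.hypot(x2 - x1, y2 - y1)

n = 200_000
d = [sample_distance() for _ in range(n)]
mean = sum(d) / n
var = sum((x - mean) ** 2 for x in d) / n
print(mean, var)  # roughly 0.521 and 0.061
```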


• Queuing theory model

Assume that all demand is dynamic and that the locations of demand points in area A are independently and uniformly distributed; that is, after the taxi leaves the airport it may go anywhere in the city where demand can be generated. The time interval u between adjacent demands follows a negative exponential distribution, with E(u) = 1/λ and Var(u) = 1/λ^2. Suppose that the time s′ a taxi stays at the destination follows a general distribution with mean E(s′) and variance Var(s′). Define T_i as the interval between the arrival time of demand i and its completion time, and W_i as the interval between the arrival time of demand i and the time it begins to be served, so W_i = T_i - s_i. The steady-state system time is T = lim_{i→∞} E[T_i], and the steady-state waiting time is W = T - E(s).

According to the above definitions, the dynamic vehicle routing problem can be regarded as an M/G/1 queuing model. Note that for a given demand i, the service time s_i includes both the stay time s′_i at point i and the driving time d_i/V needed to reach point i after the demand information is received, where d_i is the distance travelled and V is the vehicle speed. For a steady-state system,

lim_{n→∞} (1/n) Σ_{i=1}^{n} s_i = lim_{n→∞} (1/n) Σ_{i=1}^{n} s′_i + lim_{n→∞} (1/n) Σ_{i=1}^{n} d_i/V,

that is, E(s) = E(s′) + E(d)/V. Assuming that s′_i and d_i are uncorrelated and that the distributions of s′ and d are independent, Var(s) = Var(s′) + Var(d/V). Combining this with Little's formula, we get

T = E(s′) + E(d)/V + λ[(E(s′) + E(d)/V)^2 + Var(s′) + Var(d)/V^2] / {2[1 - λ(E(s′) + E(d)/V)]}
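A minimal sketch of this system-time formula follows; the parameter values are purely illustrative (they are not taken from the chapter).

```python
# M/G/1 system time as used above:
# T = E[S] + lambda*(Var(S) + E[S]^2) / (2*(1 - lambda*E[S])),
# with E[S] = E(s') + E(d)/V and Var(S) = Var(s') + Var(d)/V^2.
def system_time(lam, mean_stay, var_stay, mean_dist, var_dist, speed):
    """lam: demand rate; mean_stay/var_stay: stay time at destination;
    mean_dist/var_dist: travel distance; speed: vehicle speed V."""
    mean_service = mean_stay + mean_dist / speed
    var_service = var_stay + var_dist / speed ** 2
    rho = lam * mean_service
    if rho >= 1:
        raise ValueError("System is unstable: lambda * E[S] must be < 1")
    wait = lam * (var_service + mean_service ** 2) / (2 * (1 - rho))
    return mean_service + wait

# Illustrative numbers only (hours and km), not values from the chapter:
a = 20.0  # assumed city side length (km)
print(system_time(lam=1.5, mean_stay=0.1, var_stay=0.01,
                  mean_dist=0.52 * a, var_dist=0.06 * a ** 2, speed=40.0))
```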

• The actual situation

This can be divided into several cases for discussion:

1. Demand continues to emerge, and a short-distance taxi turns back immediately after delivering its passengers to their destination.
2. Even if there is no new demand, a short-distance taxi must return to the airport after the drop-off is completed, so as to wait for new demand and reduce the waiting time of subsequent passengers.
3. When a long-distance taxi has served a demand and there is no new demand (no destination that airport passengers want to go to), it stays in place until the next demand appears. Since the location of each demand point is independently and uniformly distributed over the area, by the formulas above

   E(d) = E(|z_1 z_2|) ≈ 0.52a,  Var(d) = Var(|z_1 z_2|) ≈ 0.06a^2.

   Substituting into the variant of Little's formula, the system time is

   T_2 = E(s′) + 0.52a/V + λ[(E(s′) + 0.52a/V)^2 + Var(s′) + 0.06a^2/V^2] / {2[1 - λ(E(s′) + 0.52a/V)]}


4. For long-distance taxis, consider adjusting the taxi's waiting position when there is no new demand, so as to reduce passenger waiting time. From the definition, |z_1 z_2| = d; set d′ = |z_1′ z_2|. When the taxi finishes delivering passengers to point z_1, if demand has already queued up at z_2, the taxi immediately heads back toward point z_0 (the airport).

   T_3 = E(s′) + E(d′)/V + λ[(E(s′) + E(d′)/V)^2 + Var(s′) + Var(d′)/V^2] / {2[1 - λ(E(s′) + E(d′)/V)]}

   Since E(|z_1 z_0|) ≈ 0.38a and the taxi can cover a distance of V(1/λ - T) toward the airport during the idle interval, the taxi's adjusted position z_1′ is obtained by scaling z_1 toward z_0:

   x_{z_1′} = C · x_{z_1},  y_{z_1′} = C · y_{z_1},  C = [0.38a - V(1/λ - T)] / 0.38a.

   Using the derivation method at the beginning of this question, we can also deduce

   E[(d′)^2] = (1 + C^2) · a^2 / 6,

   E(d′) = (1/a^2)^2 ∬_A ∬_A √[(x_{z_2} - C x_{z_1})^2 + (y_{z_2} - C y_{z_1})^2] dx_{z_2} dy_{z_2} dx_{z_1} dy_{z_1},

   Var(d′) = E[(d′)^2] - [E(d′)]^2.

With Minitab, T_2 ≈ 1.76 and T_3 ≈ 1.65 are obtained. Therefore, short-distance taxis should turn back immediately after delivering passengers to their destinations. Long-distance taxis need not return to the airport immediately when there is no new demand (to avoid overcrowding the airport storage pool), but it is better to adjust the taxi's waiting position to reduce the waiting time of subsequent passengers.
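The 0.38a figure used above (the mean distance from a uniformly random point to the centre of the square, where z_0 is taken to lie) can also be checked with a quick Monte Carlo sketch; this is an illustration only, not part of the original derivation.

```python
# Quick Monte Carlo check of 0.38a: the mean distance from a uniformly random
# point in an a x a square to the square's centre is about 0.3826a.
import random, math

def mean_distance_to_centre(a=1.0, n=200_000):
    total = 0.0
    for _ in range(n):
        x = random.uniform(-a / 2, a / 2)
        y = random.uniform(-a / 2, a / 2)
        total += math.hypot(x, y)
    return total / n

print(mean_distance_to_centre())  # ~0.38 for a = 1
```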


References

1. Jiang, Q.Y., Xie, J.X., Ye, J.: Mathematical Model, 5th edn. Higher Education Press, Beijing (2018)
2. Si, S.K., Sun, Z.L.: Mathematical Modeling Algorithms and Applications, 2nd edn. National Defense Industry Press (2015)
3. Zhang, S., Huang, Y.: Overall design of Shanghai Hongqiao comprehensive transportation hub. Shanghai Construction Technology (5), 1–6 (2007)
4. Shanghai Municipal Engineering Design and Research Institute (Group) Co., Ltd.: Hongqiao Airport landside transportation system operation management research. Shanghai Municipal Engineering Design and Research Institute (Group) Co., Ltd., Shanghai (2014)
5. Huang, Y., Wang, G.Y.: Discussion on the optimization of the organization and management of the taxi pick-up system in Hongqiao Airport T2. Urban Roads Bridges Flood Control (12), 7–9 (2014)
6. Zong, C.L.: Study on the quantitative method for the scale of comprehensive passenger transport hub facilities. Chang'an University, Xi'an (2011)
7. Guo, Y.H., Li, J.: Optimum Dispatch of Vehicles. Chengdu University of Science and Technology Press, Chengdu (1994)
8. Xie, B.L., Guo, Y.H., Guo, Q.: Dynamic vehicle routing problem: current status and prospects. Application of System Engineering Theory and Method (2002)
9. Qian, S.D., Guo, Y.H.: Operational Research, Revised edn. Tsinghua University Press, Beijing (1990)
10. Zhong, X.P.: Analysis of real-time strategy and technical support for dynamic vehicle routing problem. School of Economics and Management, Southwest Jiaotong University, Chengdu (2003)
11. Hu, F.: Research on short-term traffic flow forecast based on Markov model. Nanjing University of Posts and Telecommunications, Nanjing (2013)
12. Gao, Y.: Research on short-term traffic flow forecast models and forecasting methods. East China Normal University, Shanghai (2011)
13. Zhang, L.: Design and implementation of short-term traffic flow forecast algorithm based on cloud platform. Dalian University of Technology (2013)
14. Yao, Y.B.: Diversion prediction of large-capacity airport rail transit to landside traffic. Civil Aviation University of China (2006)
15. Zhang, Q.F.: Research and realization of coordination guarantee technology for connecting transportation of capital airport. University of Electronic Science and Technology of China (2015)

Chapter 4

Machine Reading Comprehension Model Based on Multi-head Attention Mechanism Yong Xue

Abstract Natural language processing is a core technology of artificial intelligence. Machine reading comprehension, as a hotspot in natural language processing, has made great breakthroughs with the rapid development of deep learning. Starting from the DuReader dataset, this paper proposes a machine reading comprehension model based on a multi-head attention mechanism, addressing the problems of the newly released DuReader 2.0 baseline model BiDAF: time-consuming training on large data volumes and complex tasks, and weak performance on the two evaluation indexes ROUGE-L and BLEU-4, as shown by continuous experiments on the baseline model. The model uses a multi-head attention mechanism to replace the BiLSTM used in the coding layer and modeling layer of the DuReader baseline model for encoding and modeling. In addition, it uses pre-trained word vectors instead of randomly generated word vectors for word embedding, which effectively remedies the shortcomings of the baseline model. As a result, the proposed model achieves a ROUGE-L score of 39.76 and a BLEU-4 score of 35.66 on the Search test set of DuReader 2.0, about 2.8% and 2.1% higher than the DuReader baseline model BiDAF, and ROUGE-L and BLEU-4 scores of 48.61 and 43.63 on the Zhidao test set, about 2.4% and 2.0% higher than the baseline model BiDAF. Meanwhile, the training time per epoch of this model is only about 2/5 that of the baseline model BiDAF. The results show that the proposed model performs well both in time cost and in the accuracy of its predictions.

4.1 Introduction With the rapid development of Internet technology and the advent of the era of artificial intelligence, many scholars have started to tackle various technical problems


in the field of natural language processing. Machine reading comprehension, as a key technology in the field of natural language processing, is a hot research topic at present. The main task of machine reading comprehension is to make the machine learn to read and understand articles. Based on the given question, the accurate answer can be found from the relevant articles for the machine reading comprehension system. Machine reading comprehension technology is faced with many challenges, because it involves complex technologies such as language comprehension, knowledge reasoning, and abstract generation. The traditional machine reading comprehension technology often uses the pattern matching method based on artificial rules, or draws lessons from the method of information extraction to construct the relational database to find the answer. These methods cannot effectively deal with the diversity of expression, and the efficiency and accuracy are not high. In the meanwhile, fixed windows are often used to match articles and questions. Therefore, it is not possible to solve the long-term dependency problem between multiple sentences. The machine reading comprehension model based on deep learning has effectively solved the problems of traditional methods, especially in the English field with the development of deep learning technology. In addition, many classical models have achieved positive results in English machine reading comprehension tasks with the introduction of large-scale datasets. For example, MLSTM [2], BiDAF [3], RNet [4], QANet [5], BERT [6], and other models have achieved great success on the dataset after the SQuAD [1] dataset was proposed. However, in the Chinese field, Baidu did not release the first large-scale Chinese machine reading comprehension dataset DuReader [7] based on practical application scenarios until 2018, which is the largest Chinese machine reading comprehension dataset at present. In the meanwhile, Baidu built two baseline models suitable for DuReader datasets based on BiDAF and MLSTM, and held the first Chinese machine reading comprehension contest. Chinese machine reading comprehension has ushered in widespread attention. However, the DuReader baseline model has the following shortcomings: (1) the embedding layer uses the method of randomly generating word vectors to obtain the word embedding of a fixed length of each word, which cannot effectively represent the relationship between words. As a consequence, it cannot capture the global context information of the text very well. (2) the coding layer and modeling layer use Bi-directional Long Short-Term Memory (BiLSTM). To some extent, although this technology can effectively encode and model articles and questions, it not only takes time to train on tasks with large amounts of data, but also limits the model to obtain long-distance context information. This paper proposes a new machine reading comprehension model in view of the shortcomings of the DuReader baseline model: the machine reading comprehension model based on multi-head attention mechanism, which achieves better results than the DuReader baseline model on DuReader2.0 datasets, and verifies the effectiveness of the model proposed in this paper. The main contribution of this paper includes two aspects:

1. A new machine reading comprehension model is proposed. It replaces the BiLSTM technology used in the coding layer and modeling layer of the DuReader baseline model with the multi-head attention mechanism [8]. In addition, it uses pre-trained word vectors instead of randomly generated word vectors for word embedding, which effectively shortens the training time of the model and improves the accuracy with which it answers questions.
2. This paper experimentally compares the proposed model with the DuReader baseline model in terms of the two evaluation indicators ROUGE-L and BLEU-4 and the time it takes to train an epoch. The results show that the model proposed in this paper is the better Chinese machine reading comprehension model.

4.2 Related Work 4.2.1 DuReader Dataset Researchers have begun to use deep neural network [9] to study machine reading comprehension since 2015, which requires a large-scale training dataset. However, early datasets such as MCTest [10] are too small to be used for training but can be used for testing. As a consequence, Hermann et al. [9] proposed an efficient and feasible method to generate large-scale data for training in 2015. As a result, two large-scale datasets, CNN and Daily Mail [9], are constructed by using this method. The answers to these datasets are the entity words in the original text. Stanford University released a large-scale English dataset SQuAD [1] in 2016, whose answer is a continuous segment of the original text. It requires researchers to design a model according to a given article and question, and find out the answer sequence matching the question from the article. In addition, the dataset also involves answer generation technology, so it is more challenging. It has greatly promoted the development of machine reading comprehension technology after the release of SQuAD. The release of the SQuAD dataset promotes the rapid development of machine reading comprehension technology. However, this dataset has some shortcomings, such as artificial synthetic data, simple task, limited application field, and so on. As a consequence, it is very important to establish a real-world machine reading comprehension dataset. In fact, the MS MARCO [11] dataset released by Microsoft Research Asia comes from real application scenarios, which requires the model to have the ability of language generation. Baidu released a large-scale Chinese machine reading comprehension dataset DuReader [7] similar to MS MARCO due to its being inspired by the work done by Microsoft in 2018. The dataset is divided into two categories: Search and Zhidao,


which come from the two real application scenarios Baidu Search and Baidu Zhidao, respectively. The content consists of real questions queried by Baidu users; each question corresponds to 5 candidate documents and manually produced high-quality answers. To provide a richer variety of questions, the questions in the DuReader dataset are divided into three answer types: Entity, Description, and YesNo. For entity questions, the answer generally contains one or more specific words; the answer to a description question contains a continuous text description; and the answer to a YesNo question is "yes" or "no". The initial version of the dataset contains 200 k questions, 1000 k original documents, and 420 k answers. In 2019, Baidu released DuReader 2.0 on the basis of the initial version, containing 271,574 training samples, 10,000 verification samples, and 120,000 test samples. It is therefore the largest, most difficult, and most valuable Chinese machine reading comprehension dataset. Liu et al. [12] pointed out that, when building a model for a dataset from real application scenarios such as DuReader, it should be taken into account that multiple answers are equally valid for the same question and that each answer may appear multiple times in the context. Based on this, they proposed a multi-answer, multi-task framework that trains on multiple reference answers with different loss functions and combines a simple heuristic passage-extraction strategy for very long documents; minimum risk training is used to handle the co-occurrence of a single answer. This method can effectively find the best matching answer from the multiple articles corresponding to a question and achieves good results on the DuReader dataset. The model proposed in this paper first uses this method to preprocess the DuReader dataset.

4.2.2 DuReader Baseline Model Baidu has constructed two open-source baseline models [7] based on DuReader datasets. The matching layers of these two baseline models come from the MLSTM [2] model and BiDAF [3] model proposed for the task form of SQuAD dataset, respectively. The main idea of the MLSTM model is to traverse each paragraph in turn. Besides, it dynamically matches the attention weight with each mark of the paragraph, and finally uses a pointer network layer [13] to find the answer span in the paragraph. As a result, it achieves the purpose of locating the answer segment in the paragraph. On the other hand, BiDAF achieves the purpose of highlighting the problem and the important part of the article by using the attention of the article to the problem and the attention of the problem to the article. In other words, two-way attention is also the biggest innovation of the model. In addition, the bidirectional attention mechanism is used to integrate all the useful information to get the vector representation of each position in the article. Except that the matching layer of the DuReader baseline model uses the matching mechanism of MLSTM and BiDAF, respectively, the other layers of the model are the same. For example, they include: (1) embedding layer (using the method of randomly generating word vectors); (2) coding layer (using BiLSTM technology);


(3) matching layer; (4) modeling layer (using BiLSTM technology); and (5) output layer (using a pointer network). The work of this paper is based only on the BiDAF model, since the performance of BiDAF is better than that of MLSTM on the SQuAD dataset and the time cost of the MLSTM model on the DuReader dataset is too high.

4.2.3 Multi-headed Attention Mechanism The coding layer of the current machine reading comprehension model mostly uses recurrent neural network (RNN) and various improved versions. Although this kind of technology can encode articles and questions to a certain extent, the information obtained by the coding is still limited. In addition, the global context information cannot be well encoded, and its complex structure and a large number of parameters limit the running efficiency of the whole model. In 2017, Vaswani et al. [8] proposed the multi-head attention mechanism. This method uses a new coding method to explicitly model the article with multiple granularities instead of BiLSTM. In fact, this method was first applied in the field of machine translation, and then applied to the field of machine reading comprehension by Wang et al. [14], achieving unprecedented results. For example, it significantly improved the overall performance of the model in the English dataset. However, the work of Wang et al. only uses the self-attention mechanism in the transformer to further enhance the matching of articles and questions, and does not completely abandon the BiLSTM method. In view of the problems of BiLSTM technology used in the coding layer and modeling layer of DuReader baseline model: (1) BiLSTM has a complex structure, large amount of computation, and time-consuming training on tasks with a large amount of data. (2) BiLSTM restricts the model to obtain long-distance context information. This paper creatively uses the multi-head attention mechanism in Transformer to replace the BiLSTM technology used in the coding layer and modeling layer of the DuReader baseline model. As a result, experiments show that our method can not only effectively improve the accuracy of the model to answer questions, but also effectively save the training time of the model.

4.3 Multi-head Attention Mechanism This section mainly introduces the principle of multi-head attention and its core algorithm, the scaled dot-product attention layer.


Fig. 4.1 Overall structure of multi-head attention

4.3.1 Multi-head Attention Overall Structure

As shown in Fig. 4.1, the overall structure of multi-head attention is processed as follows: the inputs are Q (query), K (key), and V (value), where Q is the query object and K-V is the key-value pair. Q, K, and V first pass through linear transformation layers and are then fed into the scaled dot-product attention layer, which is applied separately h times (the number of heads); the results of the h operations are concatenated and passed through a further linear transformation, and the final value is the result of multi-head attention. The purpose is to learn the dependencies between words in the sentence and to capture the internal structure of the sentence. The calculation formulas are as follows:

head_i = Attention(Q W_i^Q, K W_i^K, V W_i^V)    (4.1)

MultiHead(Q, K, V) = Concat(head_1, ..., head_h) W^O    (4.2)

In the formulas, W denotes a weight matrix and Concat() denotes the concatenation operation.

4.3.2 Scaled Dot-Product Attention

Fig. 4.2 The overall structure of scaled dot-product attention

As shown in Fig. 4.2, the overall structure of scaled dot-product attention works as follows: the model first calculates the similarity between Q and K through a dot-product operation and then, to prevent the result from becoming too large, divides it by a factor √d_k, where d_k is the dimension of K. The Softmax function then normalizes the result, which is multiplied by V to obtain the attention value. This operation can be expressed as

Attention(Q, K, V) = Softmax(QK^T / √d_k) V    (4.3)

When Q = K = V, it is the self-attention model.
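A minimal NumPy sketch of Eqs. (4.1)-(4.3) follows; the dimensions and the randomly initialised projection matrices are illustrative assumptions, not the configuration used in the paper.

```python
# Minimal NumPy sketch of scaled dot-product attention and its multi-head
# combination (Eqs. 4.1-4.3). Shapes and random weights are illustrative only.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    d_k = K.shape[-1]
    scores = Q @ K.swapaxes(-1, -2) / np.sqrt(d_k)   # similarity of Q and K, scaled
    return softmax(scores) @ V                       # Eq. (4.3)

def multi_head_attention(Q, K, V, h, rng):
    d_model = Q.shape[-1]
    d_k = d_model // h
    heads = []
    for _ in range(h):
        # per-head linear projections W_i^Q, W_i^K, W_i^V (randomly initialised here)
        Wq, Wk, Wv = (rng.standard_normal((d_model, d_k)) * 0.1 for _ in range(3))
        heads.append(scaled_dot_product_attention(Q @ Wq, K @ Wk, V @ Wv))  # Eq. (4.1)
    Wo = rng.standard_normal((h * d_k, d_model)) * 0.1
    return np.concatenate(heads, axis=-1) @ Wo        # Eq. (4.2)

rng = np.random.default_rng(0)
X = rng.standard_normal((6, 32))                      # 6 tokens, d_model = 32
out = multi_head_attention(X, X, X, h=4, rng=rng)     # self-attention: Q = K = V
print(out.shape)                                      # (6, 32)
```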

4.4 Model Structure This section first gives a formal definition of machine reading comprehension and then describes the structure of the machine reading comprehension model based on the multi-head attention mechanism. The model in this paper is designed based on the DuReader baseline model BiDAF and consists of an embedding layer, a coding layer, a matching layer, a modeling layer, and an output layer.

4.4.1 Task Definition

The task of reading comprehension can be expressed as a supervised learning problem. Given a set of training examples {(p_i, q_i, a_i)}_{i=1}^{n}, the aim is to learn a predictor f which takes the text p and the corresponding question q as input and outputs the answer a:

f : (p, q) → a    (4.4)

Here p = (p_1, p_2, ..., p_{l_p}) and q = (q_1, q_2, ..., q_{l_q}); l_p and l_q denote the lengths of the text and the question, respectively; p_i ∈ V (i = 1, 2, ..., l_p), q_i ∈ V (i = 1, 2, ..., l_q), and V is a predefined vocabulary.

4.4.2 Embedded Layer The main function of this layer is to map each word to a high-dimensional vector space. The DuReader baseline model uses randomly generated word vectors to obtain a fixed-length word embedding for each word. However, this method cannot effectively represent the relationships between words, so it cannot capture the global context information of the article. This paper instead uses 300-dimensional (−150 to 150) position-based word-to-word Chinese word vectors trained on Baidu Encyclopedia, processed through a two-layer highway network. As a result, the embeddings of the input text and of each word in the question are obtained: X ∈ R^(d × T) and Q ∈ R^(d × J), respectively.

4.4.3 Coding Layer This layer encodes the internal relations within the sentences of X and Q obtained by the embedding layer. In other words, the interdependence between words is encoded, yielding the context code H ∈ R^(2d × T) and the question code U ∈ R^(2d × J). The DuReader baseline model uses a bidirectional long short-term memory network (BiLSTM). Although this technology can encode articles and questions effectively to some extent, BiLSTM has a complex structure, a large amount of computation, and time-consuming training on tasks with large amounts of data; it also limits the model's access to long-distance context information and cannot be parallelized. Replacing BiLSTM with the multi-head attention mechanism can extract inter-word dependency information in sentences, such as common phrases and pronouns. In addition, it can calculate dependency relationships directly regardless of the distance between words and learn the internal structure of a sentence. More importantly, the multi-head attention mechanism does not depend on the computation of the previous time step, so it can be parallelized well.


4.4.4 Matching Layer

The input of this layer is the text representation matrix H and the question representation matrix U obtained from the coding layer. The query-aware context representation is obtained using the bidirectional attention mechanism proposed by the BiDAF model. Let h_i be the vector of context word i, u_j the vector of question word j, and n_q and n_c the lengths of the question and context, respectively. First, the attention between context word i and question word j is calculated as

a_ij = w_1 · h_i + w_2 · u_j + w_3 · (h_i ∘ u_j)    (4.5)

where w_1, w_2, w_3 are trainable weight vectors and ∘ denotes the element-wise product. Next, the context-to-question vector c_i is computed from the a_ij above and the question words:

p_ij = e^{a_ij} / Σ_{j=1}^{n_q} e^{a_ij}    (4.6)

c_i = Σ_{j=1}^{n_q} p_ij · u_j    (4.7)

In addition, the question-to-context vector u^c is computed with the following formulas:

m_i = max_{1 ≤ j ≤ n_q} a_ij    (4.8)

p_i = e^{m_i} / Σ_{i=1}^{n_c} e^{m_i}    (4.9)

u^c = Σ_{i=1}^{n_c} p_i · h_i    (4.10)

Finally, h_i, c_i, h_i ∘ c_i, and u^c ∘ c_i are concatenated and passed to a linear layer with the ReLU activation function, giving the layer's final output g_i:

g_i = ReLU(concat(h_i, c_i, h_i ∘ c_i, u^c ∘ c_i))    (4.11)
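The following NumPy sketch illustrates Eqs. (4.5)-(4.11); the weights are random placeholders and the shapes are arbitrary, so it is a sketch of the mechanism rather than the paper's implementation.

```python
# Minimal NumPy sketch of the bidirectional attention in Eqs. (4.5)-(4.11).
# H: context vectors (n_c x d), U: question vectors (n_q x d).
import numpy as np

def bidaf_attention(H, U, rng):
    n_c, d = H.shape
    w1, w2, w3 = (rng.standard_normal(d) * 0.1 for _ in range(3))

    # Eq. (4.5): similarity a_ij between context word i and question word j
    A = (H @ w1)[:, None] + (U @ w2)[None, :] + (H * w3) @ U.T

    # Eqs. (4.6)-(4.7): context-to-question attention
    P = np.exp(A) / np.exp(A).sum(axis=1, keepdims=True)
    C = P @ U                                   # c_i for each context word

    # Eqs. (4.8)-(4.10): question-to-context attention
    m = A.max(axis=1)
    p = np.exp(m) / np.exp(m).sum()
    u_c = p @ H                                 # single summary vector

    # Eq. (4.11): concatenate and apply a ReLU linear layer
    G = np.concatenate([H, C, H * C, u_c * C], axis=1)   # shape (n_c, 4d)
    W = rng.standard_normal((4 * d, d)) * 0.1
    return np.maximum(G @ W, 0.0)

rng = np.random.default_rng(0)
out = bidaf_attention(rng.standard_normal((7, 16)), rng.standard_normal((4, 16)), rng)
print(out.shape)  # (7, 16)
```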


4.4.5 Modeling Layer The main purpose of this layer is to capture the interrelationships of the context words conditioned on the question. The DuReader baseline model uses BiLSTM here; its shortcomings were described for the coding layer and are not repeated. As in the coding layer, this paper uses the multi-head attention mechanism to replace BiLSTM, since it captures the question-related context information of the text well and further shortens the training time of the model.

4.4.6 Output Layer The machine reading comprehension task requires the model to find a sentence or some sub-span in the paragraph to answer the question; this span is obtained by predicting its start and end position indexes. For this purpose, this paper uses the pointer network commonly adopted in previous work.
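As an illustration of span prediction by start/end indexes (a simplified stand-in for the pointer network, not the paper's implementation), the sketch below scores every position with two linear heads and picks the most probable valid span.

```python
# Simplified span prediction: two linear heads score every position as the
# start or end of the answer. Weights are random placeholders.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def predict_span(G, rng):
    """G: modeling-layer output, shape (n_c, d). Returns (start, end) indices."""
    d = G.shape[1]
    w_start = rng.standard_normal(d) * 0.1
    w_end = rng.standard_normal(d) * 0.1
    p_start, p_end = softmax(G @ w_start), softmax(G @ w_end)
    # pick the start/end pair with the highest joint probability, with end >= start
    best, span = -1.0, (0, 0)
    for i in range(len(p_start)):
        for j in range(i, len(p_end)):
            if p_start[i] * p_end[j] > best:
                best, span = p_start[i] * p_end[j], (i, j)
    return span

rng = np.random.default_rng(0)
print(predict_span(rng.standard_normal((10, 16)), rng))
```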

4.5 Experiments and Results In this section, ROUGE-L and BLEU-4 are used as the evaluation indicators of the model. In addition, we evaluate our model on the two datasets of DuReader2.0, Search, and Zhidao, from the aspects of experimental environment introduction, parameter setting, comparative experiment, and so on.

4.5.1 Evaluation Criteria

ROUGE-L: the principle is to examine the precision and recall of the longest common subsequence (LCS) of the candidate answer and the reference answer, as the following formula shows:

ROUGE-L = (1 + β^2) R_LCS P_LCS / (R_LCS + β^2 P_LCS)    (4.12)


R_LCS is the recall of the longest common subsequence with respect to the reference answer, P_LCS is the precision of the longest common subsequence with respect to the candidate answer, and β is the ratio of precision to recall. In the formula,

R_LCS = LCS(X, Y) / m    (4.13)

P_LCS = LCS(X, Y) / n    (4.14)
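A minimal sketch of Eqs. (4.12)-(4.14), with naive whitespace tokenisation (illustration only; the official DuReader evaluation script differs in detail):

```python
# ROUGE-L via the longest common subsequence, Eqs. (4.12)-(4.14).
def lcs_length(x, y):
    """Classic dynamic-programming longest-common-subsequence length."""
    dp = [[0] * (len(y) + 1) for _ in range(len(x) + 1)]
    for i, xi in enumerate(x, 1):
        for j, yj in enumerate(y, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if xi == yj else max(dp[i - 1][j], dp[i][j - 1])
    return dp[-1][-1]

def rouge_l(reference, candidate, beta=1.2):
    x, y = reference.split(), candidate.split()
    lcs = lcs_length(x, y)
    if lcs == 0:
        return 0.0
    r, p = lcs / len(x), lcs / len(y)
    return (1 + beta ** 2) * r * p / (r + beta ** 2 * p)

print(rouge_l("the taxi waits at the airport", "the taxi waits near the airport"))
```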

where X is the reference text of length m and Y is the generated text of length n.

BLEU (bilingual evaluation understudy) is an N-gram-based matching rule; the larger the BLEU-4 value, the closer the semantics of the two strings. The BLEU formula is

BLEU = f · exp( Σ_{n=1}^{N} W_n lg P_n )    (4.15)

with f = 1 if l_c > l_r, and f = e^{(1 - l_r / l_c)} if l_c ≤ l_r.

In the formula, N = 4, wn is the weight value of each order, f is the penalty factor, pn is the precision of order n, lc and lr represent the length of two strings c and r , respectively.
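A similarly minimal sketch of Eq. (4.15), using uniform weights W_n = 1/4, modified n-gram precision, and natural logarithms (the chapter writes lg); it is an illustration, not the official scorer:

```python
# BLEU-4 with uniform weights and the brevity penalty f from Eq. (4.15).
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu4(reference, candidate):
    ref, cand = reference.split(), candidate.split()
    log_sum = 0.0
    for n in range(1, 5):
        cand_ngrams, ref_ngrams = ngrams(cand, n), ngrams(ref, n)
        overlap = sum(min(c, ref_ngrams[g]) for g, c in cand_ngrams.items())
        total = max(sum(cand_ngrams.values()), 1)
        p_n = max(overlap / total, 1e-9)      # smoothed to avoid log(0)
        log_sum += 0.25 * math.log(p_n)
    lc, lr = len(cand), len(ref)
    f = 1.0 if lc > lr else math.exp(1 - lr / max(lc, 1))  # brevity penalty
    return f * math.exp(log_sum)

print(bleu4("the taxi waits at the airport storage pool", "the taxi waits at the airport"))
```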

4.5.2 Dataset

In this paper, experiments are carried out on the DuReader 2.0 dataset. The statistics of the experimental data are shown in Table 4.1, and the composition is shown in Table 4.2. The model is trained with the full training set and verification set of Search and Zhidao, respectively. In other words, an epoch is trained on the training set, the model is evaluated with the verification set, and the model with the highest ROUGE-L score is saved. The test set is read into the currently saved best model to obtain the answer file, and finally, after training three epochs, the file is submitted to the official evaluation platform of DuReader. As a result, the corresponding ROUGE-L and BLEU-4 scores are obtained.

Table 4.1 Data statistics
           Question     Document       Answer
Amount     301,574      1,431,429      665,723
Avg len    26 (char)    1793 (char)    299 (char)

Table 4.2 Data composition
         Train      Dev      Test
All      300,000    20,000   120,000
Search   150,000    10,000   60,000
Zhidao   150,000    10,000   60,000

4.5.3 Experiments and Results

Table 4.3 shows the experimental environment of this paper. We verify the feasibility and efficiency of the proposed machine reading comprehension model from several aspects, such as the experimental results and the time it takes to train an epoch. This paper completely adopts the parameter settings of the baseline model in order to compare this model with the DuReader baseline model objectively. Finally, this paper uses a popular pre-trained word vector for the word-embedding part, specifically the position-based word-to-word 300-dimensional (−150 to 150) Chinese word vectors trained on Baidu Encyclopedia. Tables 4.4 and 4.5 give the experimental results of this model.

Table 4.3 Experimental environment
Name              Version
GPU               GeForce GTX 1080 Ti
CPU               Intel(R) Xeon(R) CPU E5-2628 v3
RAM               32 GB
ROM               1 T
OS                Ubuntu 16.04
TensorFlow-GPU    1.14
CUDA              10.2
Python            3.6

Table 4.4 Experimental results on Search dataset
Model               ROUGE-L    BLEU-4    TIME
Baseline (BiDAF)    0.3867     0.3494    ~8 h 30
Baseline (MLSTM)    0.3861     0.3542    ~13 h
OUR                 0.3976     0.3566    ~3 h 20

Table 4.5 Experimental results on Zhidao dataset
Model               ROUGE-L    BLEU-4    TIME
Baseline (BiDAF)    0.4747     0.4276    ~8 h 30
Baseline (MLSTM)    0.4729     0.4257    ~13 h
OUR                 0.4861     0.4363    ~3 h 20

As can be seen from the two tables, compared with the DuReader baseline model, this model shows a significant improvement in the ROUGE-L indicator and a slight improvement in the BLEU-4 indicator. Meanwhile, compared with the DuReader baseline model, this model greatly reduces the time spent on training an epoch and effectively saves time cost. This shows that: (1) the multi-head attention mechanism not only captures the global context information of the document better but also costs less time; (2) the use of pre-trained word vectors can significantly improve the overall performance of the model.

4.6 Conclusion In view of the high time overhead and low question-answering accuracy of the BiDAF coding layer and modeling layer in the DuReader baseline model, this paper proposes a machine reading comprehension model based on a multi-head attention mechanism. The model replaces the BiLSTM technology used in the coding layer and modeling layer of the DuReader baseline model with a multi-head attention mechanism, which addresses the high time overhead of the baseline model. In addition, pre-trained word vectors are used for word embedding instead of randomly generated word vectors, which addresses the baseline model's low accuracy in answering questions. Compared with the DuReader baseline model, this model shows a significant improvement in the ROUGE-L index and a slight improvement in the BLEU-4 index. Meanwhile, the time spent on training an epoch is greatly reduced, effectively saving time cost. This proves that the model is excellent both in time overhead and in the accuracy with which it answers questions. There is still a certain gap between the model of this paper and the current best model on the DuReader 2.0 task. The reasons are as follows: this model does not carry out model fusion and parameter optimization; in addition, many researchers currently adopt pre-trained language models such as BERT or ELMo to further improve overall performance [15]. We have also tried this approach, but on a large-scale and complex dataset like DuReader 2.0 the improvement is not obvious, and the demand on experimental equipment is very high. In view of the limitations of time and experimental conditions, this paper does not pursue pre-trained language models further. However, we intend to continue following up and hope to make a breakthrough in this direction in future work, further upgrading this model toward the current best models.


References

1. Eason, G., Noble, B., Sneddon, I.N.: On certain integrals of Lipschitz-Hankel type involving products of Bessel functions. Phil. Trans. Roy. Soc. London A247, 529–551 (1955)
2. Clerk Maxwell, J.: A Treatise on Electricity and Magnetism, 3rd edn., vol. 2. Clarendon, Oxford, pp. 68–73 (1982)
3. Jacobs, I.S., Bean, C.P.: Fine particles, thin films and exchange anisotropy. In: Rado, G.T., Suhl, H. (eds.) Magnetism, vol. III. Academic, New York, pp. 271–350 (1963)
4. Elissa, K.: Title of paper if known (unpublished)
5. Nicole, R.: Title of paper with only first word capitalized. J. Name Stand. Abbrev. (in press)
6. Yorozu, Y., Hirano, M., Oka, K., Tagawa, Y.: Electron spectroscopy studies on magneto-optical media and plastic substrate interface. IEEE Transl. J. Magn. Japan 2, 740–741, August 1987 [Digests 9th Annual Conf. Magnetics Japan, p. 301, 1982] (1987)
7. Young, M.: The Technical Writer's Handbook. University Science, Mill Valley, CA (1989)
8. Rajpurkar, P., Zhang, J., Lopyrev, K., et al.: SQuAD: 100,000+ questions for machine comprehension of text. In: Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pp. 2383–2392. ACL, Stroudsburg (2016)
9. Wang, S., Jiang, J.: Machine comprehension using match-LSTM and answer pointer. arXiv preprint arXiv:1608.07905 (2016)
10. Seo, M., Kembhavi, A., Farhadi, A., et al.: Bidirectional attention flow for machine comprehension. arXiv preprint arXiv:1611.01603 (2016)
11. Wang, W., Yang, N., Wei, F., et al.: Gated self-matching networks for reading comprehension and question answering. In: Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, vol. 1, pp. 189–198 (2017)
12. Yu, A.W., Dohan, D., Luong, M.T., et al.: QANet: combining local convolution with global self-attention for reading comprehension. arXiv preprint arXiv:1804.09541 (2018)
13. Devlin, J., Chang, M.W., Lee, K., et al.: BERT: pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, vol. 1, pp. 4171–4186 (2019)
14. He, W., Liu, K., Liu, J., et al.: DuReader: a Chinese machine reading comprehension dataset from real-world applications. ACL 2018, 37 (2018)
15. Vaswani, A., Shazeer, N., Parmar, N., et al.: Attention is all you need. Adv. Neural Inf. Process. Syst. 5998–6008 (2017)

Chapter 5

Design of Military Physical Fitness Evaluation System Based on Big Data Clustering Algorithm Dong Xia, Rui Ma, Ying Wu, and Ying Ma

Abstract The existing military physical fitness evaluation systems have the defect of low accuracy and cannot meet the needs of today's military field. Therefore, this paper presents the design of a military physical fitness evaluation system based on a big data clustering algorithm. The hardware consists of five units: the input unit, controller, arithmetic unit, memory, and output unit; the software consists of three modules: the construction module of the military physical fitness evaluation index system, the application module of the big data clustering algorithm, and the database building module. Through the design of the hardware units and software modules, the operation of the military physical fitness evaluation system is realized. The experimental results show that, compared with the existing system, the designed system achieves higher accuracy of military physical fitness evaluation results, which fully confirms its effectiveness and feasibility.

5.1 Introduction In 2020, the outbreak of the epidemic made us realize the importance of a strong country, and a strong country is inseparable from a strong military. With the development of weapons and equipment, the progress of science and technology, and the adjustment of national policies and regulations, the environment that an individual soldier faces in a task becomes more and more complex. War and military tasks place a variety of demands on soldiers, and the premise of completing all tasks is strong physical quality [1]. As weapons and equipment have developed from cold weapons to hot weapons, what decides a soldier's victory has shifted from raw strength to specialized physical fitness. With the continuous adjustment of the military service law, the shortening of the service period (from three to two years) requires soldiers to complete the transition from physical reserve to physical output in a shorter time.

Our army has undergone five general disarmaments from the early days of its founding to the present, which requires a smaller number of soldiers to accomplish more tasks. In 2016, the seven military regions were reorganized into five theater commands. Joint operations of the various services and arms require soldiers not only to complete their own tasks, but also to coordinate and cooperate with various auxiliary tasks within the theater. Facing an increasingly complex and changeable battlefield environment, it is very important for soldiers to be ready for battle at any time. Physical fitness is the foundation of military sports and the mother of a strong army; good physical fitness allows soldiers to achieve twice the result with half the effort on the battlefield [2]. In the development of military physical fitness, the existing physical training system has gradually become unable to meet the physical demands that the increasingly complex task environment places on soldiers. Frequent injuries, low returns, and a single means of operation have become problems that need to be solved in physical training [3].

In selecting the experimental subjects, the scientific training needs of recruits and veterans in the same period were analyzed. In addition to physique subjects, the physical fitness assessment indexes of recruits are: horizontal bar pull-up, parallel bar arm flexion and extension, sit-ups, 3000 m running, basic physical fitness combination 1, and basic physical fitness combination 2. The physical assessment indexes of veterans are: push-ups, sit-ups, 3000 m running, and 30 m × 2 snake running. In the setting of assessment subjects, the physical fitness requirements of military physical training subjects are more detailed, showing a more refined need for military physical training. In terms of training tasks, the main task of recruits' physical training is adaptive training plus tempering the physical foundation, with the physical foundation as the focus, whereas the main task of veterans' physical training is tempering the physical foundation plus special physical development, with special physical development as the focus; on this basis, recruits' military physical training shows greater demand. In terms of training time, the purpose of recruits' training is to complete the transformation from an ordinary young man to a qualified armed police soldier within 3–5 months, whereas veterans can gradually improve their physical training level over the remaining service period until they can complete more difficult tasks, and usually undertake basic tasks while their physical level is not yet high. Accordingly, recruits, with tight schedules and heavy tasks, have a greater need for physical training. In terms of training injuries, the injury probability of recruits is higher than that of veterans. In terms of training equipment and venues, veterans are mostly stationed in government compounds, prisons, and similar places, and some units do not have complete standard training venues, whereas recruits are located in training bases with more professional facilities and stronger operability. To sum up, recruits' physical training shows a greater demand for scientific military physical training, so this study mainly selects recruits' physical training as the experimental object [4].
According to existing research results, due to the large data volume and the complexity of big data processing, existing military physical fitness assessment and evaluation systems have the defect of poor accuracy and cannot meet the needs of today's military field. Therefore, this paper proposes the design of a military physical fitness assessment and evaluation system based on a big data clustering algorithm. The big data clustering algorithm can process data in parallel, which can greatly shorten the data processing time and improve the accuracy of physical fitness assessment.

5.2 Hardware Unit Design of Military Physical Fitness Evaluation System The hardware of the military physical fitness assessment and evaluation system is mainly a computer device composed of five units: the input unit, controller, arithmetic unit, memory, and output unit, each of which bears different functions. The specific design process is as follows. An input device is a device that inputs information into the computer [5]. It is an important human–computer interface, responsible for converting the input information (including data and instructions) into binary code that the computer can recognize and sending it to the memory for storage. The controller is responsible for taking instructions out of the memory and decoding them; according to the requirements of the instructions and following the time sequence, it sends control signals to the other components to ensure that all components work in coordination and complete the various operations step by step. The controller is mainly composed of the instruction register, decoder, program counter, and operation controller. The core of the hardware system is the central processing unit (CPU). It is mainly composed of the controller, the arithmetic unit, and so on, and is fabricated as a large-scale integrated circuit chip, also known as a microprocessor chip [6]. The logic unit is also called the arithmetic unit. It is the part of the computer that processes data, covering arithmetic operations (addition, subtraction, multiplication, division, etc.) and logic operations (AND, OR, NOT, XOR, comparison, etc.). Memory is the part of the computer that stores data permanently or temporarily. All information in the computer, including the original input data, the intermediate data after preliminary processing, and the useful information after the final processing, is stored in memory. Moreover, the various programs that direct the computer's operation, that is, the series of instructions that stipulate how to process the input data, are also stored in memory. Memory is divided into two types: internal memory (main memory) and external memory [7]. An output device is a device that outputs the results of computer processing; in most cases, it transforms these results into a form that is easy for people to recognize [8]. In this paper, the structure of the CPU controller and the arithmetic unit is analyzed in detail. The schematic diagram is shown in Fig. 5.1.


Fig. 5.1 Structure diagram of CPU components: (1) schematic diagram of the controller structure; (2) structure diagram of the arithmetic unit

The above process completes the design and selection of the hardware units of the system, but the hardware alone cannot carry out the military physical fitness evaluation, so the software modules of the system are designed on the basis of the hardware units.

5.3 Software Module Design of Military Physical Fitness Evaluation System The software modules of the system are the construction module of the military physical fitness evaluation index system, the application module of the big data clustering algorithm, and the database establishment module. The specific design process is as follows.


5.3.1 Construction Module of Military Physical Fitness Evaluation Index System With the evolution of the form of war, the military physical fitness evaluation index system of our army has been continuously innovated, and the status of military physical fitness training in military operations has been continuously confirmed. From the first formal military physical exercise standard issued by our army in 1994, to the military physical fitness standard implemented in 2006, and then to the current military physical training and assessment outline, the military physical fitness assessment system has evolved from a single basic physical fitness assessment to assessments differentiated by personnel, arms, and combat types. It includes basic physical fitness items, special physical fitness subjects, and expansion physical fitness items. The refinement of the military physical fitness assessment indicators also points out the direction for the military to carry out physical fitness training [9].

The current military physical fitness assessment system is an assessment outline issued in 2018 for all personnel in the army, named the "military sports training and assessment outline," which mainly stipulates five parts: assessment items, assessment standards, assessment rules and test methods, reference personnel, and assessment equipment and venues. The assessment items are divided into three categories: basic physical fitness items, special physical fitness subjects, and expansion physical fitness items. Each physical fitness assessment item is marked with an achievement standard according to age level. The lowest age bracket is under 25 years old, followed in order by 25–29, 30–34, 35–39, 40–44, 45–49, 50–54, and 55–59 years old, that is, one level for every 5 years. Personnel under the age of 60 need to participate in all subjects of the basic physical fitness examination and some subjects of the special physical fitness examination, while expansion physical fitness items are not included in the scope of examination (but are applicable to everyone in the whole army) [10].

Military physical training should be arranged throughout the year, for no less than 240 h a year and, in principle, one hour a day. The general examination of physical training is conducted once a year and is usually organized by the corresponding or superior business departments. The authority for organizing the general examination of personnel of division-level (including division) departments (branches) and units shall be in accordance with the provisions of the outline of military physical training and assessment; the general examination of personnel of units above corps level, colleges, and scientific research institutions shall be organized at the corresponding level, and physical training sampling examinations shall be organized by the superior. The results of military physical training, whether for individuals, units, or recruits, are evaluated on a four-level scale, with grades of excellent, good, pass, and fail [11].


In order to ensure the accuracy of military assessment results, this paper expounds the general physical fitness standards of the five major services and arms, as follows.

First are the basic physical fitness items. Assessees: the basic physical fitness items of military physical fitness cover all soldiers under the age of 60, including enlisted soldiers, sergeants, officers, and personnel in any other position [12]. Assessment items: there are five assessment subjects in the military physical fitness assessment of veterans, including physique; the subjects that can be organized for training include push-ups, sit-ups, 30 m × 2 snake running, and 3000 m running. In this study, the recruit training subjects form the basis of soldiers' physical training over the whole service cycle, which differs slightly from the above subjects. The subjects of the military physical fitness assessment for recruits are: body shape, horizontal bar pull-up (horizontal bar flexed-arm hang for female soldiers), parallel bar arm flexion and extension (parallel bar forward support for female soldiers), sit-ups, 3000 m running, basic physical fitness combination 1 (prone bridge + T-type running), and basic physical fitness combination 2 (back bridge + 30 m × 2 shuttle running). Among them, the basic physical fitness combinations are assessed by sampling test, that is, each soldier participates in the assessment of six subjects including body shape [13]. Assessment methods: the military physical fitness assessment of recruits adopts four assessment grades, namely excellent, good, pass, and fail, and male and female soldiers are assessed separately. The horizontal bar pull-up (flexed-arm hang for female soldiers) and parallel bar arm flexion and extension (forward support for female soldiers) are strength assessment items. Sit-ups are tested by completing a specified number of repetitions within two minutes. The 3000 m run is assessed by the shortest time taken to complete the distance. Basic physical fitness combinations 1 and 2 are assessed as standard and timed items.

Second are the special physical fitness subjects. Special physical fitness subjects are training subjects that new recruits are exposed to. They are mainly set up according to the classification of the people's armed police force, the air force, the navy, the rocket force, and the strategic support force. In the outline of military sports training and assessment, corresponding professional physical training subjects are set up according to the combat tasks of the various services and arms, to provide physical support for soldiers to complete specific military tasks in combat. The special physical fitness subjects are carried out on the premise that the basic physical fitness items have developed to a certain level; only by building a high special level on top of a high basic physical fitness level can one become an excellent soldier [14]. The main tasks of the Chinese people's armed police force are to deal with conflicts, maintain stability, and safeguard the internal security and stability of the country. The main fighting places are streets and houses in cities or villages, and the targets are serious violent criminals, organizers of events that seriously hinder public security, underworld organizations, terrorist organizations, and ringleaders who use ethnicity and religion to split the motherland.


According to these combat characteristics and targets, the special physical fitness subjects of the armed police force mainly include the catching fist, the emergency baton, the 400 m obstacle course, and so on. The PLA ground force is one of the longest-established services of our army and has always been at the core of operational development. Under modern conditions, the army's operational styles mainly include mobile attack and mobile defense operations, positional attack and positional defense operations, landing and anti-landing operations, urban attack and urban defense operations, mountain attack and mountain defense operations, border defense and counter-attack operations, airborne and anti-airborne operations, blockade and anti-blockade operations, anti-air raid operations, special operations and anti-terrorist operations, and so on. The key points of army operations are large-scale landing operations and anti-air attack operations. According to the characteristics of its military tasks, the items of the army special physical examination are military boxing, assassination exercises, obstacle surmounting, and so on [15].

Third are the expansion physical fitness items. There is no classification by services and arms in the setting of the expansion items of military physical fitness training, and all soldiers can carry them out according to the equipment available in the field. The main events are ball games, track and field, gymnastics, water sports, martial arts, ice and snow sports, and others. No assessment is set for the expansion physical fitness items, which are organized flexibly by the various services and arms.

5.3.2 Application Module of Big Data Clustering Algorithm Because a large amount of data is generated in the process of military physical fitness assessment, which places higher requirements on the system's computing ability, a big data clustering algorithm is applied to cluster the data, which facilitates the calculation and generation of the subsequent evaluation results. According to the requirements of the designed system, the K-means clustering algorithm is selected as the main means of data processing [16]. K-means is a clustering algorithm based on centroid technology. The K-means clustering algorithm was proposed by Steinhaus, Lloyd, Ball and Hall, and MacQueen in 1955, 1957, 1965, and 1967, respectively, and has since been applied to business intelligence, marketing, information retrieval, biology, and other fields. K-means is the most popular partition-based clustering algorithm. As the simplest and most basic step of cluster analysis, "partitioning" divides a set of unknown objects into multiple mutually exclusive groups or clusters, so that the objects in the same group or cluster have high similarity, while objects in different groups or clusters differ markedly [17]. Judging the similarity and dissimilarity between an object with unknown attributes and each group or cluster usually involves the calculation of distances, on the basis of which the partition is made.


Suppose a dataset D contains n unknown objects. First, the number k of groups (clusters) needs to be determined. Then the n objects are assigned to k groups (clusters) $C_1, C_2, \ldots, C_k$ by the partition method, such that the groups are mutually exclusive: $C_i \subset D$ and $C_i \cap C_j = \emptyset$ for $1 \le i, j \le k$, $i \ne j$. Finally, an objective function needs to be determined to achieve high similarity within groups (clusters) and high dissimilarity between groups (clusters). In addition, the K-means algorithm uses a partitioning technique based on centroids. The centroid is the center of each group (cluster) and can be defined in many ways, for example as the mean value or the center point of the objects assigned to the group (cluster); the centroid represents the group (cluster) and is recorded as $\mu_i$. For an object $p \in C_i$, the difference between the object and the representative $\mu_i$ of the group (cluster) is measured by $dist(p, \mu_i)$, where $dist(x, y)$ is the Euclidean distance between x and y. The quality of group (cluster) $C_i$ can be measured by the intra-group variation, which is the sum of the squared errors between all objects in $C_i$ and the centroid $\mu_i$:

$$E = \sum_{i=1}^{k} \sum_{p \in C_i} dist(p, \mu_i)^2 \qquad (5.1)$$

In Eq. (5.1), E is the sum of the squared errors of all objects, p is a point in the space representing a given data object, and $\mu_i$ is the centroid (center) of group (cluster) $C_i$. The main goal of the formula is to compute the sum of the squared distances between every object in each group (cluster) and the center of that group (cluster). In the K-means algorithm, E is negatively correlated with k and tends to decrease as k increases. The results calculated with this objective function therefore make each group (cluster) as compact as possible and each cluster as independent as possible. However, the K-means algorithm is not guaranteed to converge to the global optimal solution; according to experimental results, the algorithm often ends at a local optimum, and the result is closely related to the selection of the initial group (cluster) centers [18]. Therefore, in order to make the experimental results as accurate as possible, it is usually necessary to select different initial group (cluster) centers, run the K-means algorithm several times, and compare the results [19].
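As an illustration of Eq. (5.1), the following minimal sketch (not the authors' implementation; the use of scikit-learn's KMeans and the synthetic score matrix are assumptions) clusters a set of assessment records and computes the within-cluster sum of squared errors E.

```python
# Minimal sketch: K-means clustering of assessment records plus the
# within-cluster sum of squared errors E from Eq. (5.1).
import numpy as np
from sklearn.cluster import KMeans

def cluster_fitness_scores(X, k=4, seed=0):
    """Cluster an (n, d) score matrix X into k groups; return labels, centroids, E."""
    km = KMeans(n_clusters=k, n_init=10, random_state=seed)
    labels = km.fit_predict(X)
    centroids = km.cluster_centers_
    # E = sum over clusters of squared Euclidean distances to the centroid (Eq. 5.1)
    E = sum(np.sum((X[labels == i] - centroids[i]) ** 2) for i in range(k))
    return labels, centroids, E

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 6))   # hypothetical: 6 assessment indexes per soldier
    labels, centroids, E = cluster_fitness_scores(X, k=4)
    print("within-cluster SSE E =", round(E, 2))
```

Running the algorithm with several different seeds and comparing the resulting E values corresponds to the repeated-run comparison recommended above.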

5.3.3 Database Building Module The database is the foundation of system operation. The database of the designed system is mainly presented in the form of tables. Due to space limitations, this study only displays some of the database tables, as follows. The data object table is shown in Table 5.1.

Table 5.1 Data object table

Serial number | Data name | Data meaning
1 | Name | Full name
2 | Sex | Gender
3 | Status | State
4 | Age | Age
5 | Birthday | Date of birth
6 | Part | Work unit
7 | Notes | Remarks
8 | IsQualified | Qualified or not
9 | Time | Assessment time
10 | TotalScore | Total score
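To make the database building module concrete, the following sketch creates a table following the fields of Table 5.1 with Python's built-in sqlite3 module; the column names, types, and database file name are assumptions, since the paper does not give the actual schema definition.

```python
# Minimal sketch (assumed schema, following Table 5.1): creating and filling
# the data object table with Python's built-in sqlite3 module.
import sqlite3

DDL = """
CREATE TABLE IF NOT EXISTS data_object (
    serial_number INTEGER PRIMARY KEY,  -- serial number
    name          TEXT,                 -- full name
    sex           TEXT,                 -- gender
    status        TEXT,                 -- state
    age           INTEGER,              -- age
    birthday      TEXT,                 -- date of birth
    part          TEXT,                 -- work unit
    notes         TEXT,                 -- remarks
    is_qualified  INTEGER,              -- qualified or not (0/1)
    time          TEXT,                 -- assessment time
    total_score   REAL                  -- total score
);
"""

def init_db(path="fitness_evaluation.db"):
    conn = sqlite3.connect(path)
    conn.execute(DDL)
    conn.commit()
    return conn

if __name__ == "__main__":
    conn = init_db()
    conn.execute(
        "INSERT INTO data_object (name, sex, age, is_qualified, time, total_score) "
        "VALUES (?, ?, ?, ?, ?, ?)",
        ("Zhang San", "M", 20, 1, "2021-06-01", 87.5),   # hypothetical record
    )
    conn.commit()
```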

Through the design of the above hardware unit and software module, the operation of the military physical fitness assessment and evaluation system is realized, which provides more accurate data support for military physical fitness exercise and assessment [20].

5.4 Experiment and Result Analysis In order to verify the application performance difference between the design system and the existing system, the MATLAB simulation platform is used to design the experiment.

5.4.1 Construction of Experimental Environment In order to successfully apply the big data clustering algorithm, a Spark distributed computing platform environment is built, which consists of six nodes: one master node and five slave nodes. The distributed cluster software environment is shown in Table 5.2.

Table 5.2 Distributed cluster software environment

Application | Edition | Application | Edition
Linux | CentOS 6.7 | Hadoop | Hadoop-2.7.4
JDK | jdk-8u91-linux | Spark | Spark-2.2.0-bin-hadoop2.7
Zookeeper | Zookeeper-3.4.9 | Accumulo | Accumulo-1.8.1
IntelliJ IDEA | IntelliJ IDEA 2017.2.6 | GeoMesa | GeoMesa-1.3.3


Among them, the network configuration of the cluster is very important. Because data transmission between cluster nodes relies on the network, the network configuration must be accurate. If you are not familiar with the firewall, you can simply turn it off to prevent unnecessary trouble; if you are familiar with the firewall rules, you can configure the firewall ports according to your own needs. By modifying the hosts configuration, the six host nodes are set as master, slave 1, slave 2, slave 3, slave 4, and slave 5, and the corresponding IP addresses are set. Hosts cannot access each other freely; they also need permissions. Because the name node of Hadoop manages the daemons on the data nodes via SSH, a password-free SSH access mechanism needs to be configured between the machines in the cluster, so that password-free mutual access can be carried out within the cluster.

5.4.2 Experimental Data Analysis In the face of a large amount of data, because of the constraints of storage and computing power, the data cannot be successfully clustered in one pass, or clustering them in one pass takes a long time. In order to solve this problem, this paper proposes a classification preprocessing scheme to preprocess the clustering data. To compress the large amount of data, this paper proposes to concentrate a block of d-dimensional data onto a single point, so that the data volume can be greatly simplified, reduced to one-fifth, one-tenth, or even less of the original. According to the quantity and quality of the data, the clustering parameters can be fine-tuned in the experiment. Taking two-dimensional plane data points as an example, the experimental data processing diagram is shown in Fig. 5.2.
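The following sketch is one possible reading of the preprocessing described above: it replaces each block of d-dimensional points with a single representative (mean) point. The regular grid partition and the block width are assumptions; the paper does not specify how blocks are formed.

```python
# Minimal sketch: compress d-dimensional points by replacing each grid block
# with the mean point of the data falling inside it.
import numpy as np

def block_compress(X, block_size=10.0):
    """Group the rows of an (n, d) array into axis-aligned blocks of width
    `block_size` and return one representative (mean) point per block."""
    keys = np.floor(X / block_size).astype(int)      # block index per point
    groups = {}
    for idx, key in enumerate(map(tuple, keys)):
        groups.setdefault(key, []).append(idx)
    return np.array([X[idx].mean(axis=0) for idx in groups.values()])

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.uniform(0, 100, size=(5000, 2))          # 2-D plane data points
    reps = block_compress(X, block_size=10.0)
    print(len(X), "->", len(reps), "representative points")
```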

5.4.3 Acquisition of Experimental Results According to the above experimental environment and the analysis of the experimental data, the military physical fitness evaluation experiment is carried out. The accuracy of the military physical fitness assessment results obtained through the experiment is shown in Table 5.3. As shown in Table 5.3, the accuracy of the military physical fitness assessment results ranges from 56.23 to 64.25% for the existing system and from 75.46 to 88.25% for the designed system. The comparison shows that the designed system achieves higher accuracy of military physical fitness evaluation results than the existing system, which fully proves its effectiveness and feasibility.

Fig. 5.2 Schematic diagram of experimental data processing (experimental data before processing and after processing)

Table 5.3 Accuracy data table of military physical fitness assessment results

Number of experiments | Existing system (%) | Design system (%)
1 | 56.23 | 75.46
2 | 60.12 | 84.12
3 | 64.25 | 80.23
4 | 60.00 | 85.01
5 | 59.15 | 77.45
6 | 57.45 | 79.50
7 | 59.33 | 81.41
8 | 60.03 | 80.25
9 | 61.12 | 87.46
10 | 62.45 | 88.25

5.5 Conclusion This research introduces a big data clustering algorithm to design a new military physical fitness evaluation system, which greatly improves the accuracy of the military physical fitness evaluation results, provides more accurate data support for military physical fitness exercise and evaluation, and also provides a reference for research on physical fitness evaluation.


References 1. Wang, Z.F., Liu, J.: A teaching quality evaluation system of massive open online courses based on big data analysis. Int. J. Emerg. Technol. Learn. (iJET) 14(14), 81 (2019) 2. Wang, Y.: Comprehensive evaluation system of teaching quality based on big data architecture. Int. J. Cont. Eng. Educ. Life-Long Learn. 30(1), 1 (2020) 3. Liu, Z., Wang, C.: Design of traffic emergency response system based on internet of things and data mining in emergencies. IEEE Access 7(99), 113950–113962 (2019) 4. Xia, D., Ning, F., He, W.: Research on parallel adaptive canopy-K-means clustering algorithm for big data mining based on cloud platform. J. Grid Comput. 18(2), 263–273 (2020) 5. Choi, W.W., Ahn, J.W., Shin, D.B.: Study on the development of geo-spatial big data service system based on 7V in Korea. KSCE J. Civ. Eng. 23(1), 388–399 (2019) 6. Zhao, Y., Xu, J., Wu, J.: A new method for bad data identification of oilfield system based on enhanced gravitational search-fuzzy C-means algorithm. IEEE Trans. Industr. Inf. 15(11), 5963–5970 (2019) 7. Zhao, J., Wei, S., Zhang, Q.: effective intra mode prediction of 3D-HEVC system based on big data clustering and data mining. Int. J. Perform. Eng. 15(12), 3219 (2019) 8. Yang, Z., Feng, B.: Design of key data integration system for interactive English teaching based on internet of things. Int. J. Continuing Eng. Educ. Life-Long Learn. 31(1), 53 (2021) 9. Lee, J.W., Kim, H.J., Kim, M.K.: Design of Short-Term Load Forecasting based on ANN Using Bigdata. Trans. Korean Inst. Electr. Eng. 69(6), 792–799 (2020) 10. Xing, Z., Li, G.: Intelligent classification method of remote sensing image based on big data in spark environment. Int. J. Wirel. Inf. Netw. 26(3), 183–192 (2019) 11. Al-Majidi, S.D., Abbod, M.F., Al-Raweshidy, H.S.: Design of an efficient maximum power point tracker based on ANFIS using an experimental photovoltaic system data. Electronics 8(8), 858 (2019) 12. Hu, K.Z., Jiang, M., Zhang, H.F., Cao, S.: Design of fault diagnosis algorithm for electric fan based on LSSVM and Kd-Tree. Appl. Intell. 51(6), 1–15 (2021) 13. Zhang, J.: Research on adaptive recommendation algorithm for big data mining based on Hadoop platform. Int. J. Internet Protoc. Technol. 12(4), 213–220 (2019) 14. Zhou, H.B., Chen, R., Zhou, S., Liu, Z.Z.: Design and analysis of a drive system for a series manipulator based on orthogonal-fuzzy PID control. Electronics 8(9), 1051 (2019) 15. Yung, C.: A systematic model of big data analytics for clustering browsing records into sessions based on web log data. J. Comput. 14(2), 125–133 (2019) 16. Koh, E.H., Ryu, K.S., Sung, S.: PILS design based on unity 3D for performance verification of multi-sensor integrated navigation system using 3D GIS map in urban canyon. Trans. Korean Inst. Electr. Eng. 69(7), 1117–1124 (2020) 17. Wu, J.J., Jia, D.N., Wei, Z.Q., Xin, D.: Development trends and frontiers of ocean big data research based on cite space. Water 12(6), 1560 (2020) 18. Yan, H.: Microbial control of river pollution during COVID-19 pandemic based on big data analysis. J. Intell. Fuzzy Syst. 39(1), 1–6 (2020) 19. Fan, Y.M., Liu, Y., Chen, H.S., Ma, J.L.: Data mining-based design and implementation of college physical education performance management and analysis system. Int. J. Emerg. Techn. Learn. (iJET) 14(6), 87 (2019) 20. Meng, Z.C., Tang, T., Wei, G.D., Yuan, L.: Analysis of ATO system operation scenarios based on UPPAAL and the operational design domain. Electronics 10(4), 503 (2021)

Chapter 6

ResNet-Based Multiscale U-Net for Human Parsing Luping Fan and Peng Yang

Abstract Human parsing is a subtask of semantic segmentation, which segments only the persons in a picture and ignores the background information. This technology has many application scenarios, such as pedestrian re-identification, smart home, and human–computer interaction. Due to the complexity of human semantic segmentation tasks, existing networks are not accurate enough. In view of this situation, this paper proposes a human parsing model based on deep learning to improve the accuracy of semantic segmentation of human body images. The model includes three modules. In the human body feature extraction stage, this paper proposes an encoder module based on an improved residual network, which uses the residual network to continuously downsample the human body image; the decoder module uses bilinear interpolation and channel compression to continuously upsample the human body feature map; and the low-level and high-level features of the human body produced by the encoder and decoder modules are merged through the feature fusion module. The experimental results show that the proposed model handles details better and effectively improves the accuracy.



6.1 Introduction Human parsing is a subtask of semantic segmentation. The goal is to identify the parts of a person's body or clothing accessories, so it is also known as clothing parsing: all the pixels that make up the human body are marked and classified into the corresponding categories. Human parsing has been applied in many fields, such as video surveillance [1], human behavior analysis [2, 3], and person re-identification [4–6]. Therefore, human parsing has important research significance and application value. Unlike general semantic segmentation, human parsing focuses on human-centered segmentation, which identifies areas such as the face, hair, coat, trousers, etc. Human parsing based on convolutional neural networks has made great breakthroughs, but in the face of challenges such as changing human posture, complex scenes, diverse clothing, and the network model's own inadequate learning of human semantics, problems such as discontinuous parsing areas, wrong recognition, and imprecise results easily arise, which seriously affect the parsing accuracy. To address the above problems, this paper proposes a human parsing model based on deep learning, which applies the traditional semantic segmentation model to the human body. The model is divided into three phases. In the first phase, we input the human image into the encoder module, which uses an improved residual network to continuously downsample the human image; in the second phase, the human feature map is continuously upsampled through the decoder module; and in the third phase, the low-level features and high-level features of the human body obtained from the downsampling and upsampling processes are fused through a multiscale feature fusion module.

6.2 Related Work With the great progress of deep learning, many researchers have devoted themselves to human parsing research [7–13]. Recent works apply the convolutional neural network (CNN) to learn the parsing result. U-Net [14] consists of a downsampling path and an upsampling path: the downsampling path is used to obtain the feature information of the image, and the upsampling path is used to locate the pixels. The two paths are symmetrical, namely an encoder–decoder structure. The DeepLab series [15–18] is a deep convolutional neural network (DCNN) that uses atrous convolution [15] for multiscale feature fusion. This paper proposes a novel model, which adopts the encoder–decoder structure and improves the multiscale feature fusion module of DeepLab.


6.3 Our Proposed Method We propose the ResNet-based multiscale U-Net, namely RU-Net. The overview of the proposed method is shown in Fig. 6.1. There are three modules, the encoder, the decoder, and the multiscale feature fusion module. The encoder module is responsible for downsampling the human image through the residual network, the decoder module is responsible for upsampling the encoder output through bilinear interpolation, and the multiscale feature fusion module is responsible for fusing the high-level and low-level features of the human body.

Fig. 6.1 Overview of our model. The encoder module adopts ResNet to downsample images. The decoder module adopts bilinear interpolation and a 1 × 1 conv to upsample images. The multiscale feature fusion module combines high-level and low-level features


Table 6.1 Modified ResNet

Layer name | Output size | 34-layer
conv1 | 128 × 128 | [3 × 3, 64]
conv2_x | 128 × 128 | [3 × 3, 64; 3 × 3, 64] × 3
conv3_x | 64 × 64 | [3 × 3, 128; 3 × 3, 128] × 3
conv4_x | 32 × 32 | [3 × 3, 256; 3 × 3, 256] × 3
conv5_x | 16 × 16 | [3 × 3, 512; 3 × 3, 512] × 3

6.3.1 Encoder We adopt ResNet-34 as our backbone network. However, ResNet itself is used for image classification tasks, so we modify the structure of ResNet, as shown in Table 6.1, to make it suitable for RU-Net.
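The following PyTorch sketch illustrates one way to obtain the feature-map sizes listed in Table 6.1. It is not the authors' code: the specific stem modification (a 3 × 3 convolution with stride 2 and no max-pooling, applied to a 256 × 256 input) is an assumption made only so that the spatial sizes in the table are reproduced.

```python
# Illustrative sketch of a ResNet-34 encoder with a modified stem, so that a
# 256x256 input yields the 128x128 / 64x64 / 32x32 / 16x16 maps of Table 6.1.
import torch
import torch.nn as nn
from torchvision.models import resnet34

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = resnet34()
        # Assumed stem modification: 3x3 conv, stride 2, no max-pooling.
        self.stem = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=3, stride=2, padding=1, bias=False),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),
        )
        self.layer1 = backbone.layer1   # conv2_x, keeps 128x128
        self.layer2 = backbone.layer2   # conv3_x, 64x64
        self.layer3 = backbone.layer3   # conv4_x, 32x32
        self.layer4 = backbone.layer4   # conv5_x, 16x16

    def forward(self, x):
        c1 = self.layer1(self.stem(x))
        c2 = self.layer2(c1)
        c3 = self.layer3(c2)
        c4 = self.layer4(c3)
        return c1, c2, c3, c4           # multi-scale features for the decoder

if __name__ == "__main__":
    feats = Encoder()(torch.randn(1, 3, 256, 256))
    print([tuple(f.shape) for f in feats])
```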

6.3.2 Decoder The decoder is used to restore the image size. Currently, the common upsampling methods are nearest-neighbor interpolation, bilinear interpolation, bicubic interpolation, transposed convolution, and unpooling. We adopt bilinear interpolation as our decoder method, followed by a 1 × 1 conv for channel compression. Let $P_{11} = (x_1, y_1)$, $P_{12} = (x_1, y_2)$, $P_{21} = (x_2, y_1)$, $P_{22} = (x_2, y_2)$ be four known coordinate points; bilinear interpolation needs to calculate the value at point $T$. The algorithm first performs a linear interpolation along the X-axis, as follows:

$$f(Q_1) \approx \frac{x_2 - x}{x_2 - x_1} f(P_{11}) + \frac{x - x_1}{x_2 - x_1} f(P_{21}) \qquad (6.1)$$

$$f(Q_2) \approx \frac{x_2 - x}{x_2 - x_1} f(P_{12}) + \frac{x - x_1}{x_2 - x_1} f(P_{22}) \qquad (6.2)$$

where $f$ is an unknown function and $f(P_{11})$, $f(P_{12})$, $f(P_{21})$, $f(P_{22})$ are the known values at the four points. Then the algorithm performs a linear interpolation along the Y-axis, as follows:

$$f(T) \approx \frac{y_2 - y}{y_2 - y_1} f(Q_1) + \frac{y - y_1}{y_2 - y_1} f(Q_2) \qquad (6.3)$$

where $T = (x, y)$ is the coordinate point to be evaluated and $f(T)$ is the value to be estimated.
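A minimal PyTorch sketch of one decoder step as described above, i.e. bilinear upsampling followed by a 1 × 1 convolution for channel compression; the channel sizes in the usage example are placeholders, not values taken from the paper.

```python
# Illustrative decoder step: bilinear upsampling (Eqs. 6.1-6.3 applied per
# pixel by F.interpolate) followed by a 1x1 conv that compresses channels.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DecoderBlock(nn.Module):
    def __init__(self, in_channels, out_channels, scale=2):
        super().__init__()
        self.scale = scale
        self.compress = nn.Conv2d(in_channels, out_channels, kernel_size=1)

    def forward(self, x):
        x = F.interpolate(x, scale_factor=self.scale, mode="bilinear",
                          align_corners=False)
        return self.compress(x)

if __name__ == "__main__":
    block = DecoderBlock(512, 256)
    y = block(torch.randn(1, 512, 16, 16))
    print(y.shape)   # -> torch.Size([1, 256, 32, 32])
```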

6.3.3 Multiscale Feature Fusion Existing studies [19] have shown that the use of atrous convolution in semantic segmentation can significantly increase the receptive field. Inspired by atrous convolution and DeepLabv3+ [18], we propose a multiscale feature fusion module, as shown in Fig. 6.2. The input is a feature map of size h × w × c. The first layer is a 1 × 1 conv, whose output is a feature map of size h × w × c/4. The second, third, and fourth layers are three atrous convolutional blocks with different rates. According to formula (6.4), when the padding size is the same as the atrous rate, the size of the output feature map remains the same, so all three atrous blocks output a feature map with a size of h × w × c/4.

$$o = \left\lfloor \frac{i + 2p - k - (k-1)(r-1)}{s} \right\rfloor + 1 \qquad (6.4)$$

Fig. 6.2 Multiscale feature fusion consists of four different convolutional layers. The first one is a 1 × 1 conv. The rest are three atrous convolutional layers with rate = 1, 2, 3, respectively. All layers generate a feature map with the same width and height as the input, whereas the channel dimension is 1/4 of the input. Then we concatenate the four smaller feature maps, followed by batch normalization and ReLU
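The following PyTorch sketch implements the fusion module as described in Fig. 6.2: a 1 × 1 convolution plus three 3 × 3 atrous convolutions with rates 1, 2, and 3, each producing c/4 channels, concatenated and followed by batch normalization and ReLU. It is an illustration rather than the authors' code.

```python
# Illustrative multiscale feature fusion module (1x1 conv + three atrous convs).
import torch
import torch.nn as nn

class MultiscaleFusion(nn.Module):
    def __init__(self, channels):
        super().__init__()
        c4 = channels // 4
        self.branch1 = nn.Conv2d(channels, c4, kernel_size=1)
        # padding equal to the dilation rate keeps the spatial size (formula 6.4)
        self.branch2 = nn.Conv2d(channels, c4, 3, padding=1, dilation=1)
        self.branch3 = nn.Conv2d(channels, c4, 3, padding=2, dilation=2)
        self.branch4 = nn.Conv2d(channels, c4, 3, padding=3, dilation=3)
        self.bn = nn.BatchNorm2d(4 * c4)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = torch.cat(
            [self.branch1(x), self.branch2(x), self.branch3(x), self.branch4(x)],
            dim=1)
        return self.relu(self.bn(out))

if __name__ == "__main__":
    m = MultiscaleFusion(512)
    print(m(torch.randn(1, 512, 16, 16)).shape)   # torch.Size([1, 512, 16, 16])
```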


Table 6.2 Comparisons on the LIP validation set

Methods | PA (%) | MPA (%) | MIOU (%)
SegNet | 76.63 | 40.11 | 33.29
U-Net | 78.89 | 44.38 | 35.01
DeepLabv3+ | 83.42 | 53.98 | 43.47
RU-Net | 83.69 | 55.36 | 45.31

6.4 Experiments 6.4.1 Datasets The dataset used in this paper is the Look into Person (LIP) dataset. The LIP dataset is by far the largest single human parsing dataset, with a total of 50,462 pictures and 20 categories (including the background class). There are 30,462 pictures in the training set and 10,000 pictures in each of the validation and test sets.

6.4.2 Evaluation Metrics We use mean pixel accuracy (MPA) and mean intersection over union (MIOU) to evaluate the performance of RU-Net, where $p_{ij}$ denotes the number of pixels of class $i$ predicted as class $j$ and $k+1$ is the number of classes (including the background):

$$\mathrm{MPA} = \frac{1}{k+1}\sum_{i=0}^{k}\frac{p_{ii}}{\sum_{j=0}^{k} p_{ij}} \qquad (6.5)$$

$$\mathrm{MIOU} = \frac{1}{k+1}\sum_{i=0}^{k}\frac{p_{ii}}{\sum_{j=0}^{k} p_{ij} + \sum_{j=0}^{k} p_{ji} - p_{ii}} \qquad (6.6)$$
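A small sketch computing MPA (Eq. 6.5) and MIOU (Eq. 6.6) from a confusion matrix; the 3-class matrix in the usage example is hypothetical.

```python
# Illustrative computation of MPA and MIoU from a confusion matrix whose
# entry [i, j] counts pixels of class i predicted as class j.
import numpy as np

def mpa_miou(conf):
    conf = conf.astype(float)
    diag = np.diag(conf)
    per_class_acc = diag / conf.sum(axis=1).clip(min=1)     # p_ii / sum_j p_ij
    union = conf.sum(axis=1) + conf.sum(axis=0) - diag      # row + column - diagonal
    per_class_iou = diag / union.clip(min=1)
    return per_class_acc.mean(), per_class_iou.mean()

if __name__ == "__main__":
    conf = np.array([[50, 2, 3],
                     [4, 40, 6],
                     [1, 5, 30]])          # hypothetical 3-class confusion matrix
    mpa, miou = mpa_miou(conf)
    print(f"MPA={mpa:.4f}, MIoU={miou:.4f}")
```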

6.4.3 Results We report the results of our method and other state-of-the-art methods in Table 6.2.

6.5 Conclusion In this paper, we propose a novel human parsing model. Our model uses an improved residual network to extract features and uses bilinear interpolation to restore the image resolution. Besides, we introduce a multiscale feature fusion module to aggregate low-level and high-level features. The experimental results show that the proposed method is superior to the other benchmark models on the LIP dataset.

References 1. Wang, L., Ji, X., Deng, Q., Jia, M.: Deformable part model based multiple pedestrian detection for video surveillance in crowded scenes. In: 2014 International Conference on Computer Vision Theory and Applications (VISAPP), vol. 2, pp. 599–604 (2014) 2. Gan, C., Lin, M., Yang, Y., De Melo, G., Hauptmann, A.G.: Concepts not alone: exploring pairwise relationships for zero-shot video activity recognition. In: Proceedings of the AAAI Conference on Artificial Intelligence (2016) 3. Liang, X., Lin, L., Wei, Y., Shen, X., Yang, J., Yan, S.: Proposal-free network for instance-level object segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 40(12), 2978–2991 (2017) 4. Chen, D., Zhang, S., Ouyang, W., Yang, J., Tai, Y.: Person search via a mask-guided twostream cnn model. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 734–750 (2018) 5. Kalayeh, M.M., Basaran, E., Gökmen, M., Kamasak, M.E., Shah, M.: Human semantic parsing for person re-identification. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1062–1071 (2018) 6. Song, C., Huang, Y., Ouyang, W., Wang, L.: Mask-guided contrastive attention model for person re-identification. In: 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1179–1188 (2018) 7. Chen, L.C., Yang, Y., Wang, J., Xu, W., Yuille, A.L.: Attention to scale: scale-aware semantic image segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3640–3649 (2016) 8. Gong, K., Liang, X., Zhang, D., Shen, X., Lin, L.: Look into person: self-supervised structuresensitive learning and a new benchmark for human parsing. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 932–940 (2017) 9. Liang, X., Xu, C., Shen, X., Yang, J., Liu, S., Tang, J., Lin, L., Yan, S.: Human parsing with contextualized convolutional neural network. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1386–1394 (2015) 10. Liu, S., Liang, X., Liu, L., Shen, X., Yang, J., Xu, C., Lin, L., Cao, X., Yan, S.: Matchingcnn meets knn: Quasi-parametric human parsing. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1419–1427 (2015) 11. Simo-Serra, E., Fidler, S., Moreno-Noguer, F., Urtasun, R.: A high performance crf model for clothes parsing. In: Asian Conference on Computer Vision, pp. 64–81 (2014) 12. Xia, F., Wang, P., Chen, L.C., Yuille, A.L.: Zoom better to see clearer: human part segmentation with auto zoom net. ECCV, pp. 648–663 (2016) 13. Yamaguchi, K., Kiapour, M.H., Ortiz, L.E., Berg, T.L.: Retrieving similar styles to parse clothing. IEEE Trans. Pattern Anal. Mach. Intell. 37(5), 1028–1040 (2014) 14. Ronneberger, O., Fischer, P., Brox, T.: U-Net: convolutional networks for biomedical image segmentation. In: International Conference on Medical image computing and computer-assisted intervention, pp. 234–241, Springer, Cham (2015) 15. Chen, L.C., Papandreou, G., Kokkinos, I., Murphy, K., Yuille, A.L.: Semantic Image Segmentation with Deep Convolutional Nets and Fully Connected CRFs, vol. 4, pp. 357–361 (2014) 16. Chen, L.C., Papandreou, G., Kokkinos, I., Murphy, K., Yuille, A.L.: Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. IEEE Trans. Pattern Anal. Mach. Intell. 40(4), 834–848 (2017)


17. Chen, L.C., Papandreou, G., Schroff, F., Adam, H.: Rethinking Atrous Convolution for Semantic Image Segmentation (2017) 18. Chen, L.C., Zhu, Y., Papandreou, G., Schroff, F., Adam, H.: Encoder-decoder with atrous separable convolution for semantic image segmentation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 801–818 (2018) 19. Yu, F., Koltun, V.: Multi-Scale Context Aggregation by Dilated Convolutions. ICLR (2016)

Chapter 7

Short-Term Electricity Price Forecast Based on SSA-SVM Model Zhenyu Duan and Tianyu Liu

Abstract In the electricity market environment, the electricity price has the characteristics of periodicity and unpredictability. Based on these characteristics, this paper proposes a sparrow-algorithm-optimized support vector machine (SSA-SVM) model to predict short-term electricity prices. The sparrow search algorithm (SSA), which has high convergence performance and local search ability, is used to optimize the parameters of the SVM; the SSA-SVM prediction model is established, the electricity price data are predicted and analyzed, and the results are compared with the simulation results of the SVM model. The results show that the SSA-SVM method works well, and both the error and the convergence speed of the model are improved.

7.1 The Introduction In the power market, accurate forecasts of electricity prices are of great global value. Accurate price forecast plays an important role in power producers and power market transaction decision-makers. They can adjust production plans and make some related power decisions in real time by electricity price forecast, in order to achieve the optimal allocation of resources and the maximum benefit. For the participants of the electricity market, their risk is reduced to a minimum and their interests are maximized. In this context, this paper improves the prediction accuracy by studying the historical data of electricity price, and provides a favorable scientific basis for the demand and supply of the electricity market. In terms of electricity price prediction, there are many factors affecting electricity price, such as factors inside the power market, factors outside the power market, and other factors such as natural disasters, which will have an impact on the electricity price. Therefore, the price of electricity has a strong fluctuation, making the price of Z. Duan (B) · T. Liu School of Electrical Engineering, Shanghai DianJi University, Shanghai 201306, China T. Liu e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 K. Nakamatsu et al. (eds.), Advanced Intelligent Technologies for Industry, Smart Innovation, Systems and Technologies 285, https://doi.org/10.1007/978-981-16-9735-7_7


electricity has the characteristics of non-stationarity and nonlinear. Based on these characteristics, it is very difficult to predict the price of electricity, and a single price forecast cannot reach the expected effect. In this paper, the sparrow algorithm has high convergence performance and high local search ability to optimize the internal parameters of the SVM, and builds a model based on the sparrow-algorithmoptimized support vector machine (SSA-SVM) to predict the short-term electricity price. This paper uses the open electricity price data of the Australian electricity market to train and simulate it. By comparing the simulation results of SVM and SSASVM models, it is proved that the optimization of the sparrow algorithm improves the accuracy of electricity price prediction and reduces the error, and the SSA-SVM model has strong anti-interference.

7.2 SSA The SSA is a new algorithm proposed by Xue Jiankai in 2020 [1]. It is mainly inspired by the sparrow's food-seeking behavior and anti-predation behavior. The algorithm is new and has the advantages of very good search ability, fast convergence, and small fluctuation. In the process of food searching, sparrows are divided into finders and entrants. Finders are responsible for finding areas and directions rich in food for the whole population, providing locations and directions for all the entrants, while the entrants follow the finders' locations to obtain food. During the search for food, a sparrow can switch between the finder and entrant behaviors to forage. Sparrows in the group also monitor the behavior of other sparrows, and predators within the community compete for food with the sparrows that have abundant food in order to increase their predation rate. When the group perceives danger, it behaves in an anti-predatory manner [1]. In the algorithm, virtual sparrows are used to search for food. The population of n sparrows can be expressed as follows:

$$X = \begin{bmatrix} x_{1,1} & x_{1,2} & \cdots & x_{1,d} \\ x_{2,1} & x_{2,2} & \cdots & x_{2,d} \\ \vdots & \vdots & \ddots & \vdots \\ x_{n,1} & x_{n,2} & \cdots & x_{n,d} \end{bmatrix} \qquad (7.1)$$


where n represents the number of sparrows, and d represents the dimension of the variables to be optimized. Second, the fitness values of the sparrows are expressed in the following form [2]:

$$F_x = \begin{bmatrix} f([x_{1,1}, x_{1,2}, \cdots, x_{1,d}]) \\ f([x_{2,1}, x_{2,2}, \cdots, x_{2,d}]) \\ \vdots \\ f([x_{n,1}, x_{n,2}, \cdots, x_{n,d}]) \end{bmatrix} \qquad (7.2)$$

where f represents the fitness value. In the sparrow search algorithm (SSA), finders with better fitness values obtain food first in the search process. Furthermore, the finders must find food for the whole colony and lead all the entrants, so the finders search for food over a much wider area. Following formulas (7.1) and (7.2), the position of the finder is updated at each iteration according to the following formula:

$$X_{i,j}^{t+1} = \begin{cases} X_{i,j}^{t} \cdot \exp\!\left(\dfrac{-i}{\alpha \cdot iter_{max}}\right) & \text{if } R_2 < ST \\ X_{i,j}^{t} + Q \cdot L & \text{if } R_2 \ge ST \end{cases} \qquad (7.3)$$

In the above formula, $t$ represents the current iteration number, $iter_{max}$ is a constant representing the maximum number of iterations, and $j = 1, 2, \ldots, d$. $X_{i,j}$ is the position of the $i$th sparrow in the $j$th dimension. $\alpha \in (0, 1)$ is a random number. $R_2$ ($R_2 \in [0, 1]$) is the alarm value, $ST$ ($ST \in [0.5, 1]$) is the safety threshold, and $Q$ is a random value drawn from a normal distribution. $L$ is a $1 \times d$ matrix in which all elements are 1 [3]. The position of the entrant is updated continually according to the following formula:

$$X_{i,j}^{t+1} = \begin{cases} Q \cdot \exp\!\left(\dfrac{X_{worst}^{t} - X_{i,j}^{t}}{i^2}\right) & \text{if } i > n/2 \\ X_{P}^{t+1} + \left| X_{i,j}^{t} - X_{P}^{t+1} \right| \cdot A^{+} \cdot L & \text{otherwise} \end{cases} \qquad (7.4)$$

In the formula, $X_P$ is the best position currently occupied by the finder, and $X_{worst}$ is the current global worst position. $A$ represents a $1 \times d$ matrix whose elements are randomly assigned the value 1 or −1, and $A^{+} = A^{T}(AA^{T})^{-1}$. If the sparrows perceive danger, they engage in anti-predation behavior, which can be expressed mathematically as follows:


$$X_{i,j}^{t+1} = \begin{cases} X_{best}^{t} + \beta \cdot \left| X_{i,j}^{t} - X_{best}^{t} \right| & \text{if } f_i > f_g \\ X_{i,j}^{t} + K \cdot \left( \dfrac{\left| X_{i,j}^{t} - X_{worst}^{t} \right|}{(f_i - f_w) + \varepsilon} \right) & \text{if } f_i = f_g \end{cases} \qquad (7.5)$$

In the formula, $X_{best}$ is the current global best position. $\beta$ is a step-size control parameter that follows a normal distribution. $K \in [-1, 1]$ is a random number, $f_i$ is the fitness value of the current individual, $f_g$ is the current global best fitness value, and $f_w$ is the current global worst fitness value. $\varepsilon$ is a very small constant whose function is to prevent the denominator from becoming zero.
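The following sketch illustrates the three SSA update rules of Eqs. (7.3)–(7.5) on a toy objective. It simplifies the $A^{+} \cdot L$ term and the boundary handling, so it should be read as an illustration of the algorithm rather than a faithful reimplementation of [1]; all parameter defaults are assumptions.

```python
# Illustrative SSA loop: finders (Eq. 7.3), entrants (Eq. 7.4), scouts (Eq. 7.5).
import numpy as np

def ssa(fitness, dim, n=30, iters=100, lb=-10.0, ub=10.0, ST=0.8, pd=0.2, sd=0.1, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n, dim))
    f = np.apply_along_axis(fitness, 1, X)
    for _ in range(iters):
        order = np.argsort(f)
        X, f = X[order], f[order]
        best, worst, fg, fw = X[0].copy(), X[-1].copy(), f[0], f[-1]
        n_finders = max(1, int(pd * n))
        R2 = rng.random()
        for i in range(n_finders):                      # finders, Eq. (7.3)
            if R2 < ST:
                X[i] = X[i] * np.exp(-(i + 1) / ((rng.random() + 1e-12) * iters))
            else:
                X[i] = X[i] + rng.normal() * np.ones(dim)
        for i in range(n_finders, n):                   # entrants, Eq. (7.4)
            if i > n / 2:
                X[i] = rng.normal() * np.exp((worst - X[i]) / (i + 1) ** 2)
            else:
                A = rng.choice([-1.0, 1.0], dim)
                X[i] = X[0] + np.abs(X[i] - X[0]) * A   # simplified A+ . L term
        for i in rng.choice(n, max(1, int(sd * n)), replace=False):  # scouts, Eq. (7.5)
            if f[i] > fg:
                X[i] = best + rng.normal() * np.abs(X[i] - best)
            else:
                X[i] = X[i] + rng.uniform(-1, 1) * np.abs(X[i] - worst) / ((f[i] - fw) + 1e-12)
        X = np.clip(X, lb, ub)
        f = np.apply_along_axis(fitness, 1, X)
    i_best = int(np.argmin(f))
    return X[i_best], f[i_best]

if __name__ == "__main__":
    x_best, f_best = ssa(lambda x: float(np.sum(x ** 2)), dim=5)
    print("best position:", np.round(x_best, 4), "best fitness:", round(f_best, 6))
```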

7.3 SVM The SVM is a machine learning algorithm proposed by Vapnik et al. [4]. It is a supervised learning algorithm suitable for prediction problems whose data volume is not too large. SVM follows the principle of structural risk minimization and has very good performance. Its basic principle is shown in Fig. 7.1. The data in the figure are two-dimensional. If a group of data points is placed on the plane and the points gather in different regions according to the classification criterion, a classification boundary can be trained; the partition may be linear or nonlinear. In other words, the SVM is defined by the partition surface, and if the support vectors change, the partition surface changes. Hence, this surface is determined by the support vectors, and such a classifier is a support vector machine. The main idea of SVM is to map the input into a high-dimensional space S through a nonlinear mapping [4], obtain the optimal decision function under the principle of structural risk minimization, and finally deal with the nonlinear problem by a linear method in that space.

Fig. 7.1 Basic schematic diagram (two classes of samples, the support vectors, the optimal hyperplane, and the maximum margin)


7.4 The SSA-SVM Model The prediction quality of SVM is mainly affected by its parameters, and the penalty parameter C and kernel parameter g have a very important influence on the prediction. The penalty parameter C weighs the loss term, and the kernel parameter g affects the radial action range of the kernel function and determines the range and distribution characteristics of the training data. In order to improve the prediction accuracy, it is particularly important to select the optimal parameters. In this paper, the SSA, with its strong global search ability and local optimization ability, is used to optimize the relevant parameters of the SVM. The process of the SSA-SVM combined model is as follows; a sketch of this workflow is given after the list:

(1) Read the electricity price data, preprocess the data, and build the SVM model.
(2) Initialize the sparrow population parameters and determine the value ranges of C and g.
(3) Determine the fitness function of SSA and take its value as the amount of food found by a sparrow. According to the principle of SSA, search for the optimal function value, that is, determine the best individual sparrow position.
(4) Obtain the optimal values of parameters C and g from the optimal individual sparrow position.
(5) Assign the optimal parameters C and g to the SVM for training, and obtain the optimized SVM prediction model.
(6) Input the test samples; the prediction model outputs the predicted data for the test set, which is compared with the prediction of the support vector machine before optimization, and the error is analyzed.

The specific process is shown in Fig. 7.2.
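A minimal sketch of the workflow above, using scikit-learn's SVR with an RBF kernel. For brevity, a simple random search over (C, g) stands in for the SSA optimizer (see the SSA sketch in Sect. 7.2), the parameter ranges are assumptions, and the hourly price series in the usage example is synthetic.

```python
# Illustrative SSA-SVM workflow: the fitness of a candidate (C, g) pair is the
# validation RMSE of an RBF-kernel SVR; a random search stands in for SSA.
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error

def fitness(params, X_tr, y_tr, X_val, y_val):
    C, g = params
    model = SVR(kernel="rbf", C=C, gamma=g).fit(X_tr, y_tr)
    return np.sqrt(mean_squared_error(y_val, model.predict(X_val)))

def optimize_svm(X_tr, y_tr, X_val, y_val, n_candidates=50, seed=0):
    rng = np.random.default_rng(seed)
    candidates = np.column_stack([rng.uniform(0.1, 100, n_candidates),   # assumed C range
                                  rng.uniform(0.001, 1, n_candidates)])  # assumed g range
    scores = [fitness(p, X_tr, y_tr, X_val, y_val) for p in candidates]
    C, g = candidates[int(np.argmin(scores))]
    return SVR(kernel="rbf", C=C, gamma=g).fit(X_tr, y_tr), (C, g)

if __name__ == "__main__":
    # synthetic hourly prices: predict the next hour from the previous 24 hours
    rng = np.random.default_rng(1)
    prices = 40 + 10 * np.sin(np.arange(720) * 2 * np.pi / 24) + rng.normal(0, 2, 720)
    X = np.array([prices[i:i + 24] for i in range(len(prices) - 24)])
    y = prices[24:]
    model, (C, g) = optimize_svm(X[:500], y[:500], X[500:600], y[500:600])
    print(f"selected parameters: C={C:.2f}, g={g:.4f}")
```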

7.5 Example Analysis 7.5.1 Data Processing The data in this article are selected from the measured electricity price data of the Australian electricity market in order to verify the validity of the proposed model. The data of November, February, May, and July 2006 in the Australian electricity market are selected as the simulation data of the experiment, corresponding to the Australian spring, summer, autumn, and winter, respectively. The time resolution of the short-term electricity price predicted in this paper is 1 h. Data outside a reasonable range are replaced with comparable data. There are 24 samples every day, and 720 samples are selected as sample points to forecast the short-term electricity price one hour ahead. After analysis, the first 600 samples are selected as the training set and samples 601–700 as the test set [5].

Fig. 7.2 Flowchart of the SSA-SVM model (main steps: input data and data normalization; construct the SVM model; initialize SSA parameters; divide the population into finders and entrants; update the positions of finders and entrants; sense predators and update population positions; calculate the best fitness and update the best individual position; check the termination criterion; output the optimal parameters C and g; SVM training; optimized SVM prediction model)

7.5.2 Data Normalization We need to normalize the data [6], and the formula is

$$x_{end} = \frac{x_i - x_{min}}{x_{max} - x_{min}} \qquad (7.6)$$

In the formula, $x_{end}$ is the normalized result, $x_i$ is the input value, $x_{max}$ is the largest value of $x_i$, and $x_{min}$ is the smallest value of $x_i$; the resulting data are mapped to the interval [0, 1].


7.5.3 Evaluation Indicators Root Mean Square Error (RMSE) is used to evaluate the error of the prediction results [7], as follows:

$$RMSE = \sqrt{\frac{\sum_{i=1}^{n} (z(i) - y(i))^2}{n}} \qquad (7.7)$$

In the formula, z(i) is the expected value, and y(i) is the actual value of the sample.
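A small sketch of the min-max normalization of Eq. (7.6) and the RMSE of Eq. (7.7); the sample values in the usage example are placeholders.

```python
# Illustrative implementations of Eq. (7.6) and Eq. (7.7).
import numpy as np

def min_max_normalize(x):
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())   # mapped to [0, 1]

def rmse(z, y):
    z, y = np.asarray(z, dtype=float), np.asarray(y, dtype=float)
    return np.sqrt(np.mean((z - y) ** 2))

if __name__ == "__main__":
    prices = [35.2, 40.1, 55.8, 38.4, 61.0]
    print(min_max_normalize(prices))
    print(rmse([36.0, 41.0, 54.0], [35.2, 40.1, 55.8]))
```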

7.5.4 Analysis of Prediction Results In this article, MATLAB 2019 is used to build the short-term electricity price prediction model, and the simulation results are as follows. Figures 7.3, 7.4, 7.5 and 7.6, respectively, show the prediction results for the electricity price in spring, summer, autumn, and winter in the Australian electricity market, and Table 7.1 lists the prediction error results. The three curves in each figure represent the real data curve, the SVM prediction curve [8], and the SSA-SVM prediction curve, respectively. It can be seen from the three prediction curves that the prediction curve of SSA-SVM [9] almost overlaps with the real data curve and the overall error is relatively small, while the prediction accuracy of the SVM model is far lower than that of the SSA-SVM model. Fig. 7.3 Prediction curve in spring


Fig. 7.4 Prediction curve in summer

Fig. 7.5 Prediction curve for autumn

It can be seen from the table that the prediction error of the SSA-SVM model is smaller than that of the SVM model. Therefore, the prediction accuracy of the SSA-SVM model has been improved to a certain extent, and the SSA-SVM model meets the requirements of short-term electricity price prediction to a certain extent.


Fig. 7.6 Prediction curve for winter

Table 7.1 Error results of different prediction models

Season | RMSE of SSA-SVM | RMSE of SVM
Spring | 5.5526 | 6.5614
Summer | 6.5237 | 8.0551
Autumn | 4.9952 | 5.3975
Winter | 8.0912 | 10.0596

7.6 Conclusion

In this paper, the SSA-SVM prediction model is proposed. The SSA, with its good convergence performance and local search ability, optimizes the parameters of the SVM, and the optimal parameter values are then used for training. The SVM model optimized by SSA trains faster and predicts more accurately. Therefore, this model is well suited to short-term electricity price forecasting in the power sector.

References 1. Xuan, J. K.: Research and Application of a New Swarm Intelligence Optimization Technique. Donghua University (2020) 2. Wang, Z.D., Wang, J.B., Li, D.H.: Research on coverage optimization of wireless sensor networks based on enhanced sparrow search algorithm. Chin. J. Sensor Technol. 34(06), 818–828 (2021) 3. Liu, D., Wei, X., Wang, W. Q., Ye, J.H., Ren, J.: Short-term wind power prediction based on SSA-ELM. Smart Power, 49(06), 53–59+123 (2021)


4. Wu, D.L., Tian, L.: Short-term price Forecasting based on BAPSO-SVM model. J. Xinyu Univ. 22(01), 113–116 (2017) 5. Liu, S. Y.: Research on Short-Term Electricity Price Forecast in Power Market. Guangdong University of Technology (2020) 6. Liu, D., Lei, Z. Q., Sun, K.: Short-term electricity price forecast based on wavelet packet decomposition and long short-term memory network. Smart Power 48(04), 77–83 (2020). 7. Engineering; Investigators at University of Malaysia Detail Findings in Engineering (hybrid Ann and artificial cooperative search algorithm to forecast short-term electricity price in de-regulated electricity market). J. Eng. (2019) 8. Zhang, S.H.: Research on fault diagnosis of mechanical equipment based on support vector machine. Bonding 47(09), 129–132 (2021) 9. Wei, P.F., Fan, X.C., Shi, R.J., Wang, W.Q., Cheng, Z.J.: Optimization of short-term pv power forecasting based on improved Sparrow search algorithm and Support vector machine. Therm. Power Gener. 1–7 (2021)

Chapter 8

Research on Mobile Advertising Click-Through Rate Estimation Based on Neural Network Songjiang Liu and Songxian Liu

Abstract The promotion of modern information technology has brought important theoretical and technical basis for the innovation of advertising forms. Different from traditional advertising forms, online mobile advertising has the advantage of breaking through the limitations of space and time, which is an important basis for the innovation and development of enterprises in the future. Therefore, on the basis of understanding the pricing and delivery forms of mobile ads, this paper analyzes how to estimate the click-through rate of mobile ads based on the neural network according to the AUC index.

8.1 Pricing and Placement of Advertisements

At present, advertising pricing mainly takes three forms. The first-price form means the advertiser pays a price it sets itself, according to its own judgment and economic strength. The second-highest-price form means that, when there is only one advertising slot, the advertiser with the highest bid in the auction wins the slot and obtains the corresponding revenue; the payment is

q_s = μ_{s+1} b_{s+1} / μ_s.

In VCG pricing, the advertiser that wins an advertising slot s during the bidding pays the loss its presence brings to the other advertisers, namely

q_s = \sum_{t>s} ( μ_{t−1} − μ_t ) v_t  [1–3].

Combined with the analysis of Fig. 8.1, in order to allow consumers to get the required information faster, the accuracy of advertising must be guaranteed. By using real-time bidding (RTB), the interface is divided into two processes, on the one hand, the early cache mapping of ADX and DSP user identity, and on the other hand, bidding and delivery when online advertising is requested [4–6].

S. Liu (B) · S. Liu Gongqing College of Nanchang University, Gongqingcheng, China © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 K. Nakamatsu et al. (eds.), Advanced Intelligent Technologies for Industry, Smart Innovation, Systems and Technologies 285, https://doi.org/10.1007/978-981-16-9735-7_8


Fig. 8.1 Flowchart of bidding

8.2 The AUC Indicators

The AUC metric is critical for estimating the click-through rate of mobile ads: the larger it is, the stronger the evidence that the algorithm ranks positive samples ahead of negative ones. At the same time, if the predictions are accurate, the actual advertising effect will also be good, which not only stimulates consumers' desire to buy and meets their consumption needs but also maximizes the final revenue. Two values are critical in computing the AUC, the true positive rate (TPR) and the false positive rate (FPR), defined respectively as

TPR = TP / (TP + FN),  FPR = FP / (FP + TN).

Here TP is the number of positive samples correctly predicted as positive and TN the number of negative samples correctly predicted as negative, while FP is the number of negative samples incorrectly predicted as positive and FN the number of positive samples incorrectly predicted as negative. The AUC value increases as the predicted click-through ranking becomes more accurate, and the more accurate the prediction, the stronger the model's performance. Therefore, to estimate the click-through rate of mobile ads with a neural network, data must be collected from the market and an accurate AUC computed, providing an effective basis for the subsequent estimation [7].
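A small scikit-learn sketch of the quantities just defined (TPR, FPR, AUC) on toy click data; the labels and scores are made up for illustration.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

y_true = np.array([1, 0, 0, 1, 0, 1, 0, 0])        # 1 = clicked
y_score = np.array([0.9, 0.3, 0.4, 0.7, 0.2, 0.6, 0.8, 0.1])

fpr, tpr, thresholds = roc_curve(y_true, y_score)   # FPR = FP/(FP+TN), TPR = TP/(TP+FN)
print("AUC =", roc_auc_score(y_true, y_score))
```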

8.3 Estimating Click-Through Rates of Mobile Ads Based on Neural Networks

8.3.1 Model

The advertising click-through rate estimation problem amounts to producing a floating-point value between 0 and 1. Because the deep neural network model has many hidden layers and the layers are fully connected, the parameters


Fig. 8.2 Structure diagram of neural network

need to be calculated layer by layer. If the dimensionality is too high, training efficiency suffers, so dimensionality reduction is required to optimize the model features. Because the hidden layers are closely related to the dataset, the model studied in this paper uses 4 hidden layers with 512 nodes each. The number of nodes in the first hidden layer is 76 more than that of the input layer, which helps to obtain more effective combined features. The output layer contains two nodes that are essentially independent logistic regression models with little influence on each other, so the corresponding model is, in essence, a fully connected output. The two output probabilities sum to 1 and represent the probabilities that a user clicks or does not click the advertisement [8]. According to Fig. 8.2, in the deep neural network the data enter from the input layer, and after the calculations of the hidden layers the output layer produces the value needed for the study, namely the click probability. The nonlinear sigmoid function is used as the activation of each node, so the output-layer formulas are

p(Y = 1 | x, θ) = \frac{1}{1 + e^{−(θ^T x + θ_0)}},  p(Y = 0 | x, θ) = 1 − \frac{1}{1 + e^{−(θ^T x + θ_0)}}.

8.3.2 Objective Function and Algorithm

Treating the problem as binary classification and applying the maximum likelihood method, the likelihood of the model is

L(θ | x) = \prod_{i=1}^{N} p_i^{y_i} (1 − p_i)^{1 − y_i}.

Taking the negative logarithm in the same way gives

E(θ | x) = − log L(θ | x) = − \sum_{i=1}^{N} ( y_i log p_i + (1 − y_i) log(1 − p_i) ).

Although the objective function of neural network and logistic regression is basically the same in form, the final parameters are not the same. The latter requires only one level of parameters in the solution, and the weights of nodes in different levels are also different [9].

8.3.3 Key Points of Optimization

Because the training data of search advertising are sparse and extremely numerous, training a neural network on all of the data in order to predict the click-through rate of an advertisement is very difficult. Therefore, the dropout method is used to remove some hidden-layer nodes according to preset rules, so that their weights do not participate in training for that step. Practical cases show that dropout effectively prevents overfitting, for two main reasons. First, since the weight updates are random, two given nodes rarely appear together, which suppresses dependence between nodes and improves the generalization of the model. Second, because part of the weights is excluded from training (treated as 0) in each pass and the exclusion is random, a different network structure is obtained in every pass; this shares weights more effectively and also avoids overfitting during prediction. It should be noted that, because the number of mobile ads is very large and the network has many layers and nodes, overall efficiency suffers if high-performance processors are not used during training. Therefore, when predicting the click-through rate of mobile advertising with a neural network, the model must be given strong computing power, which allows both deeper training and higher training speed. A sketch tying Sects. 8.3.1–8.3.3 together is given below.
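The following PyTorch sketch illustrates the network described above: four 512-unit hidden layers, dropout, a sigmoid output giving the click probability, and the negative log-likelihood (binary cross-entropy) objective. The input dimension of 436 follows from the statement that the first hidden layer has 76 more nodes than the input layer; the dropout rate, optimizer, and the single-sigmoid output (equivalent to the two-node output summing to 1) are assumptions and simplifications, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class CTRNet(nn.Module):
    def __init__(self, in_dim=436, hidden=512, p_drop=0.5):
        super().__init__()
        layers, d = [], in_dim
        for _ in range(4):                       # four hidden layers of 512 nodes
            layers += [nn.Linear(d, hidden), nn.ReLU(), nn.Dropout(p_drop)]
            d = hidden
        layers += [nn.Linear(d, 1)]              # single logit -> sigmoid probability
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return torch.sigmoid(self.net(x)).squeeze(-1)

model = CTRNet()
loss_fn = nn.BCELoss()                           # the E(theta|x) objective, averaged
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(32, 436)                         # a fake mini-batch of ad features
y = torch.randint(0, 2, (32,)).float()           # 1 = clicked, 0 = not clicked
p = model(x)
loss = loss_fn(p, y)
loss.backward()
optimizer.step()
print(float(loss))
```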

8.3.4 Result Analysis

The open-source parallel computing platform Petuum is used for the neural-network-based prediction, and the experiments are carried out according to the schemes in Table 8.1. The comparison results shown in Table 8.2 are obtained.

Table 8.1 Experimental schemes for the estimation

Experiment scheme | Selected model
A | Logistic regression
B | Factoring
C | Neural network
D | Deep neural network + dropout
Evaluation index: AUC

Table 8.2 Comparison of the deep neural network with other models

Model | Features | AUC
Logistic regression | Click rate + position + similarity | 0.7413
Support vector machine | Click rate + position + similarity | 0.7490
Neural network | Click rate + position + similarity | 0.7785
Deep neural network + dropout | Click rate + position + similarity | 0.7834

It can be seen that, compared with the other schemes, the deep neural network is simpler to operate and stacks multiple artificial neural network layers, which lets researchers construct complex functions while tuning only a few parameters. At the same time, to avoid overfitting during training, combining dropout with sufficiently powerful processors both solves the problem and quickly improves the training speed.

8.4 Conclusion

To sum up, with the development of network technology the computer has become an essential part of daily life and work, and related industries have been comprehensively transformed. For the advertising industry in particular, mobile advertising is now one of the most important advertising forms, and to increase website revenue its click-through rate must be estimated with neural networks. Enterprises should therefore understand the advantages and content of neural networks, combine them with the past development of the advertising industry, and build a sound forecasting model. Only in this way can an effective settlement method be mastered and, at the same time, the application value of advertising be brought into full play, bringing more benefits to the development of enterprises.


References 1. Liu, G., Yin, Z., Jia, Y., et al.: Passenger flow estimation based on convolutional neural network in public transportation system. Knowl.-Based Syst. 123, 102–115 (2017) 2. Chen, Q.H., Yu, S.M., Guo, Z.X., et al.: Estimating ads’ click through rate with recurrent neural network. Itm Web Conf. 7, 04001 (2016) 3. Jie-Hao, C., Zi-Qian, Z., Ji-Yun, S., et al.: a new approach for mobile advertising click-through rate estimation based on deep belief nets. Comput. Intell. Neurosci. 1–8 (2017) 4. Zhou, L.: Product advertising recommendation in e-commerce based on deep learning and distributed expression. Electron. Commer. Res. 20(2), 321–342 (2020) 5. Wei, N.F.: Research on recognition method of handwritten numerals segmentation based on B-P neural network. Appl. Mech. Mater. 484–485, 1001–1005 (2014) 6. Ma, Y., Han, R.: Research on stock trading strategy based on deep neural network. In 2018 18th International Conference on Control, Automation and Systems (ICCAS). IEEE (2018) 7. Zhang, Y., Jansen, B.J., Spink, A.: Identification of factors predicting clickthrough in Web searching using neural network analysis. J. Am. Soc. Inform. Sci. Technol. 60(3), 557–570 (2014) 8. Gao, M., Ma, L., Liu, H., et al.: Malicious network traffic detection based on deep neural networks and association analysis. Sensors (Basel, Switzerland), 20(5) (2020) 9. Abolfazli, S., Sanaei, Z., Gani, A., et al.: Rich mobile applications: genesis, taxonomy, and open issues. J. Netw. Comput. Appl. 40(7), 345–362 (2014)

Chapter 9

Research on Macroeconomic Prediction Technology Based on Wavelet Neural Network Tao Wang, Yuxuan Du, and Zheming Cui

Abstract The trend of future macroeconomic changes presented by countries and regions is real time, and will be affected by a variety of factors during the practical operation. In previous research projects, the forecasting methods proposed for the macro-economy could not scientifically deal with the problems such as the inability to accurately evaluate the nonlinear system. Therefore, researchers proposed a macroeconomic forecasting technology based on the optimization of wavelet neural networks in practice and exploration. Different from the current BP neural network prediction model, the wavelet neural network is used to carry out prediction analysis, and the weight proposed by the model is studied by combining with the intelligent swarm algorithm, and then it is optimized. At the same time, combining with the economic data of a certain province has been normalized, the training analysis is carried out on the research model proposed in this paper. According to the final results, the wavelet neural network is more accurate than the BP neural network in predicting results.

9.1 Prediction Technology In the continuous development of China’s social economy, the reform speed of the market economic system is getting faster and faster. At this time, social and economic development is the focus of People’s Daily life, and the comprehensive strength of a country can be reflected in various economic indicators to a certain extent. In this context, local governments and relevant research institutions must accurately evaluate the future macroeconomic development trend of a certain country or region in T. Wang (B) National University of Malaysia, Bangi, Malaysia Y. Du University College London, London, UK Z. Cui Beijing Huijia Private School, Beijing, China © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 K. Nakamatsu et al. (eds.), Advanced Intelligent Technologies for Industry, Smart Innovation, Systems and Technologies 285, https://doi.org/10.1007/978-981-16-9735-7_9


Fig. 9.1 Components of time series

order to obtain more accurate data and make scientific decisions, and finally judge from the results whether actual development needs to be stimulated or restrained [1–4]. Traditional macroeconomic forecasting methods include the following. First, stochastic time-series prediction, whose most representative algorithm is the autoregressive moving-average (ARMA) model; the single-variable ARMA model is

x_t = φ_1 x_{t−1} + φ_2 x_{t−2} + ... + φ_p x_{t−p} + ε_t − θ_1 ε_{t−1} − ... − θ_q ε_{t−q},

where p is the autoregressive order, φ the autoregressive coefficients, q the moving-average order, θ the moving-average coefficients, and ε_t the random disturbance noise. Figure 9.1 shows the components of a time series. Second, regression-model prediction, which comes in two forms: the one-variable linear regression model and the multiple linear regression model. Taking the former as an example, the explained variable is expressed as

Y_i = β_0 + β_1 X_i + u_i,  i = 1, 2, ..., n,

where X_i is the explanatory variable, β_0 and β_1 are the parameters, and u_i is the random disturbance term. Figure 9.2 shows a one-variable linear regression model. Third, artificial neural networks, of which the most representative is the BP neural network prediction model; Fig. 9.3 shows the structure of the BP neural network. In practical application, the factors used by the first two methods must be chosen empirically, which not only introduces strong human influence but also


Fig. 9.2 Structure diagram of unary linear regression model

Fig. 9.3 Structure diagram of BP neural network

reduces the relationship between some variables and predictive variables, and affects the accuracy of the final prediction results. Compared with the traditional artificial neural network, the wavelet neural network is a new artificial neural network model with multi-layer and multi-resolution based on the theory of wavelet analysis, which


can effectively solve the problems existing in the traditional artificial algorithm, such as too difficult data selection and too low convergence speed [5–7].
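As a concrete illustration of the second traditional method above (the one-variable regression model Y_i = β_0 + β_1 X_i + u_i), a minimal least-squares sketch follows; the toy data are assumptions.

```python
import numpy as np

X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])          # explanatory variable
Y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])          # explained variable
b1 = np.sum((X - X.mean()) * (Y - Y.mean())) / np.sum((X - X.mean()) ** 2)
b0 = Y.mean() - b1 * X.mean()
print(b0, b1)                                    # estimated intercept and slope
```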

9.2 Optimized Wavelet Neural Network Model

Figure 9.4 shows the structure of the wavelet neural network used here. Building on past operating experience and a case study, this paper uses a wavelet neural network model, together with the current macroeconomic situation of the market, to analyze future macroeconomic trends and so promote macroeconomic development and innovation in the new era. Although China's reform and opening up and the market economic system have advanced comprehensively, in practice the prediction and evaluation of the macro-economy remains a focus of China's economic construction and development, and the results below show that the optimized wavelet neural network performs better. Unlike the BP neural network, this paper selects the wavelet neural network for macroeconomic forecasting and combines it with the wolf pack algorithm, an intelligent swarm algorithm, to optimize the model weights in real time. The wavelet neural network is designed with m input-layer nodes, N output-layer nodes, and S hidden-layer nodes; Φ denotes the family of wavelet basis functions generated from a single mother wavelet φ(x) by scaling and translation. The specific formula is as follows:

Fig. 9.4 Topological structure of wavelet neural network


Φ = { φ_j = \frac{1}{\sqrt{|a_j|}} φ( \frac{x − b_j}{a_j} ) : a_j, b_j ∈ R^n, j ∈ Z },

where φ(x) is a mother wavelet in time–frequency space, the vector a_j = {a_{j1}, a_{j2}, ..., a_{jm}} is the scale parameter, b_j = {b_{j1}, b_{j2}, ..., b_{jm}} is the translation parameter, and x = {x_1, x_2, ..., x_m} is the input of the wavelet neural network. In the model, the internal activity of hidden neuron j is

v_j = \sum_{i=0}^{m} W_{ij} x_i,

where W_{ij} is the weight between input i and hidden node j. The mother wavelet φ(v) is then used to compute the output of the j-th neuron. When the sigmoid function is used as the excitation function it takes the form

φ(v) = \frac{1}{1 + e^{−v}},  0 < φ(v) < 1.

The output of the j-th hidden neuron is therefore governed by

φ_j = \frac{1}{\sqrt{|a_j|}} φ( \frac{v_j − b_j}{a_j} ),

that is, the value of the j-th hidden element changes with the frequency (scale) parameter a_j and the time (translation) parameter b_j. The initial scale is set as a_j = 0.2(x_{max} − x_{min}) and the initial translation as b_j = 0.5(x_{max} + x_{min}). The network output is f(x) = \sum_{j=1}^{n} W_j φ_j(v), where W_j is the weight between the j-th hidden neuron and the output node. The Morlet wavelet is taken as the excitation function:

φ( \frac{f_j − b_j}{a_j} ) = cos( 1.75 × \frac{f_j − b_j}{a_j} ) × exp( −0.5 × ( \frac{f_j − b_j}{a_j} )^2 ).

The error E is calculated as

E(t) = \frac{1}{2} e^2(t) = \frac{1}{2} ( f(t) − d(t) )^2,
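A minimal NumPy sketch of one forward pass through the wavelet network defined by the formulas above (weighted sums v_j, Morlet activations with scale a_j and translation b_j, weighted output, and the error E); the layer sizes, random weights, and target value are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
m, s_hidden, n_out = 6, 10, 1                    # input, hidden, output sizes

def morlet(t):
    return np.cos(1.75 * t) * np.exp(-0.5 * t ** 2)

x = rng.random(m)                                # normalized economic indicators
W_in = rng.normal(size=(s_hidden, m))            # weights W_ij, input -> hidden
a = 0.2 * np.ones(s_hidden)                      # scale parameters a_j (0.2*(xmax-xmin) for [0,1] data)
b = 0.5 * np.ones(s_hidden)                      # translation parameters b_j
W_out = rng.normal(size=(n_out, s_hidden))       # weights W_j, hidden -> output

v = W_in @ x                                     # v_j = sum_i W_ij * x_i
phi = morlet((v - b) / a) / np.sqrt(np.abs(a))   # wavelet activation of each hidden node
f = W_out @ phi                                  # network output f(x)
d = np.array([0.5])                              # target value
E = 0.5 * np.sum((f - d) ** 2)                   # error E = 1/2 (f - d)^2
print(f, E)
```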


where f denotes the model output and d the target output. After the basic wavelet neural network model is defined, the prediction model is analyzed further in combination with the wolf pack algorithm. The performance of the wavelet neural network studied here is affected by the initial thresholds and weights, so optimizing the weights with the wolf pack algorithm is important [8, 9]. Assume the pack contains N wolves and the number of variables to be searched is D. The scouting move of a wolf in the d-th dimension is

x_{id}^p = x_{id} + sin(2π × p / h) × step_a^d,  p = 1, 2, ..., h,

where x_{id} is the position of the i-th artificial wolf in the d-th dimension, h is the number of directions in which the wolf scouts, and step_a^d is the scouting step length. The wolves update their positions in the d-th dimension (the summoning move) as

x_{id}^{k+1} = x_{id}^k + step_b^d × ( g_d^k − x_{id}^k ) / | g_d^k − x_{id}^k |,

where k is the generation index and g_d^k is the position of the head wolf in the d-th dimension of the k-th generation. During a siege the wolves update their positions as

x_{id}^{k+1} = x_{id}^k + λ × step_c^d × ( G_d^k − x_{id}^k ),

where G_d^k is the location of the prey and λ is a random value in the preset range [−1, 1] (a minimal sketch of these three updates is given after the step list below). Using this theory to optimize the macro-prediction steps of the wavelet neural network, the actual operation consists of the following steps. First, the collected data are scientifically preprocessed: the original economic samples are normalized with y = (x − min) / (max − min), where x is the original sample value and max and min are its maximum and minimum. Second, the values to be analyzed are divided into training samples and test samples. Third, the initial values of the wavelet neural network are defined and the parameters of the wolf pack algorithm are determined. Fourth, the weights of the wavelet neural network model are optimized with the wolf pack algorithm. Fifth, the wavelet neural network with optimized weights is trained on the samples, and the corresponding outputs and errors are calculated. Sixth, the test samples are used to forecast the current macro-economy of the market [10, 11].
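The following sketch illustrates the three wolf-pack position updates quoted above (scouting, summoning, besieging). The population size, step lengths, toy objective, and the simplification of applying the scouting move to all dimensions at once are assumptions made only to show the update rules.

```python
import numpy as np

rng = np.random.default_rng(2)
N, D, h = 20, 8, 4                               # wolves, dimensions, scouting directions
step_a, step_b, lam_range = 0.5, 0.8, 1.0
pos = rng.uniform(-1, 1, size=(N, D))            # x_id: wolf i, dimension d

def fitness(w):                                  # placeholder objective (to minimise)
    return np.sum(w ** 2)

lead = pos[np.argmin([fitness(w) for w in pos])].copy()   # head wolf g_d

# Scouting: try h directions around the current position, keep an improving one.
for i in range(N):
    trials = [pos[i] + np.sin(2 * np.pi * p / h) * step_a for p in range(1, h + 1)]
    best = min(trials, key=fitness)
    if fitness(best) < fitness(pos[i]):
        pos[i] = best

# Summoning: move towards the head wolf.
pos += step_b * (lead - pos) / (np.abs(lead - pos) + 1e-12)

# Besieging: random move of size lambda*step towards the prey position G_d.
lam = rng.uniform(-lam_range, lam_range, size=pos.shape)
pos += lam * step_a * (lead - pos)
print(fitness(pos[np.argmin([fitness(w) for w in pos])]))
```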

9.3 Case Study

In order to verify the scientific validity and effectiveness of the research model designed in this paper, the economic indicators of a certain country accumulated over the recent 6 years are used for simulation experiments, and the other algorithm models introduced in Sect. 9.1 are compared, so as to accurately evaluate the application value of the wavelet neural network. The experiments were run on an Intel Core 2 Duo 2.4 GHz CPU with 4 GB memory, and the simulation environment is MATLAB R2012.

9.3.1 Specific Introduction Due to the development of practice, the development level of future macro-economy will inevitably be affected by many factors such as internal and surrounding environment, and the overall practice shows a real-time change trend, which belongs to a relatively complex nonlinear research system. In the traditional sense, the time series analysis method is difficult to guarantee the accuracy of the forecast in practice because of the problems such as error series and multiple contributions in the economic forecast. At the same time, compared with other traditional prediction methods, the artificial neural network has the characteristics of nonlinear and nonconvex linear, has a strong parallel distribution information processing structure and ability in practical operation, and can directly deal with the highly complex nonlinear prediction work. For example, some researchers have used artificial neural networks to predict the negative impact of multi-disaster hurricane events on the social economy, and some scholars have used the BP algorithm to build and train the economic forecasting model of the artificial neural network, and proposed the forecasting work of GDP. Although it has shown a strong technical advantage in practical application, it still fails to meet the requirements of practical work considering the accuracy of prediction. Therefore, this paper combined the optimized wavelet neural network for macroeconomic forecasting analysis. This method is based on the artificial neural network, based on the construction of a wavelet model to make a scientific prediction of macroeconomic GDP economy. In practice, in order to get more effective results, the wolf pack algorithm should be used for weight optimization. Finally, by combining with the experimental analysis, it is proved that the macroeconomic forecasting technology and related models based on the wavelet neural network have strong application advantages, and the actual obtained values are also very perfect and reliable.

Table 9.1 Comparison of the results of different prediction models

BP neural network prediction | Autoregressive model prediction | Prediction method in this paper
0.23 | 0.22 | 0.21
0.25 | 0.23 | 0.22
0.22 | 0.21 | 0.20

9.3.2 Training of Learning Samples

The economic values accumulated in the first three years of the collected data are used as learning samples. The training curve shows that when the number of training iterations reaches 30, the error of the prediction model falls to 0.0096.

9.3.3 Analysis of the Macroeconomic Forecast Results

The economic values accumulated in the following three years are used as test samples, and Table 9.1 compares the values obtained under the different forecasting models. The comparison shows that, under the same known macroeconomic conditions, the model selected in this paper has greater application value [12]. By integrating and comparing the predictions of the different models, selecting suitable sample information according to the macroeconomic characteristics under various conditions and the research requirements, and adopting the optimized wavelet neural network model, one can not only solve the problems left over from previous forecasting practice but also further improve the efficiency and quality of data analysis and exploration.

9.4 Conclusion

To sum up, the optimization-based wavelet neural network model proposed in this paper preprocesses the research data by normalization before carrying out macroeconomic forecasting, and preliminary prediction experiments are performed on the constructed model. The final results show that, compared with the BP neural network and other models, the wavelet neural network has clear advantages: it is more accurate and yields more useful data in practice. Therefore, in future macroeconomic forecasting it is important to use an intelligent swarm algorithm for optimization, obtaining a new wavelet neural network and optimized model weights.

References 1. Liu, J.W., Zuo, F.L., Guo, Y.X., et al.: Research on improved wavelet convolutional wavelet neural networks. Appl. Intell. 1–2 (2020) 2. Liu, L., Lu, Z., Ma, D., et al.: A new prediction method of seafloor hydrothermal active field based on wavelet neural network. Marine Geophys. Res. 41(4) (2020) 3. Chen, W.Q., Zhang, R., Liu, H., et al.: A novel method for solar panel temperature determination based on a wavelet neural network and Hammerstein-wiener model. Adv. Space Res. (2020) 4. Ková, S., Micha’Onok, G., Halenár, I., et al.: Comparison of heat demand prediction using wavelet analysis and neural network for a district heating network. Energies 14 (2021) 5. Wang, H., Lu, H., Alelaumi, S.M., et al.: A wavelet-based multi-dimensional temporal recurrent neural network for stencil printing performance prediction. Robot. Comput. Integrated Manuf. 71(4), 102129 (2021) 6. Shan, X., Liu, H., Pan, Q.: Research on fault tolerant control system based on optimized neural network algorithm. J. Intell. Fuzzy Syst. 39(1), 1–11 (2020) 7. Zhao, J., Qu, H., Zhao, J., et al.: Spatiotemporal traffic matrix prediction: a deep learning approach with wavelet multiscale analysis. Trans. Emerg. Telecommun. Technol. (10) (2019) 8. Bao, W.B., et al.: Sea-water-level prediction via combined wavelet decomposition, neuro-fuzzy and neural networks using SLA and wind information. Acta Oceanologica Sinica 39(05), 161–171 (2020) 9. Li, P., Hua, P., Gui, D., et al.: A comparative analysis of artificial neural networks and wavelet hybrid approaches to long-term toxic heavy metal prediction. Sci. Rep. 10(1), 13439 (2020) 10. Zhang, J., Zhang, X., Niu, J., et al.: Prediction of groundwater level in seashore reclaimed land using wavelet and artificial neural network-based hybrid model. J. Hydrol. 577, 123948 (2019) 11. Jafarzadeh Ghoushchi, S., Manjili, S., Mardani, A., et al.: An extended new approach for forecasting short-term wind power using modified fuzzy wavelet neural network: a case study in wind power plant. Energy 223 (2021) 12. Li, J., Wang, J.: Stochastic recurrent wavelet neural network with EEMD method on energy price prediction. Soft. Comput. 17, 1–19 (2020)

Chapter 10

Design Research on Financial Risk Control Model Based on CNN Zhuoran Lu

Abstract In the new era, with a large number of financial products rapidly entering the market, internet finance has become an indispensable part of daily life. In practice, online lending has the advantages of simple operation and a short review cycle, so it attracted public attention as soon as it was promoted; however, owing to the environment and the lenders' own technology, fraud risk can easily arise during operation. Facing this situation, constructing a financial risk control model based on the credit evaluation of loan applicants is very important for the future development of the internet finance industry. Based on an understanding of risk control models in an unbalanced data environment, this paper presents the design and implementation of a financial risk control model based on the convolutional neural network (CNN) and analyzes and discusses the final results.

10.1 Risk Control Model in Unbalanced Data Environment

In essence, the financial risk control model is a classification problem, and the most important thing in its design and construction is to ensure data quality. This paper mainly uses the risk control model under unbalanced data to analyze the existing problems. The specific operational steps are as follows.

10.1.1 Features of Financial Data

When applying for a loan, not only verifiable quantitative hard information but also unverifiable soft information must be provided. However, as some applicants hide their own soft information, it is difficult to guarantee that the data are fundamentally complete, which leads to an imbalanced sample proportion. The dataset shown in Table 10.1 was selected for analysis in this paper [1–3].

Z. Lu (B) Xi'an Jiaotong University, Xi'an, China
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022
K. Nakamatsu et al. (eds.), Advanced Intelligent Technologies for Industry, Smart Innovation, Systems and Technologies 285, https://doi.org/10.1007/978-981-16-9735-7_10

Table 10.1 Data sets

 | Training set | Validation set | Test set
Negative samples | 78,391 | 27,542 | 54,701
Positive samples | 663 | 186 | 607
Unlabeled | 0 | 0 | 23,579
Positive:negative ratio | 1:118 | 1:148 | 1:90
Total | 79,054 | 27,728 | 78,887

10.1.2 Pretreatment

On the one hand, a normalization method must be designed in advance when filling in the missing contents of the application data. The specific formula is

x_{norm} = \frac{x − x_{min}}{x_{max} − x_{min}},

where x_{min} and x_{max} are the minimum and maximum values in the original dataset, respectively. This linear normalization directly scales the original data into [0, 1]; the specific operation steps are shown in Fig. 10.1. On the other hand, the imbalance must be treated. The most common approaches are of two kinds, one working at the data level and the other at the algorithm level. In this paper, the balance cascade algorithm and threshold moving are combined to deal with the imbalance problem; the specific procedure is shown in Table 10.2 [4–7].
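The sketch below shows one way to combine an undersampling ensemble in the spirit of balance cascade with threshold moving, assuming a scikit-learn logistic-regression base learner, synthetic data, and a prior-correction threshold rule; it is not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=5000, weights=[0.99, 0.01], random_state=0)
pos, neg = np.where(y == 1)[0], np.where(y == 0)[0]
rng = np.random.default_rng(0)

models = []
for _ in range(5):                               # train each learner on a balanced subsample
    neg_sub = rng.choice(neg, size=len(pos), replace=False)
    idx = np.concatenate([pos, neg_sub])
    models.append(LogisticRegression(max_iter=1000).fit(X[idx], y[idx]))

score = np.mean([m.predict_proba(X)[:, 1] for m in models], axis=0)
prior = len(pos) / len(y)                        # true positive-class prior
# Threshold moving: correct the balanced-training scores for the true prior.
corrected = score * prior / (score * prior + (1 - score) * (1 - prior))
pred = corrected > 0.5
print("flagged:", pred.sum(), "true positives among flagged:", y[pred].sum())
```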

10.1.3 Risk Control Model

Fig. 10.1 Processing flowchart of missing values

After the above operations are completed, the relevant features are optimized after the enhanced missing-value filling, and the balance cascade algorithm is used to handle the imbalance. To verify the effectiveness of the data processing, logistic regression and a BP neural network were selected in this study for comparison with the XGBoost model, which performs well in traditional applications.

First, logistic regression. As an evaluation model frequently used in practical work, its greatest advantage is that it is easy to understand and apply. The specific regression equation is

y^{(1)} = σ( w^T x + b ),

where σ(z) is the commonly used sigmoid activation function, w is the weight vector for the input features, and b is the bias. Given a dataset {(x^{(1)}, y^{(1)}), (x^{(2)}, y^{(2)}), ..., (x^{(m)}, y^{(m)})}, the deviation between the model results and the actual results must be controlled effectively. Replacing the traditional squared error with the cross-entropy loss function gives

L(ŷ, y) = −( y log ŷ + (1 − y) log(1 − ŷ) ).

After the loss of every sample is defined, the cost of the existing model is evaluated with the cost function. The specific formula is as follows:
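A minimal NumPy sketch of the regression equation, the per-sample cross-entropy loss, and the averaged cost just described; the random data and weights are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))                    # 100 applicants, 5 features
y = rng.integers(0, 2, size=100)                 # 1 = risky sample
w, b = rng.normal(size=5), 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

y_hat = sigmoid(X @ w + b)                       # y^ = sigma(w^T x + b)
loss = -(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))   # per-sample loss
cost = loss.mean()                               # cost over the whole data set
print(cost)
```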


Table 10.2 Algorithm flow of balance cascade
Algorithm balance cascade. Input: positive sample set P, negative sample set N, where the number of positive samples is far smaller than the number of negative samples, |P| ≪ |N| …

x > 0,  s > 0,  xs = μe.    (18.4)

Then (x(ν, μ), s(ν, μ)) is called the μ-center of (LCP_ν). The main process of solving the LCP by an IPM is to approximately track the central path at each iteration while reducing μ. Taking ν = μ/μ^0, ν and μ are reduced simultaneously, and the solution of (LCP) is then obtained from (LCP_ν).

18.2.2 Search Directions

Let

v := \sqrt{ \frac{xs}{μ} }.    (18.5)


Then the pair (x, s) is the μ-center (x(μ), s(μ)) if and only if v = e. Suppose there is a smooth, strictly convex function Ψ(v) that attains its minimum value 0 at v = e. Then Ψ(v) can serve both as a measure of the distance between (x, s) and the μ-center and as a tool for defining the search direction.

Modified search direction for feasibility steps. For some θ in (0, 1), applying Newton's method directly to the system (18.4) gives the feasibility search directions Δ^f x and Δ^f s:

M Δ^f x − Δ^f s = θ ν r^0,
s Δ^f x + x Δ^f s = −xs + μe.    (18.6)

In this paper, we modify the second equation as

s Δ^f x + x Δ^f s = 0.    (18.7)

Then we obtain the feasibility direction (Δ^f x, Δ^f s) from

M Δ^f x − Δ^f s = θ ν r^0,
s Δ^f x + x Δ^f s = 0.    (18.8)

Since the matrix of the system is non-singular, the solution is unique. Define the scaled feasibility directions d_x^f and d_s^f by

d_x^f := \frac{v Δ^f x}{x},  d_s^f := \frac{v Δ^f s}{s}.    (18.9)

Then system (18.8) can be transformed into

M S^{−1} X d_x^f − d_s^f = θ ν v s^{−1} r^0,
d_x^f + d_s^f = 0,    (18.10)

where X and S are the diagonal matrices of the vectors x and s, respectively.

Search directions for centering steps. As in Sect. 2.2 of [17] for LO, for some μ > 0 a natural idea is to define the search direction (Δ^c x, Δ^c s) by

M Δ^c x − Δ^c s = 0,
s Δ^c x + x Δ^c s = −μ v ∇Ψ(v).    (18.11)

18 Simplified Full-Newton Step Method with a Kernel Function

dxc := vc x/x, dsc := vc s/s.

199

(18.12)

Then system (18.11) becomes M S −1 X dxc − dsc = 0, dxc + dsc = −∇(v).

(18.13)

18.2.3 The Iterative Process of the Algorithm Assuming that the initial point is close to the neighborhood of μ-center, for the (small) threshold τ > 0, set the neighborhood to be (v) ≤ τ . The initial iteration point of the algorithm satisfies the condition, obviously (x 0 , s 0 ) is the μ0 -center of (LC Pν ), at this time there are ν = 1 and (v 0 ) = 0. Each major iteration includes a feasibility step and some central steps. For θ ∈ (0, 1), we reduce μ to μ+ := (1 − θ )μ, and then ν is reduced to ν + := (1 − θ )ν at the same time. Perform the feasibility step first. The perturbation problem (LC Pν + ) is strictly feasible and the new iteration tends to μ+ -center. However, the new iteration may still have a certain distance from the center of μ+ but satisfy (x f , s f ; μ+ ) ≤ τ f (τ f < 1). At this point, some central steps are taken to obtain the strictly feasible point (x + , s + ) of (LC Pν + ) and limit it to the required neighborhood, i.e., (x + , s + ; μ+ ) ≤ τ. Algorithm 1. IIPM for Monotone LCP.

200

H. Bi et al.

A neighborhood measurement parameter Precision parameter

;

;

Update parameter

;

begin: initial iteration points While solve (18.10) to get the feasibility step:

reduce

While

and update

do

solve (18.13) to get centering steps:

end end end

18.3 The Properties of the Simple Function and the Proximity Function

This paper uses the same kernel function as Zhang and Xu [14] used in solving semi-definite programming, namely

ψ(t) = (t − 1)^2.    (18.14)

It follows directly from this definition that ψ'(t) = 2(t − 1), and the corresponding scaled centering equation becomes

d_x + d_s = 2(e − v).    (18.15)

18 Simplified Full-Newton Step Method with a Kernel Function

201

Define proximity measure δ(υ) := e − v.

(18.16)

Obviously (v) = δ(υ)2 and (v) = 0 ⇐⇒ v = e. Lemma 3.1 The relationship between v and δ(v) is v ≤



n + δ(v).

Proof. It is easy to get from v = e − (e − v) ≤ e + δ(v). Lemma 3.2 The relationship between the elements of v and δ(v) is 1 − δ(v) ≤ vi ≤ 1 + δ(v), 1 ≤ i ≤ n. Proof. Obviously |vi − 1| ≤ v − e = δ(v).

18.4 Algorithm Analysis 18.4.1 The Feasibility Step In this section, our main task is to find the appropriate value of the step size θ and the threshold parameter τ f so that the new iteration is still feasible and satisfies (v f ) = (x f , s f ; μ+ ) ≤ τ f . The new iterates are x f := x +  f x, s f := s +  f s. After reducing μ+ = (1−θ )μ, by the definition of ν, we know that ν + = (1−θ )ν, from (18.8) we have s f − M x f − q = ν +r 0 . Defining 

 vˆ :=

xfsf f , v := μ f

xfsf . μ+

(18.17) f

From (18.9) we get x f = (x/v)(v + dx ) and s f = (s/v)(v + ds ), by xs = μv 2 , we get

202

H. Bi et al.

 f

vˆ =

f

(v + dx )(v + ds ).

(18.18)

Lemma 4.1 ([15], Lemma 3.2) Assuming t1 , t2 ≥ 1/2, then √ ψ( t1 t2 ) ≤ (ψ(t1 ) + ψ(t2 ))/2.

(18.19)

Following the definition of exponential convexity in [17, 18], when t1 , t2 ≥ 1/2, it is also said that ψ(t) is e-convex in this paper. Lemma 4.2 ([15], Lemma 4.4) Let 0 < θ < 1. Then 



v

 √ 1−θ

≤ (v) +

θ v2 . 1−θ

(18.20)

By Lemma 4.1, we may assume that vi + dxfi ≥ 1/2, vi + dsfi ≥ 1/2, 1 ≤ i ≤ n,

(18.21)

And the above inequality holds if vmin − dxf  ≥ 1/2.

(18.22)

Hence, by Lemma 3.2, we assume that dxf  ≤ 1/2 − δ(v).

(18.23)

After performing the feasible step, the value of (v) will become larger when μ is reduced. The following inequality shows that this change is controllable. Lemma 4.3

(v f ) ≤ (v) +

√ 1 (θ ( n + δ(v))2 + dxf 2 ). 1−θ

Proof. By (18.17), (18.18), (18.20) and exponential convexity we have θ (v f )  21 ((v + dx ) + (v + ds )) + 1−θ (v2 + dx 2 )  f f f 2 f2 n = (v) + 21 i=1 [2(vi − 1)(dxi + dsi ) +(d xi + d si )] f

θ v2 + + 1−θ

f

f

f θ dx 2 , 1−θ

f

the lemma follows from ds = −dx .

f

(18.24)

18 Simplified Full-Newton Step Method with a Kernel Function

203

f

18.4.2 Upper Bounds for d x  For the convenience of analysis, we assume that the LCP has a solution x ∗ , s ∗ such that x ∗ ∞ ≤ ρ p , s ∗ ∞ ≤ ρd , Me∞ ≤

ρd , q∞ ≤ ρd . ρp

(18.25)

We choose the initial points with x 0 = ρ p e, s 0 = ρd e, μ0 = ρ p ρd .

(18.26)

Lemma 4.4 ([14], Lemma 16) Let the initial point (x 0 , s 0 ) be the value in (18.26). Then for any (x, s) that makes the perturbation problem (LC Pv ) feasible, there is √ x1 ≤ (( n + δ(v))2 + 2n)ρ p . Lemma 4.5

dxf  ≤

√ 3θ (( n + δ(v))2 + 2n). (1 − δ(v))

(18.27) (18.28)

Proof. By [12], Lemma 5.5, let u = dx , z = ds , a = θ νv Ds −1 r 0 , b = 0, then f

f

Ddxf 2 + Ddsf 2 ≤ 2θ νv Ds −1 r 0 2 . f

f

since ds = −dx , we have Ddxf 2 ≤ θ νv Ds −1 r 0 2 . Elementary properties of norms imply that Ddxf  ≤ Ddxf , θ νv Ds −1 r 0  ≤ Dθ νvs −1 r 0 . We obtain the weak condition dxf  ≤ θ νvs −1 r 0 . Using ν = μ/μ0 , v =



xs/μ, we have f

dx  ≤ ≤ ≤



θ μ μ0



xs −1 0 s r  μ

μ θ  xs r 0 ∞ x1 μ0 θ r 0 ∞ x1 . μ0 vmin

204

H. Bi et al.

Since r 0 ∞ = s 0 − M x 0 − q∞ ≤ s 0 ∞ + M x 0 ∞ + q∞ ≤ 3ρd , by Lemmas 3.2 and 4.4 the result follows.

18.4.3 Values of θ and τ By (18.23) and (18.28), the exponential convexity holds if √ 1 δ(v) ( n + δ(v))2 + 2n θ≤ − . 1 − δ(v) 6 3

(18.29)

On this condition, setting δ(v) =

1 , 4

simple calculation can get θ=

1 1 ,τ = . 54n 16

Then Lemma 4.3 implies that

1  v f ≤ τ f := . 4

18.4.4 Centering Steps (v f ) is less than τ f = 41 , at this time, a central step needs to be performed to make

1  v f ≤ τ = 16 . The centering step is given by the system (18.11): x + := x + c x, s + := s + c s. Let

18 Simplified Full-Newton Step Method with a Kernel Function

v+ =



205

x + s + /μ.

(18.30)

The following theorem comes from Lemma 7 and Theorem 1 in [14]. Theorem 4.1 If (v) < 1, then the new iteration (x + , s + ) is strictly feasible, and the central step is quadratic convergent, that is (v + ) < (v)2 . Therefore, at most one center step iteration (x + , s + ) satisfies

1 .  v+ ≤ τ = 16

18.4.5 Total Number of Main Iterations

Each main iteration requires at most 2 inner iterations; since θ = 1/(54n), the upper bound on the total number of iterations is

108 n \log \frac{ \max\{ (x^0)^T s^0, ‖r^0‖ \} }{ ε }.

Finally, we give the iteration complexity result of the algorithm.

Theorem 4.2 Assume that (LCP) has an optimal solution (x^*, s^*) with ‖x^*‖_∞ ≤ ρ_p and ‖s^*‖_∞ ≤ ρ_d. Then the algorithm finds a solution meeting the accuracy requirement after at most

108 n \log \frac{ \max\{ (x^0)^T s^0, ‖r^0‖ \} }{ ε }

iterations.

18.5 Conclusion

Using a simple kernel function, we propose a modified full-Newton step IIPM for monotone LCP. The exponential convexity property is used to simplify the analysis of the algorithm, and the algorithm attains the same complexity bound as existing methods of this type. Our algorithm is a small-update method; how to tune the parameters in practical computation, or how to design a large-update IIPM, needs further study.


Acknowledgements The authors appreciate the financial support from the Research Fund of Fundamentals Department of Air Force Engineering University (JK2019203).

References 1. Kojima, M., Mizuno, S., Yoshise, A.: A primal-dual interior point algorithm for linear programming. In: Megiddo N. (Ed.) Progress in Mathematical Programming, pp. 29–47. Springer, New York (1989) 2. Megiddo, N.: Pathways to the optimal set in linear programming. In: Megiddo N. (Ed.) Progress in Mathematical Programming, pp. 131–158. Springer, New York (1989) 3. Kojima, M., Mizuno, S., Yoshise, A.: A polynomial-time algorithm for a class of linear complementarity problems. Math. Program. 44, 1–26 (1989) 4. Darvay, Zs.: New interior-point algorithms in linear programming. Adv. Model. Optimz. 5(1), 51–92 (2003) 5. Mansouri, H., Pirhaji, M.: A polynomial interior-point algorithm for monotone linear complementarity problems J. Optim. Theory Appl. 157, 451–461 (2013) 6. Bai, Y.Q., El Ghami, M., Roos, C.: A new efficient large-update primal-dual interior-point method based on a finite barrier. SIAM J. Optim. 13(3), 766–782 (2002) 7. Zhang, L., Xu, Y.: A full-newton step interior-point algorithm based on modified newton direction. Oper. Res. Lett. 39, 318–322 (2011) 8. Kheirfam, B., Mahdavi-Amiri, N.: A new interior-point algorithm based on modified NesterovTodd direction for symmetric cone linear complementarity problem. Optim. Lett. 8, 1017–1029 (2014) 9. Liu, Z., Sun, W.: A full-NT-step infeasible interior-point algorithm for SDP based on kernel functions. Appl. Math. Comput. 217, 4990–4999 (2011) 10. Wang, W., Bi, H., Liu, H.: A full-Newton step interior-point algorithm for linear optimization based on a finite barrier. Oper. Res. Lett. 44(6), 750–753 (2016) 11. Roos, C.: A full-Newton step O(n) infeasible interior-point algorithm for linear optimization. SIAM J. Optim. 16, 1110–1136 (2006) 12. Mansouri, H., Zangiabadi, M., Pirhaji, M.: A full-Newton step O(n) infeasible-interior-point algorithm for linear complementarity problems. Nonlinear Anal. Real World Appl. 12(1), 545–561 (2011) 13. Zhang, D., Zhang, M.: A full-Newton step infeasible interior-point algorithm for P*(κ)-linear complementarity problems J. Syst. Sci. Complex. 27, 1027–1044 (2014) 14. Zhang, L., Bai, Y., Xu, Y.: A full-Newton step infeasible interior-point algorithm for monotone LCP based on a locally-kernel function. Numer. Algor. 61, 57–81 (2012) 15. Wang, W., Liu, H., Bi, H.: Simplified infeasible interior-point algorithm for linear optimization based on a simple function. Oper. Res. Lett. 46(5), 538–542 (2018) 16. Kojima, M., Megiddo, N., Noma, T.: A unified approach to interior point algorithms for linear complementarity problems. In: Lecture Notes in Computer Science. Springer (1991) 17. Bai, Y., El Ghami, M., Roos, C.: A comparative study of kernel functions for primal-dual interior-point algorithms in linear optimization. SIAM J. Optim. 15(1), 101–128 (2004) 18. Peng, J., Roos, C., Terlaky, T.: A new and efficient large-update interior-point method for linear optimization. J. Comput. Tech. 6, 61–80 (2001)

Chapter 19

Marine Ship Identification Algorithm Based on Object Detection and Fine-Grained Recognition Xingyue Du, Jianjun Wang, Yiqing Li, and Bingling Tang

Abstract To solve the problem of low accuracy, reliability, and generalization ability of traditional ship recognition methods, a fusion classifier for ship detection and recognition based on deep learning is presented. Accurate positioning and identification of offshore ships in visible images can be achieved by building data sets, object detection, fine-grained recognition, and other technical means. Experiments show that the recognition accuracy of this method can reach 84.7% on the built ship image test set, and the recognition efficiency is high. The validity of this method is verified.

19.1 Introduction

As important transportation equipment in the new era, ships in territorial seas and international hot spots must have their information obtained quickly, which is of great significance for national defense security. Since most scenes in which naval vessels are detected and identified lie on the open sea, remote sensing satellites or UAV aerial photography are usually used for image acquisition. Methods based on big data and deep learning have achieved good results in image classification, target detection, and target segmentation, and have been deployed in many industries. Image classification is the most basic task in computer vision and the benchmark against which almost all models are compared; representative deep-learning classification networks include aggregated residual transformations for deep neural networks [1] and convolutional neural networks with dense connections [2]. Target detection is essentially a numerical optimization problem [3–5], and distributed techniques can be used when the amount of training data is large [6, 7].

X. Du (B) · J. Wang · Y. Li · B. Tang
CSSC Ocean Exploration Technology Institute Co. Ltd, Wuxi, China
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022
K. Nakamatsu et al. (eds.), Advanced Intelligent Technologies for Industry, Smart Innovation, Systems and Technologies 285, https://doi.org/10.1007/978-981-16-9735-7_19


Fig. 19.1 Overall process

In this paper, by means of data augmentation, a set of marine ship data set under visible light images is established. The target detection network and the fine-grained recognition network with skills are used for training, and the two trained networks are fused to obtain a fusion classifier. Experimental results show that the fusion classifier has good classification performance and higher accuracy than the common deep learning network.

19.2 Overall Process

The method in this paper consists of three steps. First, collect and annotate ship data and construct datasets of different ship types. Then train the target detection model, using the YOLOv5 model as the base model to locate ships at sea. Finally, the detected ship image patch is fed into the fine-grained identification network. The overall process is shown in Fig. 19.1.

19.3 Design Scheme

19.3.1 Construction of Data Sets

The rationality of the dataset design directly affects the performance of the algorithm: the more images in the dataset and the more complete the types, the more accurate and robust the algorithm. We collected a set of images of sea vessels, grouped them by vessel type, and annotated the images in each group. Seven categories of ships were selected based on the initial requirements. Because the raw data are not necessarily accurate, the collected pictures may not show the claimed country or vessel type, so data cleaning was required, that is, every picture was checked and classified manually. We ultimately built a dataset of 10,599 images covering seven types: aircraft carriers, destroyers, cruisers, frigates, amphibious warships, other ships, and civilian ships. Among them, 1,000 are aircraft carriers, 287 cruisers, 2,909 destroyers, 2,783 frigates, 1,121 amphibious warships, 1,152 other ships, and 1,347 civilian ships.


Fig. 19.2 YOLOV5 network structure

19.3.2 Target Detection Network

Given the many problems encountered when detecting targets in shipborne scenes, this paper adopts YOLOv5 as the basic detection model; its network structure is shown in Fig. 19.2. As a single-stage detector, YOLOv5 completes target detection and classification within one network. For a single input image, YOLOv5 extracts features at three different scales through a convolutional neural network and, based on preset anchors, predicts the offsets of the four box coordinates relative to the anchors together with the confidence and classification score of the target in each box. Non-maximum suppression then removes predictions with low confidence or large overlap, yielding the final detections. Because all of these operations are carried out in one neural network, the detection speed is much higher than that of traditional detection pipelines, and the multi-scale prediction improves the detection of targets of different sizes, especially small targets.
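The sketch below illustrates the detection-then-recognition pipeline of Fig. 19.1, assuming the publicly available yolov5s weights loaded via torch.hub and a random placeholder image; the fine-grained classifier call is only indicated and is not the authors' trained model.

```python
import numpy as np
import torch
from PIL import Image

detector = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

# A random image stands in for a real sea-surface photograph.
img = Image.fromarray((np.random.rand(640, 640, 3) * 255).astype("uint8"))
results = detector(img)
for *xyxy, conf, cls in results.xyxy[0].tolist():
    x1, y1, x2, y2 = map(int, xyxy)
    crop = img.crop((x1, y1, x2, y2))            # detected ship image patch
    # ship_type = fine_grained_classifier(crop)  # Sect. 19.3.3 model (not shown here)
```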

19.3.3 Fine-Grained Identification Network

The difference between fine-grained image classification and general image classification is that in general image classification the target object belongs to the


Fig. 19.3 Different types of similar ships: (a), (b) destroyers; (c) a frigate

meta-category of the coarse category, such as cat, dog, bird, and car, and these categories look very different in appearance. However, in fine-grained image recognition, the objects to be recognized all belong to the same meta-category. The fine-grained characteristics will lead to the high similarity of subcategories, resulting in small inter-class differences, and the large intra-class differences due to the pose, size, angle, and other factors. These challenges undoubtedly increase the difficulty of fine-grained ship classification. Ship identification is a fine-grained classification task with large variance within the class. Ships generally belong to the ship category, and we need to distinguish the subcategories, that is, the ships belonging to the seven types. Because objects are all subclasses of a meta-category, the fine-grained nature of the objects causes them to look very similar. As shown in the following figure, Fig. 19.3a and b belong to the same category as destroyers, and Fig. 19.3c belongs to frigates, which are different from Fig. 19.3a and b. Such classification is extremely difficult, and the general deep learning recognition model cannot well distinguish (a), (b), and (c). An end-to-end approach is used to train the fine-grained recognition network. The method is divided into the following four parts: data enhancement design, loss function design, optimizer design, and learning strategy design.

19.3.3.1 Data Enhancement Design

In deep learning, the larger the scale and the higher the quality of the data, the better the generalization ability of the model; the data essentially determine the upper limit of what the model can learn. In practice, however, the data set is rarely large enough to cover all scenarios, and this is also the case in the ship fine-grained identification task. The collected database currently contains 10,563 training pictures across the 7 categories, which is a relatively small number. In addition, the smallest category has only 287 training images, so the data are severely imbalanced. Therefore, data enhancement is used to generate more equivalent data from the limited samples: several augmentation methods are applied to expand the data and improve the generalization ability of the model, as sketched below.
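The paper does not list the exact augmentation operations, so the following is only a minimal sketch of a typical pipeline for ship images, assuming PyTorch/torchvision; the specific transforms and parameter values are illustrative assumptions, not the authors' settings.

```python
# Hypothetical augmentation pipeline; operations and parameters are illustrative only.
import torchvision.transforms as T

train_transforms = T.Compose([
    T.RandomResizedCrop(224, scale=(0.7, 1.0)),                    # random scale and crop
    T.RandomHorizontalFlip(p=0.5),                                 # mirror ship silhouettes
    T.ColorJitter(brightness=0.3, contrast=0.3, saturation=0.2),   # sea/sky lighting changes
    T.RandomRotation(degrees=5),                                   # small horizon tilt
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
```

Oversampling the under-represented classes (e.g., cruisers) with a weighted sampler is one possible way to address the imbalance mentioned above.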

19.3.3.2 Model Design

For a convolutional neural network, the core computation is the convolution operator, which maps the input feature map to a new feature map through convolution kernels. In essence, convolution fuses the features of a local region, both spatially (the H and W dimensions) and across channels (the C dimension). ResNeXt101 is used as the backbone model, followed by a classifier, so that recognition features with strong discriminative power can be learned. ResNeXt combines ideas from ResNet and Inception: in Fig. 19.4, ResNet's residual structure is shown on the left and ResNeXt's residual structure on the right. ResNeXt performs grouped convolution inside the residual module, which widens the network and improves performance without increasing the parameter complexity. In particular, we want the model to focus on the relationships between channels and to learn the importance of the different channel features automatically. Therefore, the Squeeze-and-Excitation (SE) module is applied to the ResNeXt network. As shown in Fig. 19.5, the SE module first applies a squeeze operation (global average pooling) to the feature map produced by convolution to obtain channel-level global features; an excitation operation is then performed on these global features to learn the relationships between the channels and produce a weight for each channel, and the final feature map is obtained by multiplying these weights with the original feature map. In essence, the SE module performs attention, or gating, along the channel dimension: this attention mechanism lets the model pay more attention to the most informative channel features and suppress the unimportant ones. A further advantage is that SE modules are generic, which means they can be embedded into existing network architectures.

Fig. 19.4 ResNet and ResNeXt

Fig. 19.5 SENet
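A minimal PyTorch sketch of such a Squeeze-and-Excitation block is shown below; the reduction ratio of 16 is a common default and is an assumption, not a value given in the paper.

```python
# Sketch of a Squeeze-and-Excitation block (channel attention).
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.squeeze = nn.AdaptiveAvgPool2d(1)              # channel-wise global average pooling
        self.excitation = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                                   # per-channel weights in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.squeeze(x).view(b, c)                      # squeeze: (B, C) global features
        w = self.excitation(w).view(b, c, 1, 1)             # excitation: learned channel weights
        return x * w                                        # reweight the original feature map
```

Such a block can be inserted after each residual group of the backbone without changing the rest of the architecture.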

19.3.3.3 Loss Function Design

The loss function measures the error between the predicted value and the true value and is usually written as L(Y, f(x)). The smaller the loss, the more robust the model. Assuming that the training data are drawn independently from the same distribution as the population, the generalization error of the model can be reduced by minimizing the empirical error on the training data. The classification labels are softened and the cross-entropy classification loss is used. Because ship fine-grained identification suffers from small inter-class variation and large intra-class variation, we optimize the classification boundary with a margin-based classification method. The multi-class cross-entropy loss is shown in Formula (19.1):

Loss = -\sum_{i=1}^{n} y_i \log(p(x_i)) = -\sum_{i=1}^{n} y_i \log\left(\frac{e^{W_{y_i}^{T} x_i}}{e^{W_{y_i}^{T} x_i} + \sum_{j \neq y_i} e^{W_{j}^{T} x_i}}\right) \quad (19.1)

y_i is the softened one-hot classification label. As the formula shows, a margin needs to be introduced into the classification probability. Normalizing the weights and features, W_j = W_j / \|W_j\| and x = x / \|x\|, gives W_j^{T} x_i = \cos\theta_{j,i}, and the loss function becomes Formula (19.2):

Loss = -\sum_{i=1}^{n} y_i \log(p(x_i)) = -\sum_{i=1}^{n} y_i \log\left(\frac{e^{s\cos(\theta_{y_i,i})}}{e^{s\cos(\theta_{y_i,i})} + \sum_{j \neq y_i} e^{s\cos(\theta_{j,i})}}\right) \quad (19.2)

We add the margin value to optimize the classification boundary, and the final loss function can be obtained as shown in Formula (19.3): 

Loss = -\sum_{i=1}^{n} y_i \log(p(x_i)) = -\sum_{i=1}^{n} y_i \log\left(\frac{e^{s(\cos(\theta_{y_i,i}) - m)}}{e^{s(\cos(\theta_{y_i,i}) - m)} + \sum_{j \neq y_i} e^{s\cos(\theta_{j,i})}}\right) \quad (19.3)
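A minimal PyTorch sketch of a margin-based classification loss of the form (19.3) (an additive cosine margin, CosFace-style) is given below; the scale s and margin m values are assumptions, and the label softening mentioned above could be approximated with the label_smoothing argument of cross_entropy.

```python
# Sketch of an additive-cosine-margin classification loss; s and m are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MarginSoftmaxLoss(nn.Module):
    def __init__(self, feat_dim: int, num_classes: int, s: float = 30.0, m: float = 0.35):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.s, self.m = s, m

    def forward(self, feats: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # cos(theta_{j,i}) from normalized class weights and normalized features
        cos = F.linear(F.normalize(feats), F.normalize(self.weight))
        # subtract the margin m only from the target-class cosine, then scale by s
        onehot = F.one_hot(labels, cos.size(1)).float()
        logits = self.s * (cos - self.m * onehot)
        return F.cross_entropy(logits, labels)
```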

As shown in Fig. 19.6, Fig. 19.6a is the classification boundary of the cross-entropy loss function, and Fig. 19.6b is a schematic diagram of the classification boundary after optimization. The optimized boundary in Fig. 19.6b is more reasonable and more discriminative.


Fig. 19.6 Classification boundary

Fig. 19.7 Gradient descent diagram

19.3.3.4 Optimizer Design

The optimizer reduces the loss function by adjusting the model parameters. The most basic method is stochastic gradient descent; in practice, mini-batch gradient descent is generally used because the training data are fed to the model in batches. The parameter update is driven by the gradient of the loss function with respect to the model parameters, i.e., the parameters are moved along the negative gradient direction to minimize the loss. The basic strategy can be read as "finding the fastest way down the hill within a limited visual range": at each step, move in the direction of steepest descent (the gradient), as illustrated in Fig. 19.7. However, plain gradient descent easily falls into local optima. A momentum factor is therefore added to the gradient update, which accelerates descent: because the current update is influenced by the previous one, the behaviour resembles a ball rolling downhill with inertia, which helps it escape saddle points. Setting the learning rate manually also ignores the fact that the learning rate should change during training; since it strongly influences model performance, an adaptive scheme is needed, in which the gradient history is used to adapt the step size of each parameter. The Adam optimizer is used, which incorporates momentum into the estimate of the first-order moment of the gradient and applies bias correction to the first- and second-order moment estimates.
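The two strategies discussed here can be set up as follows; the learning rates, momentum, and beta values are illustrative assumptions, and the linear layer is only a placeholder model.

```python
# Illustrative optimizer setups; hyperparameters are assumptions, not the paper's values.
import torch

model = torch.nn.Linear(256, 7)   # placeholder model for illustration

# SGD with a momentum factor: past gradients accumulate like a ball rolling with inertia
sgd = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

# Adam: bias-corrected estimates of the first and second moments of the gradient
adam = torch.optim.Adam(model.parameters(), lr=1e-3, betas=(0.9, 0.999))
```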


19.4 Experimental Verification

The weight files of the two networks are obtained by training with the network structures and training methods described above. The pre-divided data sets are fed, respectively, into the target detection network, the fine-grained identification network, and the network combining target detection with fine-grained identification, and the results are compared. The classification accuracy (Acc) of each class and the average classification accuracy (mAcc) are reported. The classification accuracy is calculated as shown in Formula (19.4):

Acc = \frac{1}{N} \sum_{i=1}^{N} \mathbb{I}\left(pred(x_i) = y_i\right) \quad (19.4)

N is the total number of samples of the category, and \mathbb{I}(\cdot) is an indicator function that equals 1 when its argument is true and 0 otherwise. The average classification accuracy, i.e., the mean accuracy (mAcc), is the average over all categories, as shown in Formula (19.5):

mAcc = \frac{1}{K} \sum_{i=1}^{K} Acc_i \quad (19.5)

K is the total number of categories and Acc_i is the classification accuracy of the ith category. The recognition accuracies of the different networks are compared in Table 19.1.

Table 19.1 Comparison of recognition accuracy of different networks

Category                                 YOLOV5 network/%   Fine-grained identification network/%   YOLOV5 + fine-grained identification network/%
Aircraft carrier (Acc)                   84.7               86.3                                    87.3
Destroyer (Acc)                          85.9               86.7                                    87.3
Frigate (Acc)                            90.0               90.7                                    90.3
Cruiser (Acc)                            90.1               90.7                                    92.4
Amphibious warship (Acc)                 87.6               89.3                                    91.1
Civilian (Acc)                           87.0               90.2                                    90.5
Other ships (Acc)                        50.3               51.7                                    54.1
Overall classification accuracy (mAcc)   82.2               83.7                                    84.7

It can be seen from Table 19.1 that the fusion method, which first uses the target detection network to locate the ship and then feeds the detected ship image into the fine-grained identification network, has an overall classification accuracy 2.5% higher


than that of the target detection network alone and 1% higher than that of the fine-grained identification network alone. Except for the frigate, the classification accuracy of every category is also the highest for the fusion method, indicating that the performance of the fusion classifier has been effectively improved.
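For reference, the metrics of (19.4) and (19.5) can be computed as in the sketch below; `preds` and `labels` are assumed to be integer class indices for the test images and are not part of the original text.

```python
# Sketch of per-class accuracy (19.4) and mean accuracy (19.5).
import numpy as np

def per_class_accuracy(preds: np.ndarray, labels: np.ndarray, num_classes: int):
    accs = []
    for c in range(num_classes):
        mask = labels == c                                   # samples of category c
        accs.append(float((preds[mask] == c).mean()) if mask.any() else 0.0)
    return accs

def mean_accuracy(preds: np.ndarray, labels: np.ndarray, num_classes: int) -> float:
    return float(np.mean(per_class_accuracy(preds, labels, num_classes)))
```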

19.5 Conclusions and Prospects

In this paper, a marine ship data set of visible-light images is established. A target detection network and a fine-grained recognition network are trained and then fused into a single classifier, which is used to detect and recognize the images in the test set. The experimental results show that the fusion classifier achieves the best classification performance, with an overall classification accuracy of 84.7%, giving it a clear advantage over the other deep learning networks compared here. Ships are important transportation equipment in the new era, and quickly obtaining information about ships in national territorial waters and international hot spots is of great significance for national defense security. This project addresses the practical requirements of ship identification in the marine environment and explores fine-grained neural network methods; the results obtained can serve as an aid to intelligent maritime decision-making. Future work will add top-view satellite images to detect and identify the nationality, type, and other information of warships.

References

1. Xie, S., Girshick, R., Dollár, P., et al.: Aggregated residual transformations for deep neural networks. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2016)
2. Huang, G., Liu, Z., Laurens, V., et al.: Densely connected convolutional networks. IEEE Comput. Soc. (2016)
3. Abualigah, L., Diabat, A.: Advances in sine cosine algorithm: a comprehensive survey. Artif. Intell. Rev. 54, 2567–2608 (2021)
4. Abualigah, L., et al.: Aquila optimizer: a novel meta-heuristic optimization algorithm. Comput. Ind. Eng. 157, 107250 (2021)
5. Abualigah, L., Diabat, A., Mirjalili, S., Abd Elaziz, M., Gandomi, A.H.: The arithmetic optimization algorithm. Comput. Methods Appl. Mech. Eng. 376, 113609 (2021)
6. Abualigah, L., Alkhrabsheh, M.: Amended hybrid multi-verse optimizer with genetic algorithm for solving task scheduling problem in cloud computing. J. Supercomput. (2021)
7. Abualigah, L., Diabat, A., Elaziz, M.A.: Intelligent workflow scheduling for big data applications in IoT cloud computing environments. Cluster Comput. (2021)

Chapter 20

Automatic Detection of Safety Helmet Based on Improved YOLO Deep Model

Nan Ni and Chao Hu

Abstract Safety helmets are important equipment for ensuring worker safety on construction sites, so it is necessary to detect whether construction workers are wearing them. In this paper, we propose an improved YOLO deep model based on the YOLOv5 algorithm to detect head and helmet objects. First, K-means clustering is used to select 12 representative anchor boxes of appropriate size for the head and helmet objects. Then the model's detection scales are extended to better capture the features of small or overlapping objects. Because the goal is to ensure safety, the model's performance on head detection is the more important one; to minimize the false negative rate of head detection, the loss function is modified to calculate the loss caused by different classes of objects separately. In experiments, the improved model increases the overall mAP by 1.7% compared with the basic YOLOv5, reaching 96.2%, and the recall of head detection increases by 3.3% to 96.8%. The overall performance shows that the improved model has practical significance for construction safety.

20.1 Introduction

Guaranteeing the safety of workers on construction sites is an important issue. Due to a lack of safety awareness, many workers enter construction sites without wearing safety helmets, which can cause serious safety hazards. It is therefore necessary to detect efficiently whether a worker is wearing a helmet. Traditional human surveillance is inefficient and consumes considerable human resources. With advances in image processing, machine learning, and deep learning, automatic object detection algorithms have become the basis for automatic surveillance systems.


In the early days, object detection was usually performed through hand-crafted feature extraction followed by a classifier. Representative features include the histogram of oriented gradients (HOG) [1], the scale-invariant feature transform (SIFT) [2], and speeded-up robust features (SURF) [3]; representative classifiers are support vector machines (SVM) [4] and XGBoost [5]. In industrial applications, however, the environment varies significantly and the features of the objects themselves are much more complex, so it is hard to design a feature extractor that takes all these factors into account, and detectors trained on such features may generalize poorly. In the field of deep learning, image object detectors can be roughly divided into two categories. Two-stage detectors, such as R-CNN [6], Fast R-CNN [7], Faster R-CNN [7] and Mask R-CNN [8], are based on region proposals; the two stages are separated by a region-of-interest pooling layer. In Faster R-CNN, the first stage is a region proposal network that generates candidate object bounding boxes, and in the second stage classification and bounding-box regression are performed on the features of each candidate box extracted by the region-of-interest pooling layer. Detectors of this type have relatively high localization ability and detection accuracy, but they struggle with real-time detection. One-stage detectors, such as YOLO [9], SSD [10] and DSSD [11], predict boxes directly from the input image without region proposals, within one entire convolutional neural network; they can meet real-time requirements, although they may lose some detection accuracy and localization ability. After several versions, the YOLO network model has stood out in the field of real-time detection. YOLOv2 [12] borrows the idea of predefined anchor boxes from Faster R-CNN as prior bounding boxes to improve the predicted boxes, and it also introduces batch normalization [13]. YOLOv3 [14] adopts an FPN [15] architecture to increase the number of prediction scales. We therefore implement safety helmet detection based on the recent YOLOv5 network model and then improve the model for the task at hand. The final experimental results show that the improved model makes clear progress on the key metrics and has good application value.

20.2 YOLOv5 Network Model

YOLOv5 is an improved network model based on YOLOv4 [16]. Figure 20.1 exhibits the network structure of YOLOv5.

20.2.1 Input

Fig. 20.1 YOLOv5 network structure. The network can be roughly divided into four parts, each responsible for a corresponding function and composed of several modules; the internal structure of the modules is shown in the lower left part of the figure

YOLOv5 still uses predefined anchor boxes as prior bounding boxes, like YOLOv3, together with the Mosaic data augmentation introduced by YOLOv4. Mosaic

randomly picks four pictures, which are then randomly scaled and spliced together, greatly enriching the detection dataset. In particular, the random scaling adds many small targets, making the trained model more robust.

20.2.2 Backbone

YOLOv4 adopts CSPDarknet53 as its backbone, introducing the CSP module from the 2019 CSPNet [17]. CSPNet reduces the heavy computation caused by repeated gradient information in the network, which lowers the learning cost of the CNN. YOLOv5 reduces the number of CSP modules and adds a Focus module, which results in faster speed without obvious performance degradation. The SPP [18] module fuses max pooling at multiple scales, which enlarges the receptive field of the backbone features more effectively than a single k*k max pooling.

20.2.3 Neck

In object detection, a neck is usually inserted between the backbone and the output to better extract and fuse features. YOLOv5 uses the combined feature pyramid network (FPN) and path aggregation network (PAN) [19] structure introduced by YOLOv4, as shown in Fig. 20.2. In addition, YOLOv5 uses the CSP2 structure instead of ordinary convolution layers to strengthen feature fusion.


Fig. 20.2 The FPN layer transmits strong semantic features from top to bottom and the PAN layer conveys strong localization features from bottom to top

20.2.4 Output

The output of the network consists of prediction bounding box parameters at three different scales; each box contains four position parameters, a confidence parameter, and a classification parameter. During training, the loss of YOLOv5 consists of three parts, the GIoU loss, the object confidence loss, and the classification loss, as shown in (20.1):

loss = L_{giou} + L_{obj} + L_{cls} \quad (20.1)

The GIoU loss, shown in (20.2), represents the loss caused by the difference between the prediction bounding box and the ground truth box, and replaces the MSE loss used in YOLOv3. Compared with the basic IoU, GIoU can also perceive the distance between the prediction box and the ground truth box. In (20.2), i indexes the grid cells and j the prediction bounding boxes of a cell; B_{ij} is the prediction bounding box, \hat{B}_{ij} the corresponding ground truth box, and C_{ij} the smallest box covering both B_{ij} and \hat{B}_{ij}. The indicator 1_{ij}^{obj} is 1 when the current prediction bounding box is a positive sample and 0 otherwise. Positive and negative samples are determined by comparing a predefined IoU threshold with IoU_{ij}, the IoU between the prediction bounding box and the ground truth box: if the IoU is greater than the threshold, the prediction box is treated as a positive sample during training, which also alleviates the bias caused by the imbalance between positive and negative samples.

L_{giou} = \lambda_{iou} \sum_{i=0}^{S^2} \sum_{j=0}^{B} 1_{ij}^{obj} \left(1 - IoU_{ij} + \frac{\left|C_{ij} - B_{ij} \cup \hat{B}_{ij}\right|}{\left|C_{ij}\right|}\right) \quad (20.2)
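A minimal sketch of the GIoU term for a pair of axis-aligned boxes (given as x1, y1, x2, y2) is shown below; this is only an illustration of the formula, not the YOLOv5 implementation itself.

```python
# Sketch of the per-box GIoU loss term of (20.2).
import torch

def giou_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    # intersection area
    lt = torch.max(pred[..., :2], target[..., :2])
    rb = torch.min(pred[..., 2:], target[..., 2:])
    wh = (rb - lt).clamp(min=0)
    inter = wh[..., 0] * wh[..., 1]
    area_p = (pred[..., 2] - pred[..., 0]) * (pred[..., 3] - pred[..., 1])
    area_t = (target[..., 2] - target[..., 0]) * (target[..., 3] - target[..., 1])
    union = area_p + area_t - inter
    iou = inter / union.clamp(min=1e-7)
    # smallest enclosing box C
    clt = torch.min(pred[..., :2], target[..., :2])
    crb = torch.max(pred[..., 2:], target[..., 2:])
    cwh = (crb - clt).clamp(min=0)
    c_area = cwh[..., 0] * cwh[..., 1]
    giou = iou - (c_area - union) / c_area.clamp(min=1e-7)
    return 1.0 - giou   # loss contribution of one prediction/ground-truth pair
```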

The object confidence loss is calculated separately for positive and negative samples using the MSE of the confidence parameters, and the classification loss uses the classification cross-entropy of the positive samples. When used for detection, YOLO applies non-maximum suppression [20] to filter the prediction bounding boxes; DIoU-based NMS has been added in YOLOv5, which enhances the detection of overlapping objects.


20.3 Improvements

The main purpose of this paper is to propose a model that performs well on the safety helmet detection task. Based on an analysis of the task, we improve the YOLOv5 network model accordingly.

20.3.1 Anchor Boxes and Detection Scale

To better understand the shape distribution of the object bounding boxes in construction-site images, we first analyze the ground truth boxes in the training dataset used in the later experiments. K-means clustering is used to find representative shapes of the object bounding boxes. The distance between an object bounding box and a clustering center box is defined in (20.3), and the objective function is shown in (20.4), where k is the number of clustering centers, n is the total number of object bounding boxes, c_j is the jth clustering center box, and o_{ij} is the ith object bounding box belonging to the jth clustering center (n_j boxes per cluster). When the objective function converges, representative anchor boxes are obtained.

distance(o_{ij}, c_j) = 1 - IoU(o_{ij}, c_j) \quad (20.3)

\arg\max \; avg\_iou = \arg\max \frac{1}{n} \sum_{j=1}^{k} \sum_{i=1}^{n_j} IoU(o_{ij}, c_j) \quad (20.4)
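A sketch of IoU-based K-means over box widths and heights, following (20.3)-(20.4), is shown below; the random initialisation, the use of the cluster mean as the new center, and the iteration count are illustrative choices rather than details given in the paper.

```python
# Sketch of K-means anchor clustering with distance = 1 - IoU over (w, h) pairs.
import numpy as np

def iou_wh(boxes: np.ndarray, centers: np.ndarray) -> np.ndarray:
    # boxes: (n, 2), centers: (k, 2); IoU of boxes anchored at a common corner
    inter = np.minimum(boxes[:, None, 0], centers[None, :, 0]) * \
            np.minimum(boxes[:, None, 1], centers[None, :, 1])
    union = boxes[:, 0:1] * boxes[:, 1:2] + centers[None, :, 0] * centers[None, :, 1] - inter
    return inter / union

def kmeans_anchors(boxes: np.ndarray, k: int, iters: int = 100, seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    centers = boxes[rng.choice(len(boxes), k, replace=False)]
    for _ in range(iters):
        assign = np.argmax(iou_wh(boxes, centers), axis=1)     # max IoU = min (1 - IoU)
        new_centers = np.array([boxes[assign == j].mean(axis=0) if np.any(assign == j)
                                else centers[j] for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return centers
```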

When selecting the number of clustering centers, we find that before the number of clustering centers reaches 13, each additional center increases the overall avg iou obviously, as shown in Fig. 20.3. Therefore, when the number of clustering centers is 13, the clustering center boxes are more representative to serve as anchor boxes. We finally selected 12 cluster center boxes as anchor boxes. On the one hand, these Fig. 20.3 The curve shows the relationship between the value of object function and the number of cluster centers. As the value increases, the currently selected cluster center boxes can better reflect the size information of the objects in the data


anchor boxes are as representative as possible; on the other hand, they can be distributed evenly across the detection scales. At the same time, we find that small objects appear frequently in this task and often overlap. Object bounding boxes belonging to the smallest cluster center box, whose size is 10*12 pixels, account for about 10% of all boxes, and overlapping makes it even harder to capture the features of these objects. The smallest grid size of the detection scales in the basic YOLOv5 model is 8*8 pixels, which makes it difficult to detect objects of around 10*12 pixels. We therefore add a detection scale to guarantee small-object performance, as shown in Fig. 20.4. The newly added detection scale has 160*160 grids, each covering 4*4 pixels, so small objects can be detected more accurately. Because the backbone then needs stronger feature extraction capability, we moderately deepen it and expand the max-pooling fusion scales of the SPP module. In this way, the 12 anchor boxes can be divided equally among the four detection scales according to their size, as shown in Table 20.1.

Fig. 20.4 The improved network model has added a new detection scale with 160*160 grids to detect small objects or overlapping objects with unobvious features

Table 20.1 Anchor box shapes and related detection scales

Detection scale   Shape 1    Shape 2    Shape 3
20 * 20           (59,68)    (88,85)    (122,166)
40 * 40           (34,42)    (65,33)    (45,52)
80 * 80           (20,25)    (40,18)    (27,32)
160 * 160         (10,12)    (18,10)    (16,17)


20.3.2 Loss Function

In this task, head detection performance is the most important: the false negative rate of head detection directly reflects the safety performance of the model. We therefore modify the confidence loss and the classification loss as shown in (20.5) and (20.6), where c denotes the class of the current ground truth object. First, construction-site images generally contain fewer head objects than helmet objects, and every object contributes to the confidence and classification losses with the same weight, so the trained model tends to prioritize helmet detection. To balance the contribution of the different classes to the loss, we add a balance multiplier λ_c whose value is inversely proportional to the number of objects of that class. Second, to reduce the false negative rate of head detection, we amplify the weights of the two related loss terms with the factors α_c and β_c defined in (20.7). When a prediction box is a positive sample and the class of its corresponding ground truth is head, the confidence loss of this prediction box is magnified by α_c, so errors in the predicted confidence of head objects are penalized more strongly. Similarly, β_c magnifies the classification loss when the ground truth class is head, so misclassifying a head object is penalized more strongly.

L_{obj} = \lambda_{obj} \sum_{i=0}^{S^2} \sum_{j=0}^{B} 1_{ij}^{obj} \sum_{c \in classes} \lambda_c \alpha_c \left(B_{ij}^{conf} - \hat{B}_{ij}^{conf}\right)^2 + \lambda_{noobj} \sum_{i=0}^{S^2} \sum_{j=0}^{B} 1_{ij}^{noobj} \sum_{c \in classes} \lambda_c \left(B_{ij}^{conf} - \hat{B}_{ij}^{conf}\right)^2 \quad (20.5)

L_{cls} = - \sum_{i=1}^{S^2} \sum_{j=1}^{B} 1_{ij}^{obj} \sum_{c \in classes} \lambda_c \beta_c \left[\hat{p}_{ij}(c) \lg\left(p_{ij}(c)\right) + \left(1 - \hat{p}_{ij}(c)\right) \lg\left(1 - p_{ij}(c)\right)\right] \quad (20.6)

\alpha_c = \begin{cases} \alpha, & c = head \\ 1, & c \neq head \end{cases}, \qquad \beta_c = \begin{cases} \beta, & c = head \\ 1, & c \neq head \end{cases} \quad (20.7)
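The class-dependent weighting of (20.5)-(20.7) can be realized as in the sketch below. The object counts are taken from the dataset description in Sect. 20.4.1 and α = β = 1.5 from the training section; the specific normalisation used for λ_c is an assumption (any factor inversely proportional to the class count would fit the description).

```python
# Sketch of the class-dependent loss weights lambda_c, alpha_c, beta_c.
class_counts = {"helmet": 18966, "head": 6789}     # object counts reported for the dataset
total = sum(class_counts.values())

# lambda_c inversely proportional to the number of objects of class c (assumed normalisation)
lam = {c: total / (len(class_counts) * n) for c, n in class_counts.items()}
alpha, beta = 1.5, 1.5                             # values used in the experiments

def conf_weight(cls: str) -> float:
    # extra penalty on confidence errors of head objects (alpha_c in (20.7))
    return lam[cls] * (alpha if cls == "head" else 1.0)

def cls_weight(cls: str) -> float:
    # extra penalty on misclassified head objects (beta_c in (20.7))
    return lam[cls] * (beta if cls == "head" else 1.0)
```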


20.4 Experimental Result and Analysis

20.4.1 Dataset

The dataset used in the experiments is composed of images collected online. The images vary in quality and cover diverse construction scenes, so that a model with better generalization ability can be trained. Objects are labeled in VOC format; there are 6789 head objects and 18,966 helmet objects. The dataset is divided into a training set, a validation set, and a test set containing 3500, 500, and 1000 images, respectively. The training set is processed with Mosaic data augmentation before being fed to the network.

20.4.2 Training

The basic model and the improved model are trained with fine-tuned hyperparameters. The newly added parameters α and β in (20.7) are both set to 1.5, and the backbone of the basic model is deepened in the same way as that of the improved model. Figure 20.5 shows the convergence of the loss during training of the improved model. Stochastic gradient descent is used for training, which helps escape local extreme points and prevents overfitting.

Fig. 20.5 Curves of the different loss terms against training epochs

Fig. 20.6 Curves of model’s performance on validation set detection during training process


Figure 20.6 shows the detection performance of the improved model on the validation set during training. After each epoch, the model being trained is evaluated on the validation set, using the default non-maximum suppression parameter of 0.65 and a confidence threshold of 0.01. As training progresses, the performance of the improved model on the validation set keeps improving and finally stabilizes.

20.4.3 Results

We finally test the two trained models on the test set. First, thresholds must be set to filter the prediction bounding boxes into detection outputs. After comparing the detection performance of the trained models on the validation set, we use 0.6 as the non-maximum suppression IoU threshold, which performs better on the validation set. To prioritize a low false negative rate for head detection while balancing overall performance, we set the confidence thresholds separately for the two object classes: 0.35 when the prediction bounding box is classified as head and 0.4 when it is classified as helmet. A prediction box classified as head is therefore more easily adopted as a detection output, reducing the cases in which an actual head object is not detected. An output example of the improved model's detection is shown in Fig. 20.7. To evaluate detection performance we use the following quantities: P stands for precision, R for recall, TP for true positives, FN for false negatives, FP for false positives, and FNR for the false negative rate, which equals 1 − recall.

Fig. 20.7 An output example of the improved model's detection
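The class-dependent confidence filtering described above can be sketched as follows; the `detections` structure (a list of class name, confidence, box tuples produced after NMS) is an assumption for illustration.

```python
# Sketch of per-class confidence filtering with the thresholds quoted above.
CONF_THRESH = {"head": 0.35, "helmet": 0.40}

def filter_detections(detections):
    kept = []
    for cls_name, conf, box in detections:
        # a lower threshold for "head" makes head detections easier to keep
        if conf >= CONF_THRESH.get(cls_name, 0.40):
            kept.append((cls_name, conf, box))
    return kept
```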


Table 20.2 Results of models on object detection

Model      Class    TP     FP    FN    P        R        FNR
Basic      Helmet   3365   201   157   0.9436   0.9554   4.46%
           Head     1097   83    75    0.9297   0.9360   6.40%
Improved   Helmet   3402   153   120   0.9570   0.9659   3.41%
           Head     1135   60    37    0.9498   0.9684   3.16%

There are 4696 objects in the test set, including 3522 helmet objects and 1172 head objects. Table 20.2 shows the detection performance of the two models on the different object classes of the test set. The improved model raises the overall mAP by 1.7%, and all detection metrics improve: the recall and precision of helmet detection increase by more than 1%, and in particular the false negative rate of head detection drops by 3.30% to 3.16%, which is an excellent result for ensuring safety.

Fig. 20.8 The improvement in small object detection can be seen by comparing the left outputs of the basic model with the right outputs of the improved model

Figure 20.8 compares small-object detection by the basic model and the improved model: small objects missed by the basic model are detected by the improved model. In the lower right of the figure, however, two highly overlapping head objects are still not detected. This situation is hard to prevent: on the one hand, the features of highly overlapping objects are not distinctive, and on the other hand, the prediction bounding boxes of highly overlapping objects tend to be filtered out by non-maximum suppression. If we keep lowering the DIoU non-maximum suppression threshold, the precision of the model decreases significantly. In practical applications, however, detection runs in real time and the objects move frequently, so they are not always in a highly overlapping state.

Fig. 20.9 The improvement in classification can be seen by comparing the left outputs of the basic model with the right outputs of the improved model

Figure 20.9 compares the classification results of the basic model and the improved model: head objects misclassified by the basic model are corrected by the improved model. When the classification is ambiguous, the improved model tends to classify the object as a head. Although this increases the probability of misclassifying a helmet object, reducing the probability of misclassifying a head object is more meaningful for ensuring safety in this task.

20.5 Conclusion

In this paper, we have improved the basic YOLOv5 network model for the task of safety helmet detection on construction sites. First, a new detection scale is added, which significantly improves the detection of small and overlapping objects. Second, the loss function is adjusted so that the improved model performs better on head objects, especially in terms of the false negative rate, to guarantee safety performance. The final experimental results show that the improved model performs well in the automatic safety helmet detection task. Because the training scenes are diverse and complex and images of different quality were intentionally used, the improved model also has good generalization capability, and there is still room for further improvement: before practical deployment, a dataset of the target construction scenes can be used to fine-tune the model further and raise the detection accuracy.


References

1. Dalal, N., Triggs, B.: Histograms of oriented gradients for human detection. In: 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), vol. 1, pp. 886–893. IEEE (2005)
2. Lowe, D.G.: Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vision 60(2), 91–110 (2004)
3. Bay, H., Tuytelaars, T., Van Gool, L.: SURF: speeded up robust features. In: European Conference on Computer Vision, pp. 404–417. Springer, Berlin (2006)
4. Suykens, J.A.K., Vandewalle, J.: Least squares support vector machine classifiers. Neural Process. Lett. 9(3), 293–300 (1999)
5. Chen, T., He, T., Benesty, M., et al.: Xgboost: extreme gradient boosting. R package version 0.4-2 1(4), 1–4 (2015)
6. Girshick, R., Donahue, J., Darrell, T., et al.: Rich feature hierarchies for accurate object detection and semantic segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 580–587 (2014)
7. Ren, S., He, K., Girshick, R., et al.: Faster R-CNN: towards real-time object detection with region proposal networks. Adv. Neural Inf. Process. Syst. 28, 91–99 (2015)
8. He, K., Gkioxari, G., Dollár, P., et al.: Mask R-CNN. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2961–2969 (2017)
9. Redmon, J., Divvala, S., Girshick, R., et al.: You only look once: unified, real-time object detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 779–788 (2016)
10. Liu, W., Anguelov, D., Erhan, D., et al.: SSD: single shot multibox detector. In: European Conference on Computer Vision, pp. 21–37. Springer, Cham (2016)
11. Fu, C.Y., Liu, W., Ranga, A., et al.: DSSD: deconvolutional single shot detector. arXiv:1701.06659 (2017)
12. Redmon, J., Farhadi, A.: YOLO9000: better, faster, stronger. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7263–7271 (2017)
13. Ioffe, S., Szegedy, C.: Batch normalization: accelerating deep network training by reducing internal covariate shift. In: International Conference on Machine Learning, pp. 448–456. PMLR (2015)
14. Redmon, J., Farhadi, A.: Yolov3: an incremental improvement. arXiv:1804.02767 (2018)
15. Lin, T.Y., Dollár, P., Girshick, R., et al.: Feature pyramid networks for object detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2117–2125 (2017)
16. Bochkovskiy, A., Wang, C.Y., Liao, H.Y.M.: Yolov4: optimal speed and accuracy of object detection. arXiv:2004.10934 (2020)
17. Wang, C.Y., Liao, H.Y.M., Wu, Y.H., et al.: CSPNet: a new backbone that can enhance learning capability of CNN. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 390–391 (2020)
18. Huang, Z., Wang, J., Fu, X., et al.: DC-SPP-YOLO: dense connection and spatial pyramid pooling based YOLO for object detection. Inf. Sci. 522, 241–258 (2020)
19. Liu, S., Qi, L., Qin, H., et al.: Path aggregation network for instance segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8759–8768 (2018)
20. Neubeck, A., Van Gool, L.: Efficient non-maximum suppression. In: 18th International Conference on Pattern Recognition (ICPR'06), vol. 3, pp. 850–855. IEEE (2006)

Chapter 21

Opinion Mining and DENFIS Approaches for Modelling Variational Consumer Preferences Based on Online Comments

Huimin Jiang, Gaicong Guo, and Farzad Sabetzadeh

Abstract Previous studies have mainly developed consumer preference models through customer surveys, ignoring the variability of consumer preferences over time and the difficulty of obtaining time series data. Recently, some studies have tried to analyze consumer preferences based on time series data mined from online customer comments, but they have not resolved the ambiguity of the emotions customers express in online comments. To address these problems, this paper proposes a dynamic evolving neural-fuzzy inference system (DENFIS) method to model variational consumer preferences from online customer comments. Using the time series data obtained by opinion mining together with the attributes of the reviewed products, the DENFIS method models consumer preferences dynamically. Case studies are used to illustrate the proposed method. In terms of the mean relative error and the variance of errors, the validation results show that the proposed DENFIS method outperforms three adaptive neuro-fuzzy inference system (ANFIS) based methods.

21.1 Introduction

Nowadays, developing new products that meet consumer preferences has become an important issue for enterprises, so modelling the relationship between product attributes and consumer preferences is crucial. With customer surveys, the most common approach, it is challenging to collect time series data that reflect the dynamic changes of consumer preferences. Unlike traditional survey data, online comments contain not only the consumer's evaluation of the product but also the evaluation date and other information, which can be


easily obtained and has been used in several studies to develop consumer preference models. However, previous research has not resolved the ambiguity of customers' emotional expression in online comments. To overcome these restrictions, this paper proposes a dynamic evolving neural-fuzzy inference system (DENFIS) method to model variational consumer preferences from online customer comments. The proposed method first collects online comments on selected products in different periods and uses sentiment analysis to calculate sentiment scores; DENFIS is then used to model the association between the sentiment scores of consumer preferences and the settings of the product attributes. The rest of the paper is organized as follows: Sect. 21.2 introduces related work; the proposed method for modelling variational consumer preferences from online customer reviews is presented in Sect. 21.3; Sect. 21.4 describes the implementation and verification of the methodology in a case study on sweeping robots; and Sect. 21.5 presents the conclusions.

21.2 Related Works

Sentiment analysis can analyze customer sentiment and reveal customers' preferences for various product functions [1]. To obtain consumer preferences from online comments, a case-based method was proposed [2] that combines text mining and Kansei engineering to extract consumer preferences from online customer reviews. However, the relationship between product attributes and consumer preferences is complicated and non-linear. To model consumer preferences, You et al. applied statistical linear regression [3], Nagamachi used partial least squares analysis [4], and Chen et al. [5] proposed artificial neural networks. Nevertheless, these methods cannot handle the fuzziness that arises from consumers' subjective judgments. To address this problem, several fuzzy-regression-based polynomial modelling methods have been proposed, such as forward-selection-based fuzzy regression [6], chaos-optimization-based fuzzy regression [7], and stepwise-based fuzzy regression [8]. However, these studies disregarded the variability of consumer preferences and used survey data to model static preferences only; time series data based on online consumer comments should therefore be employed. A number of studies have used survey data to predict future consumer preferences, for example a method based on artificial immune and neural systems [9] and a support vector machine method based on an artificial immune system [10], but limited research has addressed forecasting future consumer preferences from online comments. A fuzzy time series method based on online customer comments was developed by Jiang et al. [11]. To associate consumer preferences with product attributes, Jiang et al. [12] introduced association rules generated by a multi-objective PSO method, while Chung and Tseng [13] proposed the


If–Then rules developed from a rule induction framework. However, the generated rules are not enough to decide the optimal product attributes for designing the new product.

21.3 Proposed Methodology

To resolve the limitations mentioned above, this paper develops a consumer preference model based on time series of online comments.

21.3.1 Sentiment Analysis from Online Customer Comments

First, the sample products are determined. Second, consumer reviews of the sample products on the e-commerce platform are obtained with web crawlers, divided into different periods, and stored in separate Excel files. Third, sentiment analysis is used to obtain sentiment scores for the consumer preferences. In this study, Semantria, a well-known text analysis software tool, is used for the sentiment analysis of online comments; it provides text analysis through an API and an Excel plug-in and extracts emotions along positive, neutral, and negative dimensions to calculate the corresponding sentiment scores.

21.3.2 DENFIS Method

In the DENFIS method, the inputs are divided into clusters by the evolving clustering method (ECM), and the cluster centers form the antecedents of new fuzzy rules. A first-order linear model is then developed for each fuzzy rule by least squares estimation. x_1 ~ x_k represent the k product attributes, while y(t − 1) ~ y(t − T) denote the consumer preference scores in the past periods t − 1 ~ t − T; the predicted consumer preference score in period t is denoted y(t).

Evolving clustering method (ECM). Through the ECM, the input data can be clustered dynamically, and the number of clusters and their centers are obtained. During clustering, the input data are assigned to clusters characterized by a center and a radius. The first cluster C_1 is initialized by setting its center Cc_1 to the first data point and its radius Ru_1 to zero. Each subsequent data point Z_i, i = 2, …, n, is then presented in turn, where n is the number of data sets; the clusters are denoted C_j, j = 1, 2, …, m, where m is the number of clusters. A threshold value D_thr is introduced to determine the number of clusters and to limit the radius of new clusters. The Euclidean distance D_ij from Z_i to C_j is computed by (21.1).


D_{ij} = \|Z_i - Cc_j\|, \quad j = 1, 2, \ldots, m \quad (21.1)

where Cc_j is the center of cluster C_j and m is the number of clusters. The minimum distance is computed with Eq. (21.2):

D_{i\min} = \min_j\left(D_{ij}\right) = \|Z_i - Cc_{\min}\| \quad (21.2)

C_{\min} is the cluster with distance D_{i\min}, and its center and radius are denoted Cc_{\min} and Ru_{\min}, respectively. If D_{i\min} ≤ Ru_{\min}, Z_i belongs to cluster C_{\min}; if D_{i\min} > Ru_{\min}, an existing cluster is updated or a new cluster is created. Equations (21.3) and (21.4) determine the values V_{ij} and their minimum V_{ia}:

V_{ij} = D_{ij} + Ru_j, \quad j = 1, 2, \ldots, m \quad (21.3)

V_{ia} = \min_j\left(V_{ij}\right) = D_{ia} + Ru_a \quad (21.4)

C_a is the cluster with value V_{ia}; its center and radius are Cc_a and Ru_a, and the distance between Z_i and C_a is D_{ia}. If V_{ia} ≤ 2 × D_thr, cluster C_a is updated: its radius Ru_a is replaced by Ru_a' = V_{ia}/2, and its center Cc_a is moved to a new center Cc_a' located on the line connecting Z_i and Cc_a such that the distance from Cc_a' to Z_i equals Ru_a'. If V_{ia} > 2 × D_thr, Z_i forms a new cluster whose center is Z_i and whose radius is zero. The ECM terminates after all data sets have been processed.

Learning process of the DENFIS model. The DENFIS inputs consist of two parts: the sample product attributes x_1 ~ x_k and the past consumer preference scores y(t−1) ~ y(t−T). The ith input of the model is denoted x_i, i = 1, 2, …, q, where q = k + T is the number of inputs; x_i corresponds to x_1 ~ x_k for i = 1, 2, …, k and to y(t−1) ~ y(t−T) for i = k + 1, …, q. A set of fuzzy rules is generated from the clusters of the input:

If x_1 is MF_{11}, x_2 is MF_{12}, …, and x_q is MF_{1q}, then y is f_1(x_1, x_2, …, x_q)
If x_1 is MF_{21}, x_2 is MF_{22}, …, and x_q is MF_{2q}, then y is f_2(x_1, x_2, …, x_q)
…
If x_1 is MF_{m1}, x_2 is MF_{m2}, …, and x_q is MF_{mq}, then y is f_m(x_1, x_2, …, x_q)

Here "x_i is MF_{ji}", j = 1, 2, …, m, i = 1, 2, …, q, are the m × q fuzzy propositions forming the q antecedents of the m fuzzy rules; MF_{ji} is the jth membership function of x_i, and each x_i has m membership functions, equal to the number of clusters found by the ECM. In the consequent parts of the fuzzy rules, f_j(x_1, x_2, …, x_q), j = 1, 2, …, m, are first-order Sugeno fuzzy models, and y is the output of the fuzzy rule. In this study, (21.5) is used to define the triangular-shaped membership functions.

\mu_j(x_i) = \begin{cases} 0, & x_i < a_j \\ \dfrac{x_i - a_j}{b_j - a_j}, & a_j \le x_i \le b_j \\ \dfrac{c_j - x_i}{c_j - b_j}, & b_j \le x_i \le c_j \\ 0, & x_i > c_j \end{cases} \quad (21.5)

For the jth cluster, the center, left, and right values are b_j, a_j, and c_j, respectively, where a_j = b_j − d × D_thr and c_j = b_j + d × D_thr with 1.2 ≤ d ≤ 2. For the consequent parts of the fuzzy rules, linear least-squares estimation (LSE) is employed to develop the linear functions. The linear models can be expressed as:

y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \cdots + \beta_q x_q \quad (21.6)

The regression coefficients \beta = \left[\beta_0, \beta_1, \beta_2, \ldots, \beta_q\right]^T are obtained from the data sets \left(x_1^l, x_2^l, \ldots, x_q^l, y_l\right), l = 1, 2, \ldots, n, where x_1^l, x_2^l, \ldots, x_q^l and y_l are the inputs and actual output of the lth data pair. The coefficients \beta and the initial inverse matrix P are calculated by weighted least squares estimation using (21.7) and (21.8):

\beta = P A^T W Y \quad (21.7)

P = \left(A^T W A\right)^{-1} \quad (21.8)

where

A = \begin{pmatrix} 1 & x_1^1 & x_2^1 & \cdots & x_q^1 \\ 1 & x_1^2 & x_2^2 & \cdots & x_q^2 \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & x_1^n & x_2^n & \cdots & x_q^n \end{pmatrix}, \quad W = \begin{pmatrix} W_1 & 0 & \cdots & 0 \\ 0 & W_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & W_n \end{pmatrix}, \quad Y = \left[y_1, y_2, \ldots, y_n\right]^T.

(\cdot)^{-1} denotes the matrix inverse and [\cdot]^T the transpose. W_l is computed as follows:

W_l = 1 - Dis_l, \quad l = 1, 2, \ldots, n \quad (21.9)

Dis_l = \frac{1}{q^{1/2}} \left[\sum_{i=1}^{q} \left(x_i^l - b_{ji}\right)^2\right]^{1/2} \quad (21.10)

H. Jiang et al.

The normalized Euclidean distance between the current cluster center and the lth data set was determined by (21.10), where b ji is the ith value in the jth cluster center. When entering a new data set, the initialized P and β will be updated. At the (l + 1)th iteration, the matrix Pl+1 and coefficients βl+1 are updated respectively as follows.   T βl βl+1 = βl + Wl+1 Pl+1 αl+1 yl+1 − αl+1

Pl+1

  T Pl Wl+1 Pl αl+1 αl+1 1 = Pl − T λ λ + αl+1 Pl αl+1

(21.11)

(21.12)

T T ,αl+1 = The (l + 1)th row vector of matrix A is represented as αl+1  l+1 l+1 l+1 1x1 x2 . . . .xq , while the (l + 1)th component of Y and the forgetting factor are described as yl+1 and λ, 0 < λ ≤ 1, respectively. By calculating the weighted average of each rule output through the following formulas (21.13) and (21.14), the lth predictive output of DENFIS is obtained. The learning process of DENFIS is finished.  l l  m l  j=1 w j f j x 1 , x 2 , . . . , x q m (21.13) yl (t) = j=1 w j q   wj = μ j xil (21.14) i=1

21.4 Implementation and Validation In this section, a case study of DENFIS is introduced to support the design of sweeping robots. 10 competitive sweeping robots in the market has been identified, denoted by A ~ J. Online customer reviews of competitive sweeping robots were collected on Amazon.com using 4 fixed time period strategies and stored in an Excel file. To perform sentiment analysis, the Semantria Excel add-in was applied. Firstly, group words or phrases based on word frequency, synonyms, and relevance of consumer preferences. For example, the excavated phrases “good cleaner”, “high performance”, and “useful” were classified as a category of “clean well”, which is one of the consumer preferences of sweeping robots. In this case, there are four common consumer preferences summarized which are quality, smart operation, clean well and working sound. Using Semantria’s user category analysis, keywords and phrases relevance of consumer preferences are set as the “user category”. Based on each time period and each online comment, Sentiment analysis was repeated to obtain sentiment scores of consumer preference. In this paper, consumer preference “clean well” is used as an example to illustrate the proposed method. There are four product attributes related to the consumer

21 Opinion Mining and DENFIS Approaches for Modelling …

235

preference “clean well”, which are volume, max suction power, dust box capacity and wet mopping, and are denoted as x 1 , x 2 , x 3 and x 4 , respectively. The units of x 1 , x 2 and x 3 are cubic inch, pascal (Pa) and liter (L), respectively. For x 4 , 1 denotes that the sweeping robot has the function of wet mopping while 0 means it cannot provide this function. The attributes of ten competitive sweeping robots were collected and are shown in Table 21.1. To improve training performance of neural systems, the values of x 1 , x 2 and x 3 in Table 21.1 are normalized. DENFIS model was developed in this case study to predict the consumer preference scores, y(t), of “clean well” for Period 4. Together with four product attributes, x 1 ~ x 4 , consumer preference scores of three past periods, periods 1–3, which are expressed respectively as y(t-3), y(t-2), and y(t-1) are used for developing a DENFIS model to forecast y(t). The sentiment scores of ten products under 4 periods are shown in Table 21.2. In this section, five validation tests were conducted to evaluate the modelling effectiveness. The validation data sets for tests 1 ~ 5 are shown in the 2nd column of Table 21.3. The mean relative error (MRE) and the variance of errors (VoE) were adopted as the prediction errors to evaluate the effectiveness of the proposed method in this paper. 1 n |yl (t) − yl (t)| M RE = l=1 n yl (t) 

1 n |yl (t) − yl (t)| − M R E) ( i=1 n−1 yl (t) 

V oE =

(21.15) 2

(21.16)



where yl (t) and yl (t) are the lth predicted and actual score of the consumer preference; and n is the number of data sets. Table 21.1 The attributes of ten competitive sweeping robots Competitive sweeping robots

Attributes of sweeping robots Volume x1

Max suction power (Pa) x2

Dust box capacity (L) x3

Wet mopping x4

A

438.178

1400

0.5

1

B

490.100

1800

0.6

0

C

417.720

850

0.3

1

D

515.573

2000

0.75

0

E

417.720

1000

0.3

0

F

442.368

1400

0.3

0

G

642.105

1800

0.5

0

H

645.979

1800

0.4

0

I

643.500

2000

0.7

0

J

466.215

1500

0.6

0

236

H. Jiang et al.

Table 21.2 The sentiment scores of ten products under 4 periods Product

Clean well Period 1

Period 2

Period 3

Period 4

A

0.36

0.43

0.47

0.40

B

0.37

0.29

0.35

0.20

C

0.28

0.32

0.44

0.24

D

0.32

0.33

0.38

0.24

E

0.33

0.33

0.33

0.30

F

0.43

0.40

0.41

0.39

G

0.33

0.31

0.27

0.30

H

0.33

0.32

0.37

0.31

I

0.30

0.31

0.30

0.30

J

0.32

0.32

0.37

0.28

Table 21.3 The validation results for the customer preference scores of “clean well” Validation Test

Validation data sets

FCM-ANFIS

SC-ANFIS

K-means-ANFIS

DENFIS (Proposed approach)

1

A and B

0.3843

0.3855

0.3854

0.3648

0.0443

0.0434

0.0442

0.0150

0.4429

0.4456

0.4430

0.4070

0.0151

0.0152

0.0151

0.0142

F and G

0.2640

0.2655

0.2626

0.2494

0.0350

0.0357

0.0360

0.0316

E and H

0.2233

0.2263

0.2242

0.1223

0.0154

0.0148

0.0151

0.0002

0.0388

0.0407

0.038

0.0313

0.0015

0.0017

0.0018

0.0005

MRE VoE

2

MRE

C and D

VoE 3

MRE

4

MRE

VoE VoE 5

MRE VoE

I and J

To compare the effectiveness of the DENFIS approach, various adaptive neurofuzzy inference system (ANFIS) methods, which are fuzzy c-means-based ANFIS (FCM-ANFIS), subtractive cluster-based ANFIS (SC-ANFIS), and K-means-based ANFIS, were carried out to compute the values of MRE and VoE in each verification, using the same data set. For those methods, the FCM, SC and K-means methods are combined into ANFIS to decide the membership function of ANFIS respectively. For the DENFIS, parameter λ were set as 0.8, 0.88, 0.38, 0.44 and 1 in the validation tests 1–5, respectively. Based on the typical setting applied in previous studies, parameter d was set as 1.2 in all the validation tests. For all the validation tests, the number of clusters was set as 3 for FCM-ANFIS and K-means-based ANFIS, respectively. Using the MATLAB software, four approaches for modelling “clean well” were

21 Opinion Mining and DENFIS Approaches for Modelling …

237

performed. The prediction errors of the five validation tests based on the four methods were shown in Table 21.3, where the MRE and VoE based on the proposed approach are the smallest of four approaches. Therefore, the proposed DENFIS approach can develop more accurate models for variational consumer preferences based on the online comments.

21.5 Conclusions To resolve the limitations, a DENFIS method for modelling variational consumer preferences using the online consumer comments has been proposed in this study. A case study of sweeping robot products was carried out to illustrate the proposed method. In order to evaluate the effectiveness of the DENFIS method, the prediction results obtained based on other three method were compared, which are FCMANFIS, SC-ANFIS, and K-means-ANFIS. The comparison results show that the DENFIS method is superior to the other three methods in terms of the mean relative error and the variance of errors. Future research work on adjusting the parameters of DENFIS adaptively based on the intelligent optimization approach will be conducted. Acknowledgements This work is supported by the National Natural Science Foundation of China [grant number 71901149].


Chapter 22

Pysurveillance: A Novel Tool for Supporting Researchers in the Systematic Literature Review Process

Julen Cestero, David Velásquez, Elizabeth Suescún, Mikel Maiza, and Marco Quartulli

Abstract Systematic literature studies aim to identify, evaluate, and combine evidence from primary studies using a rigorous method. These analyses can be time-consuming, so it is necessary to use automatic tools that optimise this process and help to obtain as much insight as possible about a research subject. We present a tool named Pysurveillance, which supports researchers in the initial stages of the literature review process. This work describes (i) the process of designing and constructing Pysurveillance and (ii) a comparative study with other similar tools. The results show that Pysurveillance is a tool that can be considered to support researchers in their literature review processes. It is an open-source software tool based on Python that provides researchers with quantitative and qualitative bibliometric results, allowing them to analyse a specific topic at different levels through visual analytics.

22.1 Introduction

Bibliometric studies allow access to the body of publications from anywhere and at any time in order to evaluate knowledge communication processes in a specific field [1]. To evaluate scientific production there are specific proposals, among them technological surveillance, systematic mapping, and systematic literature reviews, which are designed to give an overview of a research area through objective classification and analysis and by answering questions that lead to the consolidation of contributions within the categories of that classification. This implies designing a literature search strategy to determine what issues have been covered in the literature, what the research trends are, whether there are gaps, and, specifically, where the work was published [2]. Although these studies share some points regarding the search, such as selecting the studies, synthesising the evidence, and evaluating the evidence’s strength, they differ in terms of objectives, purpose, strategies, and approach to analysing the data. It is a fact, however, that conducting these studies requires much effort and additional work due to the volume of information that must be processed. In this context, this work presents a tool named Pysurveillance that supports the researcher in the execution of the systematic literature review. Pysurveillance retrieves search results and presents different analysis levels in terms of their complexity. It prioritises visual communication to boost the interpretability of the results, plotting the different analyses performed by the back-end within an easy-to-use front-end and using data requested directly from Scopus [3], one of the most widely used online bibliographic databases. All the figures shown in this paper represent the tool’s current state, which is being used and tested by several researchers.

22.2 State of the Art

Bibliometric studies can be divided into two main areas: (i) performance analysis and (ii) science mapping analysis. Performance analysis evaluates groups of scientific actors and the impact of their activity based on bibliographic data. It allows tracking, predicting, and building predictive models of the evolution of a research field and its sub-fields [4]. The objective of science mapping is to display the structural and dynamic aspects of scientific research; thus, it aims to represent the intellectual connections in a knowledge area and their evolution [5]. Bibliometric analysis requires a data source for a particular research area. Several bibliographic databases exist (e.g., SpringerLink, Embase, PubMed, among others); however, some do not provide the information required to perform bibliometric analysis. The databases commonly used in bibliometric analysis are Web of Science (WoS), Scopus, Google Scholar (GS), and Microsoft Academic (MA). Table 22.1 summarises the characteristics of the leading bibliographic databases used in bibliometric analysis.

Table 22.1 Features of commonly used bibliometrics databases [6]

Characteristics   WoS                         Scopus     GS     MA
Subscription      ✓                           ✓          Free   Free
Data download     ✓                           ✓          ×      Using API
Records limit     100,000                     2000       n/a    n/a
API               ✓                           ✓          ×      ✓
Formats           Plain text, Tab delimited   RIS, CSV   n/a    n/a


22.2.1 Performance Analysis Tools

As previously mentioned, data must first be gathered from a database source to carry out a performance analysis. Through this analysis, it is possible to obtain information on scientific production and impact in a specific research area. The most common indicators derived from the source data are the number of citations, the number of publications, and the research area. Among the software tools developed for performance analysis is Publish or Perish [7], which can generate the h-index from GS, WoS, Scopus, Crossref, MA, or an external data source and carries out a performance analysis by computing the h-index, the age-weighted citation rate, and author indicators. CRExplorer, developed in Java by Marx et al. [8], allows analysis of the WoS, Scopus, or Crossref databases and uses Reference Publication Year Spectroscopy (RPYS) to identify the most significant influences on a specific research topic and their life cycle. Another tool worth mentioning is ScientoPyUI [9], a Python-based open-source tool that allows merging information from the Scopus and WoS databases. Additionally, it allows the user to visualise the time-series evolution of a particular research topic (i.e., the number of documents per publication year), among other useful classifications.

22.2.2 Science Mapping Tools

Science mapping tools need to follow a process [5] that involves the following steps: data retrieval, data preprocessing, network extraction, network normalisation, mapping, analysis, and visualisation. The researcher must then interpret the final results in the visualisation step. This process builds a science map showing the intellectual connections of a research area. Several tools have been developed by the research community for this task. One of them is Bibliometrix [10], an open-source R package for bibliometric and co-citation analysis. It can be extended with Biblioshiny, which provides a web interface for Bibliometrix. This tool can analyse the WoS, Scopus, PubMed, and Dimensions databases to generate plots in terms of sources, authors, and documents. Furthermore, it features an analysis of three structures of knowledge (K-structures): (i) conceptual structure, (ii) intellectual structure, and (iii) social structure. Another tool is the Science Mapping Analysis Software Tool (SciMAT) [11]. SciMAT is an open-source tool developed in Java and based on a longitudinal science mapping approach. It carries out bibliometric analysis (bibliographic coupling, co-author, co-citation, and co-word analysis) and builds science maps enriched with citation-based bibliometric measures such as the h-index, g-index, hg-index, and q2-index. Other software tools created for science mapping are Bibexcel [12], BiblioMaps [13], CiteSpace [14], CitNetExplorer [15], Citan [16], Metaknowledge [17], and VOSviewer [18].

There are therefore multiple systematic literature review tools for bibliometrics. Some of them focus on performance analysis and others on science mapping. However, some of these tools are limited to libraries and do not offer user interaction or full citation-based analysis with respect to affiliation, country, author, and article.

22.3 Design and Development of the Solution

Pysurveillance is an open-source web tool based on Python that can perform bibliometric analysis at different levels. It can analyse a research topic in terms of quality and cross-reference the results with quartile indicators based on the SCImago Journal Rank (SJR), which is computed from Scopus data. Using the results of the query ‘(Reinforcement Learning) AND (inverse OR control OR stochastic)’ in Scopus as an example, the tool shows a series of graphs that sort the results in a way that is more useful to the user, using analyses of different complexity grades. These analyses range from showing the most prolific authors, through the most-cited authors or papers, to more complex analyses such as the most-cited authors with the citations of the different authors of each institution counted only once, with their papers sorted by the quartile of the journal at publication time. Pysurveillance is divided into two elements: the front-end and the back-end.

22.3.1 Front-End

The front-end of the Pysurveillance system provides functionalities aligned with the objectives of the project: clarity of information, usefulness through fast and concise information delivered via visually communicative and interactive graphs, and swiftness in the process of searching and sorting bibliographic references. An interactive dashboard was created for these purposes. The dashboard is fully responsive and reacts to the direct interaction of the user. The home screen of the tool is shown in Fig. 22.1. Initially, the dashboard appears empty except for the left toolbar, which is the input of the system. The user can import a valid CSV file exported from Scopus, or search directly within the tool using the query text area below, without the need to manually export the CSV from Scopus. Using the syntax of Scopus queries [19], the objective of the query-definition step is to fetch around 1000 results in order to obtain a significant representation of the literature [20]. If the number is acceptable, after the user’s confirmation, the tool shows a collection of charts with helpful information. A bar in the left toolbar also allows the user to restrict the information shown in the charts to a custom range of years. Figure 22.2 shows an example chart resulting from the example query.

Fig. 22.1 The home screen of Pysurveillance

Fig. 22.2 Example chart showing the top 10 cited papers of the sample query

All the graphs are organised by their complexity grade. The different complexity grades are determined by the number of variables used by the sorting algorithm. First-grade analysis consists of counting the number of results of the sorting term, for example, the number of publications per year. Second-grade analysis plots the data sorted by the number of citations per publication. Third-grade analysis is currently a proof of concept that more complex analyses can be made with three terms in the sorting process; it aims to provide even more accurate and valuable insight into the reviewed literature while avoiding biased data. For example, it can sort the most-cited authors while counting citations from the same source only once; if a source tends to cite a specific author more than others, this kind of graph can filter out that bias.
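As an illustration of the first- and second-grade analyses described above, the following sketch shows how they could be computed with pandas from a Scopus CSV export. It is not taken from the Pysurveillance code base; the file name is a placeholder, and the column names ‘Year’, ‘Cited by’, ‘Title’ and ‘Authors’ follow the usual Scopus export format but should be checked against the actual file.

```python
import pandas as pd

df = pd.read_csv("scopus_export.csv")  # CSV exported from Scopus (hypothetical path)

# First-grade analysis: count results per sorting term, e.g. publications per year
pubs_per_year = df.groupby("Year").size().sort_index()

# Second-grade analysis: sort publications by number of citations
top_cited = (df[["Title", "Authors", "Year", "Cited by"]]
             .sort_values("Cited by", ascending=False)
             .head(10))

print(pubs_per_year)
print(top_cited)
```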


In addition to the analyses mentioned above, a higher-grade analysis was implemented. It consists of integrating the quartile ranking of a journal, provided by Scimago [21], as a criterion (Scimago builds on Scopus journal data). This higher-grade analysis is currently a work-in-progress feature, but for now it provides insight into the ranking of a paper’s journal at the time it was published. Finally, the tool also includes a word cloud of the keywords of the query. The word cloud counts the frequency of the keywords and depicts the most frequent ones more prominently, usually larger or wider. This is useful for gaining insight into the terms related to the query and for measuring its accuracy. Many tools implement first-grade analysis; however, fewer tools offer analyses deeper than the second grade. Therefore, Pysurveillance can provide more in-depth insight for literature reviews.
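A keyword cloud of the kind described here can be produced in a few lines of Python. The sketch below is only illustrative (it is not the Pysurveillance implementation); it uses the third-party wordcloud and matplotlib packages, and the sample keyword list is hypothetical.

```python
from collections import Counter
from wordcloud import WordCloud
import matplotlib.pyplot as plt

# Hypothetical author keywords gathered from the query results
keywords = ["reinforcement learning", "control", "stochastic", "control",
            "reinforcement learning", "inverse problems", "reinforcement learning"]

freq = Counter(keywords)                      # frequency of each keyword
cloud = WordCloud(width=800, height=400, background_color="white")
cloud = cloud.generate_from_frequencies(freq)

plt.imshow(cloud, interpolation="bilinear")   # most frequent words drawn largest
plt.axis("off")
plt.show()
```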

22.3.2 Back-End

The data used to feed the front-end graphs is created by a pair of scripts that retrieve the data requested by the user and perform a set of operations to arrange it. The scripts that compose the back-end serve two main purposes: retrieving the data from the Internet and processing the obtained data. The code of both the front-end and the back-end is accessible through the public repository of Pysurveillance on GitHub [22].

22.3.3 General Architecture of the System

This section describes the connection between all the system elements. As depicted in Fig. 22.3, the user interacts with the tool through the system’s front-end. The front-end, shown in Fig. 22.1, allows the user to either send a search query directly to the Scopus API or import a CSV file with the result of this query. When the user states the Scopus query, the system returns the number of results that this query would return. If the user is satisfied with this number, the process continues: the front-end calls the Scopus scraper script, which queries the Scopus API. The API returns several packages, each containing a fraction of the results. After all the packages are received, the scraper script merges them into a pandas DataFrame and passes it to the data-processing script. This script selects, filters, and orders the data needed to create the figures described previously. Finally, the front-end builds these figures from the back-end’s filtered data and shows them to the user. With the data displayed, the user can further iterate on and polish the search query, thus improving their research quality faster.

Fig. 22.3 Architecture of the system
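A sketch of the back-end flow just described is given below. It is not the actual Pysurveillance scraper: it assumes access to the Elsevier Scopus Search API (which requires a valid API key, shown here as a placeholder) and simply concatenates the paged results into one pandas DataFrame; the query string and page sizes are illustrative.

```python
import requests
import pandas as pd

API_URL = "https://api.elsevier.com/content/search/scopus"  # Scopus Search API
API_KEY = "YOUR-API-KEY"  # placeholder; a valid Elsevier key is required

def fetch_scopus(query, total=200, page_size=25):
    """Request the query page by page and merge the entries into one DataFrame."""
    pages = []
    for start in range(0, total, page_size):
        resp = requests.get(
            API_URL,
            headers={"X-ELS-APIKey": API_KEY, "Accept": "application/json"},
            params={"query": query, "start": start, "count": page_size},
        )
        resp.raise_for_status()
        entries = resp.json()["search-results"].get("entry", [])
        if not entries:           # no more result packages
            break
        pages.append(pd.DataFrame(entries))
    return pd.concat(pages, ignore_index=True)

df = fetch_scopus("TITLE-ABS-KEY(reinforcement AND learning)")
print(len(df), "records retrieved")
```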

22.4 Discussion

Pysurveillance aims to support bibliometric studies, specifically systematic literature studies. Other tools, however, share specific characteristics with it in the way they treat the analysed literature. It is therefore important to examine the benefits of the proposed tool compared with other state-of-the-art (SoA) tools, as well as its areas for improvement. For this purpose, a comparison was defined over the following characteristics: deployment platform (desktop or web), programming language, latest update, whether they provide online request capability, whether they preprocess the data, first-, second-, third-, and fourth-grade analyses, user interaction capabilities, and, finally, other notes not covered in the list. The tools used as references for this discussion are ScientoPy [9], Publish or Perish [7], SciMat [11], Bibliometrix [10], and Bibliotools [23]. Pysurveillance was used as the reference point against which all these tools were compared. The comparison can be found in the table of Fig. 22.4.

The conclusion obtained after analysing the features of the mentioned tools is that Pysurveillance integrates different features seen in the reviewed SoA tools into a unified platform. One of its distinctive features is the visual communication system provided by the front-end, which summarises the most relevant information of the literature research process. Specifically, the distinguishing feature of the front-end compared with other tools is the ease of generating research results in a concise and fast manner through a reduced number of steps. Another feature is the capacity to retrieve the literature research information from research sources (in this case Scopus) directly through the tool, which not all the discussed tools provide. On the other hand, the reviewed SoA tools offer specific capabilities that Pysurveillance does not yet cover but that are planned for future versions, such as the possibility of creating custom visualisations (a feature Bibliometrix already has) or acquiring literature research information from other indexers (as Publish or Perish already does).


Fig. 22.4 Comparison table between the state-of-the-art tools and Pysurveillance

22.5 Conclusions

Systematic literature studies are an indispensable activity when tackling a research project. The depth of the works to be explored and the research questions to be answered mean that the work the researcher must perform is limited by the time and resources available for the task. Moreover, the amount of literature found is sometimes so extensive that the researcher cannot manually perform many of the analyses that could be interesting for the research. With this purpose in mind, this work presents Pysurveillance, an evolving open-source platform that performs a detailed analysis of a specific literature search on a research topic through insightful visual communication aimed at providing the critical information of the topic analysis to the researcher, boosting the performance of the literature research analysis. In addition, it takes the results of a search defined by the user through literature indexers such as Scopus, represents them in graphs, and orders them in a way useful to the researcher using analyses of different levels of complexity, also referred to as first-, second-, and third-level analyses. These analyses range from showing the most relevant results, through the most-cited authors or articles, to other complex analyses involving authors, citations, and affiliations. Finally, the platform classifies works by the journal’s quartile at the time of publication, among other functionalities. A critical analysis of the proposed platform was also carried out, comparing its main differences with other proposals available to support researchers in their systematic literature studies.


This article showed that Pysurveillance can be used to carry out systematic literature studies in research work. Future work is nevertheless proposed to improve the user interface and the software quality metrics that support the platform’s results, as well as to add analyses of higher levels of complexity. Finally, a comparative analysis with other tools in the literature and an evaluation of software-level quality metrics, such as usability, ease of use, reliability, performance, and support, will be performed for future versions of the tool.

References

1. Solano López, E., Castellanos Quintero, S., López Rodríguez del Rey, M., Hernández Fernández, J.: La bibliometría: una herramienta eficaz para evaluar la actividad científica postgraduada. MediSur 7(4), 59–62 (2009)
2. Kitchenham, B., Charters, S.: Guidelines for performing systematic literature reviews in software engineering (2007)
3. Burnham, J.F.: Scopus database: a review. Biomed. Dig. Lib. 3(1), 1–8 (2006)
4. Upham, S., Small, H.: Emerging research fronts in science and technology: patterns of new knowledge development. Scientometrics 83, 15–38 (2010)
5. Börner, K., Chen, C., Boyack, K.W.: Visualizing knowledge domains. Ann. Rev. Inf. Sci. Technol. 37(1), 179–255 (2003)
6. Moral-Muñoz, J.A., Herrera-Viedma, E., Santisteban-Espejo, A., Cobo, M.J.: Software tools for conducting bibliometric analysis in science: an up-to-date review. Profesional de la Información 29(1), 4 (2020). https://revista.profesionaldelainformacion.com/index.php/EPI/article/view/epi.2020.ene.03
7. Harzing, A.W., Wal, R.: Google Scholar as a new source for citation analysis. Ethics Sci. Environ. Politics 8, 61–73 (2008)
8. Marx, W., Bornmann, L., Barth, A., Leydesdorff, L.: Detecting the historical roots of research fields by reference publication year spectroscopy (RPYS). J. Am. Soc. Inf. Sci. 65, 751–764 (2014)
9. Ruiz-Rosero, J., Ramirez-Gonzalez, G., Viveros-Delgado, J.: Software survey: ScientoPy, a scientometric tool for topics trend analysis in scientific publications. Scientometrics 121(2), 1165–1188 (2019). https://doi.org/10.1007/s11192-019-03213-w
10. Dervis, H.: Bibliometric analysis using Bibliometrix, an R package. J. Scientometric Res. 8, 156–160 (2019)
11. Cobo, M., López-Herrera, A., Herrera-Viedma, E., Herrera, F.: SciMAT: a new science mapping analysis software tool. J. Am. Soc. Inf. Sci. Technol. 63, 1609–1630 (2012)
12. Persson, O., Danell, R., Schneider, J.: How to use Bibexcel for various types of bibliometric analysis. In: Celebrating Scholarly Communication Studies: A Festschrift for Olle Persson at His 60th Birthday, vol. 5, pp. 9–24 (2009)
13. Grauwin, S., Sperano, I.: BiblioMaps—a software to create web-based interactive maps of science: the case of UX map. Proc. Assoc. Inf. Sci. Technol. 55, 815–816 (2018)
14. Chen, C.: CiteSpace II: detecting and visualizing emerging trends and transient patterns in scientific literature. J. Am. Soc. Inform. Sci. Technol. 57, 359–377 (2006)
15. van Eck, N.J., Waltman, L.: CitNetExplorer: a new software tool for analyzing and visualizing citation networks. J. Inf. 8(4), 802–823 (2014)
16. Gagolewski, M.: Bibliometric impact assessment with R and the CITAN package. J. Inf. 5(4), 678–692 (2011). https://www.sciencedirect.com/science/article/pii/S1751157711000708
17. McLevey, J., McIlroy-Young, R.: Introducing metaknowledge: software for computational research in information science, network analysis, and science of science. J. Inf. 11(1), 176–197 (2017). https://www.sciencedirect.com/science/article/pii/S1751157716302000
18. van Eck, N.J., Waltman, L.: Software survey: VOSviewer, a computer program for bibliometric mapping. Scientometrics 84(2), 523–538 (2010)
19. Elsevier B.V.: Scopus search guide. http://schema.elsevier.com/dtds/document/bkapi/search/SCOPUSSearchTips.htm (2020). [Online; accessed 4 March 2021]
20. Rogers, G., Szomszor, M., Adams, J.: Sample size in bibliometric analysis. Scientometrics 125(1), 777–794 (2020). https://doi.org/10.1007/s11192-020-03647-7
21. SCImago Lab: SCImago journal and country rank. https://www.scimagojr.com/ (2020). [Online; accessed 4 March 2021]
22. Cestero, J.: Pysurveillance. https://github.com/JulenCestero/pysurveillance (2021). [Online; accessed 10 July 2021]
23. Grauwin, S., Jensen, P.: Mapping scientific institutions. Scientometrics 89, 943–954 (2011). https://doi.org/10.1007/s11192-011-0482-y

Chapter 23

Research on Intelligent Control System of Corn Ear Vertical Drying Bin

Qiang Fei, BiaoJin, Lijing Yan, Lian Yao Tang, and Rong Chen

Abstract Based on an analysis of the research status of advanced corn ear drying equipment at home and abroad, this paper designs a vertical drying system with high drying efficiency and then introduces the structure, principle, operation method, and practicability of the system. Virtual instrument technology is applied to the corn ear drying system, and a LabVIEW-based dual control system for corn ear vertical drying is designed, with application software used to monitor it. The overall scheme is formulated according to the production process, and the hardware is designed. Data such as temperature and pressure that affect the drying effect of the corn ear are collected and monitored so as to achieve the goal of drying process optimization.

23.1 Introduction

The temperature sensors in this system are high-precision industrial armoured thermal resistances. They convey the temperatures collected at points within the storehouse to the acquisition modules and then, accurately and quickly, to the industrial computer, which communicates with six analogue acquisition modules and displays the values in the main interface at the same time. The KingView configuration software is used as the upper monitoring software to provide centralised scheduling and scientific management for the seed (food) storage industry [1]. The corn ear drying process is a multi-input, multi-output nonlinear system [2]. An artificial neural network is a nonlinear dynamic system that simulates human thinking and is characterised by distributed storage and parallel, collaborative processing of information. The research content of neural networks is quite extensive; the feed-forward BP network used by the research group is a simple and widely used artificial neural network suitable for nonlinear pattern recognition and classification prediction. A neural network was used to model and optimize the drying device in the corn ear drying process, and the optimized objectives were verified by a genetic algorithm. By adopting an industrial computer, analogue acquisition modules and three-wire high-precision thermal resistance temperature sensors, the system meets the drying requirements of high stability, small line loss and strong anti-interference ability.

23.2 Parameter Optimization of Drying Device

A neural network mathematical model of the drying process parameters is established by taking the moisture mass, total mass, dry matter mass and drying operation time of the material as independent variables, and drying effectiveness, drying uniformity and moisture ratio as response values. The model is optimized with the MATLAB optimization toolbox to obtain the best combination of the factors of the drying device, and the optimization results are verified by a genetic algorithm. The results show that, after optimizing the model with the MATLAB optimization toolbox, the drying effectiveness is improved by 0.97–1.70%. The best combination of factors in the drying process fits the experimental optimization results closely and can accurately predict the working parameters and performance of the corn ear drying device. The drying rates of corn kernels and ears were measured by a thin-layer drying test, but the drying efficiency was close to zero. In an actual drying system, a sufficiently deep bed or layer can absorb a reasonable amount of energy from the hot air: the layer closest to the air inlet dries first, has the greatest drying rate and plays the role of thin-layer drying. Hukill defined the continuous layer as a depth factor, developed to calculate the average moisture content per dimensionless unit of time from the initial moisture content and the condition of the drying air. The analogue models and figures allow the effects of the configuration, capacity and efficiency of different dryers on the changes in corn properties and drying parameters to be evaluated quickly through the drying model. For the single-objective optimization results, the performance indexes of the vertical dryer (drying effectiveness, drying uniformity and moisture ratio) meet the technical conditions for agricultural product drying equipment (local standards of Gansu Province). For the drying device, the moisture ratio of the corn ears should first be minimized during drying; on this basis, the drying damage rate of the ear and the breakage of the ear shaft core should be minimized.
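As a rough illustration of the kind of neural-network mapping described in this section (four drying inputs to three response values), the sketch below trains a small feed-forward (BP-type) network with scikit-learn. The training data are randomly generated placeholders, not measurements from the drying device, and the network size and units are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Placeholder samples: [moisture mass, total mass, dry matter mass, drying time]
X = rng.uniform([5, 50, 40, 10], [20, 120, 100, 60], size=(200, 4))
# Placeholder responses: [drying effectiveness, drying uniformity, moisture ratio]
y = np.column_stack([
    0.5 + 0.004 * X[:, 3] + rng.normal(0, 0.02, 200),
    0.8 - 0.1 * X[:, 0] / X[:, 1] + rng.normal(0, 0.02, 200),
    X[:, 0] / X[:, 1] + rng.normal(0, 0.02, 200),
])

# Small feed-forward network mapping the 4 inputs to the 3 responses
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0),
)
model.fit(X, y)
print(model.predict([[12.0, 80.0, 68.0, 30.0]]))  # predicted responses for one condition
```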

23.3 Overall System Design

The overall block diagram of the system is shown in Fig. 23.1. Through the data acquisition modules, the system directly collects the temperature and pressure sensor measurements and transmits them to the computer over an RS485 bus. LabVIEW is used for data analysis and processing and for monitoring the production situation, and the analysis results are sent to a PLC, which starts, stops and reverses the fan. The whole system is divided into three levels: the management layer, the monitoring layer and the field layer [2].

Fig. 23.1 The overall block diagram of the system

23.4 Design of the Control System

The system is composed of the main control, signal channel, file operation, digital filtering, spectrum analysis, statistical analysis and system monitoring modules. The main interface of the ear drying monitoring is mainly used for the real-time display of the temperature and pressure of each drying bin and of the air-pressure difference in the ducting. The real-time curve drawing function of LabVIEW 2016 dynamically displays the status of each variable, including the temperature and air pressure of the drying bin; the main interface shows the real-time temperature and pressure curves of the drying bin and presents data changes directly as waveforms and histograms. When the monitored value exceeds the temperature threshold of 4.3, the alarm system is triggered: the corresponding warning lights start flashing and a warning window pops up, requiring the staff to adjust the fan and the drying bin air door so as to keep the drying process safe. The drying monitoring interface is shown in Fig. 23.2.

The hot air temperature has the greatest impact on ear drying and is controlled below 43 °C [3–5], but too low a temperature reduces drying efficiency. A supervisory control system that ensures the quality of ear drying is therefore required to monitor the temperature in the drying storehouse and to guide operators. The monitoring system for the corn ear vertical drying storehouse is developed with ADAM-4000 series data acquisition modules. The system, which consists of the master control, signal channel, file operation, digital filtering, spectrum analysis, statistical analysis and system monitoring modules, realises the collection, processing and monitoring of temperature and pressure data. It also alleviates problems such as low drying efficiency and poor safety performance. A virtual simulation experiment using SolidWorks Flow Simulation showed that the monitoring system could effectively improve the safety of drying. The simulation results indicate that the ear contact temperature, which varies between 256.41 K and 484.71 K, can meet the safety requirements of corn ear drying [5]. In application, the system runs stably and reliably and has a fine man–machine interface that satisfies the requirements of real-time monitoring and system reliability.

Fig. 23.2 Drying monitoring interface figure

23.5 Monitoring System Module

In the seed corn ear drying process, the hot air temperature greatly affects the drying result, and the drying hot air temperature is controlled at 43 °C, so the temperature monitoring threshold is set to 4.3. When the monitored seed contact temperature in the drying bin, divided by 10, is greater than or equal to this threshold, the system raises a real-time alarm with a pop-up window and requires the staff to adjust the fan and the drying bin air door in order to keep the drying process safe. The drying ear monitoring program diagram is shown in Fig. 23.3.

A new neural network mathematical model of the vertical drying storehouse is developed with the moisture mass, total mass and dry matter mass of the material and the drying time as the independent variables, and drying effectiveness, drying uniformity and moisture ratio as the response values. The simulation results show that the ear contact temperature varies between 256.41 K and 484.71 K and can meet the safety requirements of seed corn ear drying, which proves that the monitoring system can effectively improve the safety of drying. It can be seen from the monitoring picture that after the alarm lamp lights up, the drying temperature curve decreases obviously and the pressure difference curve in the warehouse also decreases, which proves that the monitoring system has strong practicability and high efficiency.

Fig. 23.3 The drying ear monitoring program diagram
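The alarm rule described above (hot air limited to 43 °C, monitored contact temperature divided by 10 and compared with a threshold of 4.3) reduces to a few lines of logic. The sketch below is only an illustration of that rule in Python; the real system implements it inside LabVIEW, and the read-out values and alarm call are placeholders.

```python
TEMP_THRESHOLD = 4.3  # corresponds to the 43 °C hot-air limit (monitored value is °C / 10)

def trigger_alarm(message):
    # Placeholder: the real system flashes a warning lamp, pops up a window
    # and asks staff to adjust the fan and the air door.
    print("ALARM:", message)

def check_drying_bin(contact_temp_c, pressure_diff_pa):
    """Return True and raise an alarm if the monitored temperature is unsafe."""
    if contact_temp_c / 10.0 >= TEMP_THRESHOLD:
        trigger_alarm(f"Over-temperature: {contact_temp_c:.1f} °C "
                      f"(bin pressure difference {pressure_diff_pa:.1f} Pa)")
        return True
    return False

check_drying_bin(contact_temp_c=44.2, pressure_diff_pa=120.0)
```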

23.6 Conclusions

In this paper, a corn ear drying system is designed together with an upper-computer monitoring interface, which realizes real-time monitoring and control of ear drying. Experiments show that the system is easy to operate, can reduce human errors, ensures the safety of seed drying and has high practical value. Through the connection between KingView and the Access database, real-time querying of field data and querying of historical data are realized; at the same time, the connection with the Access database makes up for the deficiency of LabVIEW in real-time database processing. In the future, automatic air-door control technology can be combined with this system to further avoid human operation errors in the seed drying process and reduce the labour intensity of the staff.

References

1. Strzelczak, A.: The application of artificial neural networks (ANN) for the denaturation of meat proteins—the kinetic analysis method. Acta Scientiarum Polonorum Technologia Alimentaria 18(1), 87–96 (2019)
2. Fei, Q.: Parameter optimization of vertical drying system for corn ears based on BP neural network. Recent Trends Decis. Sci. Manag. 1142(1), 177–184 (2020)
3. Li, S., Chen, S.S., Han, F., Xv, Y., Sun, H.M., Ma, Z.S., Chen, J.Y., Wu, W.F.: Development and optimization of cold plasma pretreatment for drying on corn kernels. J. Food Sci. 84(8), 2181–2189 (2019)
4. Atungulu, G.G., Gbenga Olatunde, G., Wilson, S.: Engineering methods to reduce aflatoxin contamination of corn in on-farm bin drying and storage systems. Drying Technol. 36(8), 932–951 (2018)
5. Lu, Z.H., et al.: Study on artificial intelligence optimization based on genetic algorithm. Guangxi Light Indus. 36(4), 77–80 (2012)

Chapter 24

The Design of Tracking Car Based on Single Chip Computer

Hanhong Tan, Wenze Lan, and Zhimin Huang

Abstract This design takes the STM32F103RBT6 microcontroller as the core controller and is equipped with a TR5000 tracking sensor, a TCRT5000 infrared reflective sensor, and other devices to achieve automatic tracking and obstacle avoidance. The overall design adopts a double-layer, three-wheel architecture, which makes the functions more stable to implement; the design is cost-effective, the system is stable, and it has strong anti-interference ability. Because of its abundant I/O ports, it is convenient for subsequent upgrades and transformations, providing further possibilities for future research and development in multiple fields.

24.1 Introduction

Among today’s cutting-edge technology fields, wheeled robot technology led by smart cars is a rapidly developing area that spans the widest range of technical fields and knowledge. This design selects an STM32 single-chip microcomputer as the controller, an L298N motor driver chip as the power centre, and TCRT5000 infrared reflection sensors as auxiliaries to realize a car that can perform automatic tracking and obstacle avoidance [1].

24.2 System Design

The car’s system is based on a modular design and can be divided into five modules: the power module, the tracking module, the obstacle avoidance module, the main control module, and the motor drive module [2]. The power module supplies the whole system so that each module can operate normally. The tracking module collects data on the road conditions ahead and sends it back to the main control module for processing, which then outputs the corresponding PWM signals through the motor drive module to control the two wheel motors, governing the starting and direction of travel of the trolley [3]. If there is an obstacle in front, the reflected signal is processed by the infrared obstacle avoidance module and a buzzer sounds an alarm. The hardware scheme of the system is shown in Fig. 24.1.

According to the above hardware scheme, the program design of the car also adopts modular design ideas and can be divided into three major modules: the motor drive subprogram module, the black line tracking subprogram module, and the obstacle avoidance subprogram module [4]. We use the Keil uVision5 microcontroller development software to program each module, establish the corresponding library functions, call them in the main program in logical sequence, and process the data of each module through the microcontroller to achieve the tracking and obstacle avoidance functions. The program design of the system is shown in Fig. 24.2.

Fig. 24.1 The hardware scheme design of the system

Fig. 24.2 The program design of the system
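To make the control flow of Fig. 24.2 more concrete, a minimal sketch of the black-line tracking decision logic is given below. It is written in Python only for readability (the actual firmware is C code built with Keil uVision5); it assumes two line sensors that straddle the line and a differential PWM drive, and the sensor-reading and PWM functions are placeholders.

```python
BASE_SPEED = 60   # base PWM duty cycle (percent), assumed value
TURN_SPEED = 35   # reduced duty cycle on the inner wheel while correcting course

def read_line_sensors():
    """Placeholder: return (left_on_black, right_on_black) from the tracking module."""
    raise NotImplementedError

def set_pwm(left_duty, right_duty):
    """Placeholder: write the PWM duty cycles that drive the two wheel motors."""
    raise NotImplementedError

def tracking_step():
    left_on_black, right_on_black = read_line_sensors()
    if not left_on_black and not right_on_black:
        set_pwm(BASE_SPEED, BASE_SPEED)   # centred on the line: go straight
    elif left_on_black and not right_on_black:
        set_pwm(TURN_SPEED, BASE_SPEED)   # drifted right: slow left wheel, steer left
    elif right_on_black and not left_on_black:
        set_pwm(BASE_SPEED, TURN_SPEED)   # drifted left: slow right wheel, steer right
    else:
        set_pwm(0, 0)                     # junction or line lost: stop
```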


24.3 Hardware Circuit Design

Because the 51-series single-chip microcontrollers are only 8-bit devices, their data-processing speed is slow and their peripheral modules are limited, so they cannot easily provide many preset functions and do not meet this design’s needs for precise control and subsequent additional functions. The main control module therefore adopts the STM32F103RBT6 single-chip microcomputer as the core controller, which has a wide range of uses, large and high-speed memory, and many I/O ports and peripherals connected to the external bus, making it very suitable as the control device of this design. To realize the tracking function, we use the TR5000 sensor, which works normally at a supply voltage of 3.3–5 V and has the advantages of high sensitivity and strong anti-interference ability [5]. Its sensitivity can be adjusted by the potentiometers on the left and right sides, and the detection distance can be adjusted between 0 and 3 cm: clockwise rotation increases the detection distance, and counterclockwise rotation decreases it. The circuit of the tracking module is shown in Fig. 24.3.

Fig. 24.3 The circuit of the tracking module


H. Tan et al.

Fig. 24.4 The obstacle avoidance module circuit

Fig. 24.5 The motor drive module circuit

24.4 Results Display and Analysis

After the car is assembled by soldering, the design is programmed with the Keil uVision5 microcontroller development software; once the software compiles without errors, the HEX file is generated and burned into the car’s microcontroller through the serial-port programming software FLYMCU. After the white power button is pressed, the car is powered on: a red indicator lights on the STM32F103RBT6 main control module and on the L298N motor drive module, and a green indicator lights on the TCRT5000 obstacle avoidance module. The actual operation of the car design is shown in Fig. 24.6.

Fig. 24.6 The actual operation diagram of the car design

After the function start button is pressed and the car is placed on the ground where the black line is laid out, the car’s TCRT5000 sensors emit infrared light to scan the position of the black line on the ground and return the scanning results to the single-chip microcomputer, which generates the corresponding PWM waveforms to control the two wheel motors so that the trolley moves forward or turns left or right to follow the black line. The operation of the black line tracking function of the trolley is shown in Fig. 24.7. While the car is running and tracking the black line, if there is an obstacle in front of it, the green indicator lights of the TCRT5000 sensors on both sides of the car light up, and the data from the reflected signal is sent to the single-chip microcomputer, which distributes it to the buzzer and the L298N motor driver: the buzzer sounds an alarm, and the L298N converts the data into the corresponding PWM waveforms and transmits them to the wheel motors so that the car brakes. The operation of the obstacle avoidance function of the trolley is shown in Fig. 24.8.


Fig. 24.7 The operation diagram of the black line tracking function of the trolley

Fig. 24.8 The operation of the obstacle avoidance function of the trolley

24.5 Conclusion

The design of the trolley adopts a double-layer acrylic plate and a three-wheel structure to make the overall construction and the realization of its functions more concise and stable. This article uses the STM32F103RBT6 as the main control chip; information is collected and fed to the microcontroller through the TR5000 infrared photoelectric sensor module and the TCRT5000 infrared reflective sensor, and the processed signals drive the L298N motor driver chip to output the corresponding PWM signals to control the wheel motors, so that the car completes the automatic tracking and obstacle avoidance functions.

Acknowledgements Fund Project: 2019 School-level Scientific Research Project (GKY2019KYZD-5)

References

1. Ma, Z.D., Du, S.L., Zhou, Y.S., Guo, J.C.: Design and application analysis of multifunctional smart car. Sci. Technol. Innovat. 31, 156–157 (2020)
2. Zhang, Y.L., Zhao, Q., Jin, Q.C.: Control system design of multifunctional intelligent car. Electron. Test. 17, 15–17 (2020)
3. Peng, C., Liu, L.I., Tu, Z.F.: The exploration and design of smart cars in the cultivation of college students’ innovative ability. Electron. World (14), 95–98 (2020)
4. Liu, F., Zhang, T.T., Niu, M.H.: Intelligent tracking and obstacle avoidance car based on STM32. Sci. Technol. Wind (19), 18 (2019)
5. Yu, L., Zhi, Y.R., Zhu, Y.F.: Design of smart tracking car based on STM32. J. Chifeng Univ. (Natural Science Edition) 35(04), 108–110 (2019)
6. Guo, Z.Y.: Embedded Technology and Application Development Project Tutorial, STM32 Edition, vol. 1, pp. 239–268. People’s Posts and Telecommunications Press, Beijing (2019)
7. Li, S.N.: The design of a smart car for tracking and avoiding obstacles based on STM32. Dig. Technol. Appl. 36(08), 163–164 (2018)

Chapter 25

Research Status and Trend Analysis of Coal Mine Electro-Mechanical Equipment Maintenance Under the Background of Smart Mine Construction

Libing Zhou

Abstract Intelligent maintenance of coal mine mechanical and electrical equipment is an important part of the construction of intelligent mines and represents a major change in the equipment maintenance management mode. With the continuous development of equipment fault diagnosis and predictive maintenance technology, it has gradually spread to many fields. Mine mechanical and electrical equipment is characterised by great variety, large size, complex structure and a harsh working environment, so many faults occur frequently. Research on fault diagnosis and predictive maintenance of mining electromechanical equipment is of great significance for ensuring the reliable operation of equipment, safe coal mine production and personnel safety. This paper introduces the research status of diagnosis and predictive maintenance of mining electromechanical equipment, summarizes the outstanding progress and typical achievements, analyzes the existing problems and shortcomings, and finally puts forward some thoughts on the development trend of the research.

25.1 Introduction

Equipment fault diagnosis and predictive maintenance is a science and technology concerned with evaluating the current operating state of an equipment system and predicting its development trend. Since the concept was reported in the 1960s, the technology has gradually developed into an emerging discipline that combines theoretical research and practical application, and its emergence and development are of great significance for ensuring the safe and reliable operation of equipment.


There are many types of coal mine electro-mechanical equipment, including fully mechanized mining equipment, washing equipment, lifting equipment, ventilation equipment, water supply and drainage equipment and other categories. This equipment, which is large and structurally complex, often works in a harsh environment, and machine faults inevitably occur after long-term operation, resulting in downtime and threatening personnel safety. Coal mine production currently adopts a largely assembly-line mode of operation, so in serious cases the failure of a single piece of equipment may cause the entire production process to break down and bring relatively large economic losses and social impacts. Consequently, the traditional “post-failure maintenance” approach cannot meet the demands of advanced equipment system management, and mastering the operating status, fault diagnosis and prediction of coal mine equipment in a timely and accurate way is a significant future development trend. In China, the key technologies of operational reliability, safety and maintainability of major products and facilities were listed as important research directions in the national medium- and long-term plan (2006–2020) and the “Mechanical Engineering Discipline Development Strategy Report (2011–2020)”. With the continuous advancement of smart mine construction in recent years, the National Development and Reform Commission, the National Energy Administration and other ministries and commissions (eight in total) jointly issued the “Guiding Opinions on Accelerating the Intelligent Development of Coal Mines” in 2020, proposing the deep integration of artificial intelligence, the internet of things, big data and intelligent equipment with modern coal mining. This means that the maintenance and management of mining equipment should gradually develop in the direction of intelligent diagnosis and prediction.

25.2 Research Status

Extensive research on fault diagnosis and predictive maintenance of different kinds of coal mine equipment has been conducted in the past few years. This paper briefly introduces the research progress on fault mechanisms, sensors and data acquisition, signal processing and analysis, and fault diagnosis and prediction algorithms for coal mine equipment.

25.2.1 Fault Mechanism Study

Failure mechanism study essentially uses theoretical or experimental methods to obtain the rules relating the signal characteristics of equipment to the equipment’s own system parameters under a failure state [1]. Li [2] established horizontal and vertical vibration models of the hoisting container on a steel guide; based on the model and experiments, the curves describing how different steel guide faults, lifting speed and lifting load influence the hoisting container were obtained, laying a foundation for diagnosing mine steel guide failures through vibration testing. Cui [3] built a kinetic model of a mining fan under rotor imbalance fault and obtained the amplitude and phase of the imbalance vibration signal, providing a scientific basis for the maintenance of mining fans. Liu [4] developed dynamic models of a mining vibrating screen under spring failure and obtained, by numerical simulation, the relationship between the amplitude and frequency of the system under different spring failures, providing a theoretical basis for further studies and experiments in spring failure diagnosis. Zhang [5] performed physical property tests on the gears and bearings in the rocker arm system of an underground shearer and studied the fault mechanism of the rocker arm transmission system, providing a valuable reference for the design, use and maintenance of heavy-duty gear transmission systems. Li [6] built comprehensive dynamic models of a mine-used heavy-load reducer by the lumped parameter method and simulated the dynamic characteristics of gears with different faults of varying degrees through virtual prototyping technology, which can help identify gear faults more quickly. Zhao [7] established a mathematical model of the surge boundary line according to the operating characteristics of coal mine fans and revealed the changes in inlet air pressure and flow when fan surge occurs, which can be used as information to diagnose the fault.

25.2.2 Equipment Status Monitoring Research

Timely and accurate acquisition of data on the current operating status of equipment is the basis of fault diagnosis and predictive maintenance. Generally, a complete monitoring system consists of sensors, data acquisition devices, a data transmission network, data display and so on. Zhang [8] designed a real-time monitoring system for a double-drum shearer, which monitors the cutting height, haulage speed, mining face inclination and running position of the coal shearer. The related information is transmitted to the substation near the control table in the underground mine via a CAN bus and then to the monitoring and control room on the surface via industrial Ethernet, where dynamic pictures of the shearer operation and the monitoring data are simulated and displayed on a visualisation platform. Zhao [9], in view of the complex underground working environment, proposed a monitoring system for underground hydraulic supports based on ZigBee technology, which monitors the pressure of the front column, rear column and beam of the hydraulic support. Huang [10] designed an online pressure monitoring system for underground hydraulic supports using WaveMesh wireless ad hoc network technology, which can monitor pressure changes of the hydraulic support in a timely manner. To address imbalanced wire rope tension and hoisting overload in multi-rope friction hoists, Lei [11] designed a particle damping vibration attenuation sensor, which not only effectively dissipates the impact load caused by the coupled vibration of the wire rope and filters the noise out of the wire rope tension signal, but also effectively and accurately measures the tension and hoisting load of the wire rope. For the real-time and precise monitoring of spatial position in a fully mechanized coal face, Mao [12] proposed a multi-sensor information monitoring scheme in which the cutting arm attitude angle is measured by hydraulic cylinder displacement sensors and the spatial position and posture are monitored by ultrasonic sensors, laser sensors and a combined INS based on the fusion of inertial and geomagnetic measurements; experiments indicated good precision. Li [13] designed a belt conveyor monitoring system for coal mines based on RS485 and CAN bus, which includes an emergency stop module, an amplified telephone, a lower computer and a terminal module; test results indicated that the system has the advantages of accurate location and reliable communication. Zhou [14] designed a collecting and computing platform for the predictive maintenance of coal mine electro-mechanical equipment based on the STM32F4 main controller chip, which can collect vibration, temperature, pressure and other equipment data in real time; test results showed small errors, meeting the requirements of on-site condition monitoring and fault diagnosis of equipment in the coal mine field.

25.2.3 Signal Analysis and Processing Study

Processing and analysing the collected signals is essential for acquiring the failure characteristics of equipment and directly determines the accuracy and reliability of diagnosis. Duan [15] established a vibration signal model for local failures of the spur gears of a shearer; the model describes how the gear vibration signal is generated under different local failures and was proved effective in signal analysis during experimental validation. Yang [16] proposed a vibration signal denoising method for the drive roller bearing of a mine-used belt conveyor based on ensemble empirical mode decomposition (EEMD) and fast independent component analysis (FastICA), in order to solve modal aliasing and other issues during vibration signal denoising; the results indicated that the approach effectively denoises the signal and enhances the effectiveness and accuracy of failure diagnosis. For the signal analysis of vibrating screens, Zhu [17] adopted a denoising algorithm based on EEMD and wavelet packets, which was proven reasonable and effective at vibration signal noise reduction through simulation and tests. Guo [18] used the wavelet packet method to decompose the vibration signals of a centrifugal pump and extract characteristic frequency bands, compared the energy values of each band to narrow the analysis range, and on this basis separated the fault characteristic frequency from its third harmonic to extract the fault characteristic frequency more accurately. Hua [19] proposed a translation-invariant multiwavelet and neighbouring-coefficient denoising method to overcome the difficulty of extracting the fault signal, which was verified to be an effective method for extracting fault feature frequencies hidden in background noise.
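As a generic illustration of the denoising step that the studies above rely on (EEMD, wavelet-packet and related methods), the sketch below applies a simple wavelet-threshold denoising to a synthetic vibration-like signal using PyWavelets. It is not the specific algorithm of any of the cited works; the signal, wavelet and threshold choices are assumptions for demonstration only.

```python
import numpy as np
import pywt

# Synthetic "vibration" signal: a tone buried in noise
fs = 2000.0
t = np.arange(0, 1.0, 1.0 / fs)
clean = 0.6 * np.sin(2 * np.pi * 55 * t)
signal = clean + 0.3 * np.random.default_rng(0).normal(size=t.size)

# Multi-level wavelet decomposition
coeffs = pywt.wavedec(signal, "db4", level=5)

# Universal threshold estimated from the finest detail coefficients
sigma = np.median(np.abs(coeffs[-1])) / 0.6745
threshold = sigma * np.sqrt(2 * np.log(signal.size))

# Soft-threshold the detail coefficients and reconstruct
denoised_coeffs = [coeffs[0]] + [pywt.threshold(c, threshold, mode="soft") for c in coeffs[1:]]
denoised = pywt.waverec(denoised_coeffs, "db4")[: t.size]

print("noise std before:", np.std(signal - clean))
print("noise std after: ", np.std(denoised - clean))
```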


25.2.4 Diagnosis and Prediction Algorithm Research

Once the typical fault characteristics of the equipment have been obtained, it is possible to judge whether a fault currently exists and how it is likely to develop. Wang [20] designed a fault diagnosis system for mine main ventilators based on a neural network trained by an extreme learning machine; tests showed improvements in running time and diagnostic accuracy compared with other neural networks. Zhang [21] proposed a fault diagnosis method for mine hoists based on a fuzzy fault tree and a Bayesian network, and verified its effectiveness by applying it to hoist fault diagnosis. Meng [22] built a hoist fault diagnosis method using a fuzzy Petri net model, which helps diagnose complex and fuzzy problems and find the cause of a fault more quickly and efficiently. Cao [23] put forward a fault diagnosis model for shearer rolling bearings based on vibration images and a dynamic convolutional neural network (DCNN); the model handles the diagnosis of rolling bearing faults from one-dimensional vibration signals under complex working conditions and was shown to be correct and efficient in experimental tests. Yang [24] constructed a neural network based on a variation self-adapting particle swarm optimization algorithm to detect faults of different severities in the roadheader rotary table, showing good accuracy and stability and high application value. Lin [25] proposed a tension force prediction method based on a BP neural network, with which the belt conveyor can be adjusted adaptively according to the predicted tension; tests indicated high prediction accuracy. Gao [26] established a fault diagnosis model for underground water pumps based on a fuzzy Petri net and condition monitoring, in which the processed vibration signal was introduced as a fuzzy feature vector and trained with the BP neural network algorithm; after several tests the model showed good accuracy, rapidity and adaptability in pump failure diagnosis.
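As a minimal, hypothetical illustration of the neural-network classification workflow that several of the studies above follow (extract features from monitored signals, then train a classifier), the sketch below uses scikit-learn's MLPClassifier on randomly generated placeholder features; it does not reproduce any of the cited models:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Placeholder data: rows are feature vectors extracted from vibration signals
# (e.g. RMS, kurtosis, band energies); labels are fault classes (0 = normal,
# 1 = inner race fault, 2 = outer race fault). Real studies would extract
# these from monitored equipment signals.
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 12))
y = rng.integers(0, 3, size=600)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0),
)
model.fit(X_tr, y_tr)
print("test accuracy:", model.score(X_te, y_te))
```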

25.3 Problems and Challenges

After decades of development of equipment fault diagnosis technology, researchers in related fields have carried out many studies and achieved fruitful results. Although research on coal mine equipment started relatively late, it has developed rapidly with advancing technology in recent years, yet problems remain. At the same time, the intelligent construction of coal mines poses new challenges to research in this field, mainly as follows.


25.3.1 Insufficient Study of the Failure Mechanism of Equipment

Typical coal mine equipment is often bulky and structurally complicated, so the research object must be simplified to a certain extent when studying its failure mechanism. On this basis, the corresponding mathematical and mechanical models can be established and then verified and improved through simulation and experiments. The problem introduced by this process is that the simplified mathematical model cannot completely reflect all the status information of the equipment, and may even lose important features that characterize its fault state, which increases the difficulty of diagnosis. Therefore, in terms of mechanism research, establishing simplified models (both mathematical and test models) that reflect the state of the equipment as faithfully as possible, and reasonably introducing typical faults into them, require further in-depth research.

25.3.2 Lack of Failure Data from the Coal Mine Site

With the growing level of coal mine informatization, it is no longer difficult to obtain real-time operating status data of equipment. However, because monitoring periods are limited, little fault data can be acquired from the equipment operation process, even though such data are essential for fault diagnosis and predictive maintenance. At present, building a test bench or running computer simulations are the most common solutions in many studies, but these methods are effective only in an idealized environment and cannot truly reflect the actual operating conditions of the equipment. Hence, it is still necessary to accumulate on-site monitoring data over a long period, and especially to capture data covering the entire life cycle of the equipment, which is of great significance for its diagnosis and predictive maintenance.

25.3.3 Low Level of Intelligence in Diagnosis and Prediction

With the intelligent construction of coal mines, research on fault diagnosis and predictive maintenance of mining equipment will surely make continuous progress in many aspects, such as smart sensor technology, smart diagnosis and prediction algorithms, smart decision-making and smart maintenance management. In current research on diagnosis and prediction algorithms, however, the corresponding parameters are often set or modified manually to switch between equipment types or fault types, and the generality and transferability of these algorithms still need to be verified. Therefore, intelligent diagnosis and prediction algorithms


combined with artificial intelligence technology deserve more attention and research. Meanwhile, most current studies focus on a single diagnosis or prediction algorithm, while fewer concentrate on combined applications of algorithms; the efficiency of intelligent diagnosis and prediction can therefore be further improved by combining multiple effective algorithms.

25.4 Development Trend Analysis

Based on the above analysis, research on fault diagnosis and predictive maintenance of mining equipment still faces many problems and challenges. Nevertheless, new achievements and breakthroughs in this field are bound to be attained with the wave of intelligent coal mine construction, especially through continuous fusion with other advanced technologies such as artificial intelligence, big data and 5G communications. The main development trends are summarized as follows.

25.4.1 Earlier Identification of Equipment Faults

Predicting the operating status of mining equipment is the ultimate aim of fault diagnosis and predictive maintenance research. It is therefore crucial to detect failures promptly and adopt corresponding maintenance measures at their early stage. To meet the demand for early fault identification, extremely weak fault characteristic signals must be extracted from strong noise, which helps reveal potential threats hidden in the equipment. The continuous improvement in the sensitivity of sensors and other hardware makes early identification of equipment failures more timely, and progress in signal processing methods for extracting weak signal features against a strong noise background will further support predictive maintenance.

25.4.2 Development of Multi-fault Coupled Diagnosis and Predictive Maintenance

At present, many fault diagnosis and predictive maintenance studies focus on a single failure of mining equipment, such as a bearing, gear or other key component. However, the situation in the field is much more complex than in single-failure research. In fact, what occurs on a machine may be not a single fault but combined


and interacting faults occurring on the machine or equipment. The signals obtained in real time therefore usually contain characteristic components of multiple coupled faults, and a system that processes them with a single-fault diagnosis algorithm may misjudge or miss failures. With the continuing progress of intelligent algorithms, pattern recognition for multiple coupled failures will attract more attention, and the accurate and effective identification of all failures in the equipment is of great significance for fault diagnosis and predictive maintenance research.

25.4.3 Strengthening the Guidance of Simulation and Experimental Data for Field Application

Accumulating on-site monitoring data is a long-term process, so simulation and testing remain crucial means of acquiring typical fault characteristics. To address the discrepancy between simulation or experimental conditions and field conditions, further study of their correlation is needed. With the continuing progress and application of transfer learning and other intelligent approaches, the correlation between data obtained by simulation and experiment and the operating status of field equipment will be further improved. Such progress can provide strong support for fault diagnosis and predictive maintenance at the current stage, when field data are scarce.

25.5 Conclusion

Against the background of intelligent mine construction, research on fault diagnosis and predictive maintenance of mine equipment is bound to enter a new stage. This paper reviews the current state of research on diagnosis and prediction for mine equipment and summarizes three existing problems: insufficient research on fault mechanisms, a lack of sufficient fault data from the field, and the low level of intelligence in diagnosis and prediction. On this basis, the future development trends of the research are analyzed. With the rapid development of technologies such as artificial intelligence, big data and the Internet of Things, research on the maintenance and management of mine equipment will break through successive barriers and make major advances in many aspects. Such technological development will also push fault diagnosis and predictive maintenance of mine equipment from in-depth research toward extensive application, ensuring the safe and efficient operation of mining equipment.

Acknowledgements This work was supported by the Research on Key Technology and Standards of Mine Internet of Things, CCTEG No. 2019-TD-ZD007.


References

1. Chen, Y.S.: Nonlinear dynamical principle of mechanical fault diagnosis. Chinese J. Mech. Eng. 43(1), 25–34 (2007)
2. Li, Z.F.: Study on the Vibration Characteristic and Typical Fault Diagnosis of Mine Hoist System. China University of Mining & Technology (2008)
3. Cui, G.L., Meng, G.Y., Ding, C.: Fault diagnosis of main fan rotor imbalance based on spectrum analysis. Coal Mine Mach. 33(010), 273–274 (2012)
4. Liu, Y., Suo, S., Meng, G., Shang, D., Bai, L., Shi, J.: A theoretical rigid body model of vibrating screen for spring failure diagnosis. Mathematics 7(3), 246 (2019)
5. Zhang, Y.C., Wu, L.J., Yin, M.H., Cui, Y.H., Guo, Z.P.: Fault mechanism analysis of transmission system for a shearer rocker arm. J. Mech. Trans. 42(11), 137–141 (2018)
6. Li, W.: Research on Fault Simulation of Mine-used Heavy Load Reducer Based on Virtual Prototyping Technology. China University of Mining & Technology, Beijing (2012)
7. Zhao, J.: Mechanism of surging of axial-flow ventilator in operation and study of countermeasures. Hydraulic Coal Mining Pipeline Transp. (02), 30–31+35 (2019)
8. Zhang, X., Zhang, P.C., Meng, G.Y., Zhang, X.C.: Development of on time monitoring and measuring system and visualized platform for coal shearer. Coal Eng. (03), 101–103 (2008)
9. Zhao, D., Zong, X.: Design of pressure monitoring system of underground hydraulic support based on ZigBee technology. Indus. Mine Automat. 40(1), 31–31 (2014)
10. Huang, D.Q.: Design of online pressure monitoring system of underground hydraulic support. Indus. Mine Automat. 41(12), 9–11 (2015)
11. Lei, G.Y., Xu, G.Y.: Study on dynamic monitoring methods of wire rope tension and load using particle damping vibration attenuation sensor. Coal Eng. 51(09), 162–165 (2019)
12. Mao, Q.H., Zhang, X.H., Ma, H.W., Xue, X.S.: Study on spatial position and posture monitoring system of boom-type roadheader based on multi sensor information. Coal Sci. Technol. 46(12), 41–47 (2018)
13. Li, H.W., Li, L., Qu, B.N., Song, J.C.: Design of belt conveyor monitoring system based on RS485 and CAN bus. Coal Eng. 49(7), 18–21 (2017)
14. Zhou, L.B.: Design of collecting and computing platform used for predictive maintenance of coal mine electromechanical equipment. Indus. Mine Automat. 46(7), 106–111 (2020)
15. Duan, J.L., Xu, C.Y., Song, J.C., Tian, M.Q., Guo, J., Yan, Z.K., Guan, Q.: Spectrum analysis of partial failure of shearer rocker gear based on vibration model. Indus. Mine Automat. 42(07), 34–39 (2016)
16. Yang, X., Tian, M.Q., Li, L., Song, J.C., Zhang, L.F., Wu, J.K.: Vibration signal denoising method for drive roller bearing of mine-used belt conveyor. Indus. Mine Automat. 45(3), 66–70 (2019)
17. Zhu, M., Duan, Z.S., Guo, B.L.: Denoising analysis of vibration screen bearing signal based on EEMD and wavelet packet. Mach. Design Manuf. (5), 63–67 (2020)
18. Guo, W.Q., Tian, M.Q., Song, J.C., Geng, P.L., Yao, Y.: Wear fault analysis of centrifugal pump impeller based on multi-source signal fusion. Indus. Mine Automat. 44(06), 74–79 (2018)
19. Hua, W., Niu, Z.H., Wang, Z.Y., Leng, J.F.: Mine gearbox fault diagnosis based on neighboring coefficients of translation-invariant multiwavelets. J. China Coal Soc. 041(z1), 253–258 (2016)
20. Wang, H.Y., Chen, Y., Miao, Y.Z., Chen, B.G.: A fault diagnosis system of mine main ventilator. Indus. Mine Automat. 43(6), 69–71 (2017)
21. Zhang, M., Xu, T., Sun, H.H., Meng, X.Y.: Fault diagnosis of mine hoist based on fuzzy fault tree and Bayesian network. Indus. Mine Automat. 46(11), 1–5 (2020)
22. Meng, X.G., Yu, X., Li, X.J.: Fault diagnosis of mine hoist deceleration system based on fuzzy Petri net. Indus. Mine Automat. 45(6), 91–95 (2019)
23. Cao, X.G., Zhang, G.Z., Zhang, X.Y., Zhang, S.N.: Fault diagnosis of shearer rolling bearing based on vibration image and DCNN. Coal Mine Mach. 41(07), 149–152 (2020)
24. Yang, J.J., Tang, Z.W., Wang, X.L., Wang, Z.R., Wu, M.: Roadheader anomaly detection method based on VSAPSO-BP under the single category learning. Vib. Test Diagnosis 39(01), 130–135+226 (2019)


25. Lin, G.X.: Tension force prediction method for mine-used belt conveyor. Indus. Mine Automat. 44(10), 38–42 (2018)
26. Gao, Z.Z., Gong, Q.Y., Zhao, L.N., Xu, H.Q., Xiao, J.Y.: Fault diagnosis of underground water pump based on fuzzy Petri net and condition monitoring. Indus. Mine Automat. 42(05), 28–31 (2016)

Chapter 26

A Novel Conditioning Circuit for Testing Tank Fire Control Systems Yang Cao, Hongtian Liu, Chao Song, Hai Lin, Dongjun Wang, and Hongwei Wu

Abstract In view of the failure problems in the actual use of the new tank fire control system, this paper designs a detection and conditioning circuit for the fire control system to obtain its dynamic and comprehensive data in real time and to mark the test time. The objective of this conditioning circuit is to solve the problem of not being able to obtain the dynamic integrated data of the fire control system in real time, which results in insufficient data to assess the technical status of the fire control system and to locate the exact time point of failure. Furthermore, it can read, retrieve, sort and calculate the dynamic integrated monitoring data in real time, solving the problem of not being able to quickly and accurately assess the technical status of the fire control system due to the inability to process the monitoring data in real time.

26.1 Introduction

According to failure statistics for the upper weapon system of a new main battle tank, failures occur most frequently in the electrical part of the fire control system (FCS). Once a fault of this kind has occurred, it is usually difficult to diagnose its cause directly, and the fault is hard to reproduce. To address this problem, it is necessary to design a fast and efficient information-based fault detection and conditioning circuit (hereafter referred to as the conditioning circuit), which can acquire data signals in real time during status monitoring of the fire control system, improve the real-time performance and accuracy of fire control system testing, and improve the efficiency of equipment support [1]. The design of the conditioning circuit is one of the key elements of the fault detection design. At present, China has made great progress in detecting faults in the fire control systems of armoured equipment and has developed a number of conditioning circuits. However, these conditioning circuits share some general problems.


First, some conditioning circuit designs are relatively simple: they can only detect a single device or a certain function and cannot meet the needs of system-level integration. Second, some conditioning circuits are slow and have low detection accuracy, which does not satisfy the requirement for fast and efficient detection. Third, some conditioning circuits cannot monitor the system status and record its parameters in real time, especially while the fire control system is in use, so hard-to-reproduce faults cannot be diagnosed accurately.

26.2 The Functional Requirements and Composition of the Conditioning Circuit

In view of the practical problems described above and of the available fault statistics for the fire control systems of armoured equipment, the functional requirements and composition of the conditioning circuit are as follows.

26.2.1 Analysis of the Functional Requirements of the Conditioning Circuit

The conditioning circuit is required to acquire the dynamic integrated data of the fire control system in real time and to mark the test time, so as to solve the problem that, owing to insufficient data acquisition, the technical status of the fire control system cannot be assessed and the exact time of a failure cannot be located. Moreover, it needs to be able to read, retrieve, sort, calculate and otherwise process the dynamic integrated monitoring data in real time, so that the technical status of the fire control system can be assessed quickly and accurately [2].

26.2.2 Composition and Function of the Conditioning Circuit

The conditioning circuit includes an I/O control signal circuit, a DC signal acquisition circuit and a switching acquisition circuit, which perform signal control, isolation, attenuation or amplification and other conditioning. Its block diagram is shown in Fig. 26.1.


Fig. 26.1 Block diagram of the conditioning circuit

26.2.2.1 Composition and Function of the I/O Control Signal Circuit

The I/O control signal circuit is used to output the voltage required by the component under test. The I/O control circuit diagram is shown in Fig. 26.2. The circuit consists of the HFD4/5 relay K10 and the 1N4148 diode D10. The input signal of the circuit is IO1, which is connected to common terminals (COM) 1 and 2 of relay K10. Normally closed terminals (NC) 1 and 2 of K10 are left open, and normally open terminals (NO) 1 and 2 are connected to the +24 V voltage signal. The +5 V voltage signal is connected to the positive control terminal of K10 and the control signal P2.0 to the negative control terminal; the negative terminal of diode D10 is connected to the positive control terminal of K10 and its positive terminal to the negative control terminal. As shown in the schematic diagram (Fig. 26.2), IO1 is the output of the processing board, which is connected to the component under test through the detection cable; K10 is an HFD4/5 relay switched on and off by the +5 V control signal; P2.0 is a resource signal of the multi-functional acquisition module; and D10 is a 1N4148 diode. When the component under test requires an output of +24 V, the detection device issues a control command that drives the P2.0 resource of the multi-functional acquisition module low, at which point relay K10 is energized, that is, pin 3 of K10 is connected to pin 4 and pin 6 to pin 5, so that IO1 is connected to +24 V and signal control is achieved.

Fig. 26.2 The I/O control circuit schematic diagram


In this process, D10 plays a protective role, protecting the relay from sudden voltage changes and prolonging the service life of the circuit. The I/O signal control circuit is therefore designed with built-in circuit protection, through which it protects the internal circuitry of the testing equipment.
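The control behaviour described above can be summarized by a tiny truth-table model; the following sketch is only an illustration of the described logic (drive P2.0 low to energize K10 and route +24 V to IO1), not firmware for the actual acquisition module:

```python
def io1_output(p2_0_level: int) -> float:
    """Model of the I/O control behaviour described above.

    When the control line P2.0 is driven low, relay K10 is energized and its
    normally open contacts connect IO1 to the +24 V rail; otherwise IO1 is
    left unconnected (represented here as 0 V).
    """
    relay_energized = (p2_0_level == 0)
    return 24.0 if relay_energized else 0.0

for level in (1, 0):
    print(f"P2.0={level} -> IO1={io1_output(level)} V")
```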

26.2.2.2 Composition and Function of the DC Signal Acquisition Circuit

The DC signal acquisition circuit is used to acquire DC voltage signals; its diagram is shown in Fig. 26.3. It consists of a protection circuit, a voltage-divider circuit and filter circuits, and is built from a transient voltage suppressor D1 of model SMAJ36A, a resistor R1 of 49.9 kΩ, a resistor R2 of 100 Ω, a resistor R3 of 10 kΩ, capacitors C1, C2, C3 and C4 of 100 nF/50 V, an operational amplifier U4A of model LM324, and a resistor R4 of 10 Ω. The input signal of the circuit is U1, which is connected to one end of resistor R1 and to the negative end of transient voltage suppressor D1; the other end of R1 is connected to one end of R2, and the other end of R2 is connected to one end of R3, one end of capacitor C3 and the non-inverting input of operational amplifier U4A. The positive end of transient voltage suppressor D1, the other end of resistor R3 and the other end of capacitor C3 are connected together to GND. The inverting input of operational amplifier U4A is connected to its signal output; the output of U4A is connected to one end of resistor R4; the other end of R4 is connected to the output signal AD1 and to one end of capacitor C4; the other end of capacitor C4 is connected

Fig. 26.3 The DC signal acquisition circuit


to GND. The positive supply pin of U4A is connected to the +15 V voltage signal and to one end of capacitor C1, whose other end is connected to GND; the negative supply pin of U4A is connected to the −15 V voltage signal and to one end of capacitor C2, whose other end is connected to GND. The input signal U1 is connected to the input end of the transient voltage suppressor D1, whose output end is grounded. The suppressor type is SMAJ36A: when the input DC signal exceeds 36 V, the input voltage is clamped to 36 V so that the signal stays within the input range of the conditioning circuit. The input signal is then connected to one end of resistor R1, the other end of R1 to one end of R2, the other end of R2 to one end of R3, and the other end of R3 to ground. The resistance values of R1, R2 and R3 are 49.9 kΩ, 100 Ω and 10 kΩ, respectively, forming a divider with a ratio of R3/(R1 + R2 + R3), i.e. 1/6, so the voltage at the junction of R2 and R3 is 1/6 of the input voltage. The divided voltage is connected to one end of capacitor C3 (100 nF/50 V), whose other end is grounded; C3 absorbs the high-frequency content of the divided voltage, which is then fed to the non-inverting input of the operational amplifier LM324, pin 3. Through the voltage follower formed by the LM324, the divided voltage is passed to the output of the LM324, pin 1, and then through resistor R4 (10 Ω), a current-limiting resistor that keeps the current delivered to the multi-functional acquisition module at the back end within its accepted range and thereby protects the module. The voltage is further filtered by capacitor C4 (100 nF/50 V). AD1 is the name of the corresponding resource on the multi-functional acquisition module, which is connected to the testing equipment platform via the resource cable and is finally sampled by the acquisition module. Capacitors C1 and C2 shown in Fig. 26.3 are power supply filter capacitors of 100 nF/50 V; the supply voltage of the LM324 is ±15 V, and C1 and C2 absorb high-frequency components on the supply rails so that the operational amplifier is not affected by power supply fluctuations. The DC signal acquisition circuit thus provides input signal protection, power supply filtering and output signal filtering, protecting both the component under test and the testing equipment.
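As a quick numerical check of the description above, the following sketch computes the 1/6 division ratio and the voltage presented to the acquisition module for a few hypothetical input levels, applying the 36 V clamp of the SMAJ36A first:

```python
R1, R2, R3 = 49_900.0, 100.0, 10_000.0  # ohms, as given in the text
ratio = R3 / (R1 + R2 + R3)
print(f"division ratio = {ratio:.4f} (about 1/6)")

def conditioned_voltage(u_in: float, clamp: float = 36.0) -> float:
    """Voltage seen by the acquisition module for a DC input u_in (volts)."""
    u_clamped = min(u_in, clamp)      # transient voltage suppressor limit
    return u_clamped * ratio          # resistive divider R1-R2-R3

for u in (5.0, 24.0, 28.5, 50.0):     # hypothetical input voltages
    print(f"input {u:5.1f} V -> ADC side {conditioned_voltage(u):5.2f} V")
```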

26.2.2.3 Composition and Function of the Switching Acquisition Circuit

The switching acquisition circuit is used for collecting switching signals. The diagram of the switching acquisition circuit is shown in Fig. 26.4.


Fig. 26.4 The switching acquisition circuit

The switching acquisition circuit is composed of resistors R20, R21, R22 and R23 of 2 kΩ, an optocoupler U1 of model TLP521-4, and resistors R9, R10, R11 and R12 of 10 kΩ. The input signals of the circuit are LCX, XXS, DDI and SSE. Signal LCX is connected to cathode terminal 1 of optocoupler U1; anode terminal 1 of U1 is connected to one end of resistor R20, whose other end is connected to the +5 V voltage signal. Signal XXS is connected to cathode terminal 2 of U1; anode terminal 2 is connected to one end of resistor R21, whose other end is connected to +5 V. Signal DDI is connected to cathode terminal 3 of U1; anode terminal 3 is connected to one end of resistor R22, whose other end is connected to +5 V. Signal SSE is connected to cathode terminal 4 of U1; anode terminal 4 is connected to one end of resistor R23, whose other end is connected to +5 V. Collector terminal 1 of U1 is connected to the +5 V voltage signal; emitter terminal 1 is connected to the output signal P1.0 and to one end of resistor R9, whose other end is connected to DGND. Collector terminal 2 of U1 is connected to +5 V; emitter terminal 2 is connected to the output signal P1.1 and to one end of resistor R10, whose other end is connected to DGND. Collector terminal 3 of U1 is connected to +5 V; emitter terminal 3 is connected to the output signal P1.2 and to one end of resistor R11, whose other end is connected to DGND. Collector terminal 4 of U1 is connected to +5 V; emitter terminal 4 is connected to the output signal P1.3 and to one end of resistor R12, whose other end is connected to DGND.


The switching acquisition circuit is designed with input and output signal isolation, through which the measured components and the detection equipment are protected from interfering with each other.

26.3 The Working Principle of the Conditioning Circuit

The conditioning circuit is located on the processing board of the detection adapter for the fire control system or for the fire control computer. The control signal from the CPU control module is output by the multi-functional acquisition module and transmitted to the conditioning circuit through the resource cable, where it is processed so that the detection adapter of the fire control system or fire control computer works normally. Conversely, the acquisition signal from the fire control system or fire control computer is conditioned by the adapter's conditioning circuit so that its voltage falls within the acquisition range of the multi-functional acquisition module, and it is then transmitted to that module via the resource cable [3].

26.3.1 Principle of the I/O Control Signal Circuit

The test adapter transmits the I/O control signal of the multi-functional acquisition module to the processing board via the resource cable. The test equipment platform then sets the output of the multi-functional acquisition module high or low, which in turn switches the relay on or off and thus controls the output of the test adapter.

26.3.2 Principle of the DC Signal Acquisition Circuit

The DC voltage signal output by the component under test is transmitted to the detection adapter through the detection cable. Because this voltage lies outside the acquisition range of the multi-functional acquisition module, it must be scaled down by the conditioning circuit to fall within that range. The conditioned DC voltage signal is transferred to the multi-functional acquisition module via the resource cable and then to the CPU control module, which calculates and processes it before displaying the result on the touch screen [4].


26.3.3 Principle of the Switching Acquisition Circuit

A switching signal has no continuous variation: the voltage is +5 V when there is an output and 0 V (or floating, or grounded) when there is none. This design uses an optocoupler to collect the switching signals. The optocoupler provides signal isolation, which protects the fire control system or fire control computer and the detection equipment from interfering with each other during operation. The LCX signal is used here as an example to illustrate the acquisition principle. LCX is an output signal of the component under test; it is connected to pin 2 of the TLP521-4 optocoupler U1 and, together with the 2 kΩ resistor R20, forms the signal acquisition loop. The current path runs from +5 V through R20 to the first light-emitting diode inside U1 and finally to the LCX signal. When LCX is low, the loop is closed and the light-emitting diode inside U1 lights up, transmitting an optical signal to the next stage. The phototransistor inside U1, whose collector is tied to +5 V, is externally connected to the 10 kΩ resistor R9, whose other end is connected to ground. The node between them is the P1.0 signal, the resource that the multi-functional acquisition module connects to the detection adapter via the resource cable, and it is normally low. When the light-emitting diode inside U1 lights up, its optical signal turns on the corresponding phototransistor, so the P1.0 voltage collected by the multi-functional acquisition module changes from low to high; this level is then passed to the CPU control module for data analysis and processing [5].
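The acquisition logic for the four switching inputs can be summarized as follows; this sketch only models the described behaviour (an active-low input lights the LED, the phototransistor conducts and the corresponding P1.x line reads high) and is not code for the detection equipment:

```python
def read_switch_channels(inputs: dict) -> dict:
    """Map active-low switching inputs to the P1.x levels the module would read.

    inputs: e.g. {"LCX": 0, "XXS": 1, ...}, where 0 means the component under
    test pulls the line low (LED on) and 1 means +5 V / floating (LED off).
    """
    channel_map = {"LCX": "P1.0", "XXS": "P1.1", "DDI": "P1.2", "SSE": "P1.3"}
    levels = {}
    for name, p1 in channel_map.items():
        led_on = (inputs.get(name, 1) == 0)   # active-low input lights the LED
        levels[p1] = 1 if led_on else 0       # phototransistor pulls P1.x high
    return levels

print(read_switch_channels({"LCX": 0, "XXS": 1, "DDI": 1, "SSE": 0}))
```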

26.4 Experimental Testing

After the development of the conditioning circuit was completed, tests were carried out with actual equipment components. During this process the conditioning circuit worked continuously for not less than 8 h per day. The results show that its state remained normal and that the actual equipment components were in good technical condition. The tests confirmed that the conditioning circuit meets the design requirements for every function. Test data are not provided in this paper for confidentiality reasons.

26.5 Conclusion

The conditioning circuit can obtain the dynamic integrated data of the fire control system in real time and mark the test time, solving the problem that, owing to insufficient data acquisition, the technical status of the fire control system could not be assessed and the time of a failure could not be located. Furthermore, the circuit is


able to further read, retrieve, sort and calculate the dynamic integrated monitoring data in real time, thus enabling a rapid and accurate assessment of the technical status of the fire control system. In summary, the conditioning circuit designed in this study has the following advantages.
(1) A reasonable data reading strategy, which ensures the processing speed and execution efficiency for analogue data and bus data and guarantees that both types of data are stored without loss.
(2) A reasonable data processing strategy, which ensures that all valid information is accurately detected and parsed from each reading of the mixed bus data, that invalid information is properly handled, and that the analogue data and bus data are stored in parallel, time-stamped and arranged in chronological order, preserving the correlation and continuity of the two data streams.
(3) A reasonable data storage strategy, which supports both automatic and user-defined storage of test data, ensures stable and rapid storage of large amounts of test data, and facilitates the preservation of key test data by users.

References

1. Wei, Y.: Fire Control and Command Control. Beijing University of Technology Press (2003)
2. Xu, J., Cao, Y., Zhang, X., Han, J.: The design of circuit board fault detection system for fire control system of armored vehicles. Fire Control Command Control (2006)
3. Zhao, J., Fu, X., Dong, P.: Newly Edited—Sensor Circuit Design Manual. China Metrology Press (2002)
4. Meng, X., Wang, Y.: Optimal design of common sensor conditioning circuits. Electron. Sci. Technol. (2016)
5. Ning, F., Xiong, Y.: Design of fault detector for fire control system based on embedded technology. Electr. Autom. (2014)

Chapter 27

Intelligent Auxiliary Fault Diagnosis for Aircraft Using Knowledge Graph Xilang Tang, Bin Hu, Jianhao Wang, Chuang Wu, and Sohail M. Noman

Abstract This paper proposes a new intelligent auxiliary diagnosis method based on a knowledge graph, which helps grass-roots maintenance engineers quickly and accurately locate the faulty unit of an aircraft. First, an aircraft fault knowledge graph is extracted from fault text data by machine learning. Second, a knowledge representation learning method is used to map the fault knowledge graph into dense low-dimensional real-valued vectors. Finally, the fault knowledge related to the observed fault phenomena is retrieved by cosine similarity.

27.1 Introduction

Fault diagnosis of aircraft is becoming more and more difficult as aircraft functions and structures grow more complex. An "intelligent auxiliary diagnosis system" can generate a diagnosis strategy and guide maintenance engineers through appropriate test steps to quickly and accurately locate the faulty unit. Constructing such a system requires a basic knowledge base covering the aircraft's hierarchical structure, the functions and connection relationships of its units, observable signal parameters, potential fault risks, available test means and so on [1]. Using knowledge graph technology [2], this knowledge can be extracted automatically from documents such as FMECA (failure mode, effects and criticality analysis) documents, aircraft maintenance support teaching materials, test and diagnosis records, quality control data, maintenance support records, fault analyses and research reports.


Based on the fault knowledge graph, possible fault causes and diagnosis strategies can be deduced from the observed fault phenomena by semantic retrieval and knowledge reasoning [3], so that maintenance engineers can be guided through troubleshooting. This paper is organized as follows: Sect. 27.2 introduces the framework of auxiliary fault diagnosis technology based on the knowledge graph; Sect. 27.3 illustrates the construction of the aircraft fault knowledge graph; and Sect. 27.4 describes auxiliary diagnosis using the fault knowledge graph.

27.2 The Framework of Auxiliary Fault Diagnosis Technology Based on Knowledge Graph

The framework of auxiliary fault diagnosis technology based on the knowledge graph is shown in Fig. 27.1. A large amount of aircraft fault text data is available through the information sharing mechanism, including product descriptions, various teaching materials, fault compilations and FMECA documents. With the help of the combination

Fig. 27.1 The framework of auxiliary fault diagnosis technology based on knowledge graph


of manual editing and knowledge mining technology, these unstructured and isolated fault text data can be transformed into a structured and interrelated fault knowledge graph. On the basis of this structured graph, knowledge related to the fault phenomena can be retrieved and reasoned about using semantic retrieval and knowledge reasoning technology. Finally, through the human–computer interface, the system can help maintenance engineers analyze the fault, locate it and adopt appropriate repair strategies.

27.3 Construction of Aircraft Fault Knowledge Graph

Two kinds of problems are involved in constructing a fault knowledge graph from aircraft fault text data: fault entity recognition and fault entity relationship extraction. The purpose of fault entity recognition is to identify, by machine learning, specific instances of fault concepts in the text or data. For example, "fault mode" is a fault concept, while "coil short circuit" is a fault entity; entity recognition should correctly identify "coil short circuit" in the text as an entity of the concept "fault mode". The purpose of relationship extraction is to determine, again by machine learning, whether predefined relationship patterns hold between two fault entities in the text or data. For example, "cause" is a predefined relationship between the fault concepts "fault mode" and "fault phenomenon", and "coil short circuit" and "current distortion" are entities of "fault mode" and "fault phenomenon", respectively; relationship extraction confirms from the text whether a "cause" relationship exists between "coil short circuit" and "current distortion". Because deep learning has unique advantages in extracting features from unstructured data such as text and avoids the cumbersome construction of feature templates, this paper mainly uses deep learning to obtain fault knowledge from the fault text data. Fault entity recognition, for instance, can be cast as sequence annotation, i.e. marking which entity concept each word belongs to. A bidirectional long short-term memory network (BiLSTM) reflects the influence of both past and future context on the current token and is well suited to sequence feature extraction, so a BiLSTM is used for feature encoding in entity recognition, and a conditional random field (CRF) is then used for decoding. The BiLSTM-CRF entity recognition scheme is shown in Fig. 27.2, where the word vectors are output by a pre-trained BERT model (a word vector tool provided by Google); in the CRF annotation, "FM" denotes the fault mode, and "B", "I" and "O" denote the beginning of an entity, the inside of an entity and a non-entity token, respectively. The engine is the heart of the aircraft, and ensuring its integrity is the basis for safe flight; using the above method, we constructed a fault knowledge graph extracted from fault maintenance records.
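A minimal PyTorch sketch of the BiLSTM emission network used for such BIO tagging is shown below. It is illustrative only: the pre-trained BERT word vectors are replaced by a trainable embedding, the CRF decoding layer is replaced by a simple argmax, and the vocabulary size and tag set are assumed values:

```python
import torch
import torch.nn as nn

TAGS = ["O", "B-FM", "I-FM"]  # illustrative tag set: fault-mode entities, BIO scheme

class BiLSTMTagger(nn.Module):
    """BiLSTM emission network for fault entity recognition (CRF decoding omitted)."""
    def __init__(self, vocab_size=5000, emb_dim=128, hidden=64, n_tags=len(TAGS)):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)   # stand-in for BERT vectors
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.emit = nn.Linear(2 * hidden, n_tags)          # per-token tag scores

    def forward(self, token_ids):                          # (batch, seq_len)
        h, _ = self.lstm(self.embed(token_ids))
        return self.emit(h)                                 # (batch, seq_len, n_tags)

model = BiLSTMTagger()
tokens = torch.randint(0, 5000, (2, 12))                    # dummy token ids
scores = model(tokens)
pred = scores.argmax(dim=-1)                                # a CRF layer would decode here
print(pred.shape)  # torch.Size([2, 12])
```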


Fig. 27.2 Fault entity identification

27.4 Auxiliary Diagnosis Using Fault Knowledge Graph

Because the aircraft structure is very complex, the fault knowledge graph is very large, with millions of nodes and edges. Traditional graph algorithms, rule reasoning and other methods for retrieving the knowledge related to fault phenomena therefore become very inefficient. Hence, the triples in the large-scale fault knowledge graph are first mapped into dense low-dimensional real-valued vectors by a knowledge representation learning method [4]; for example, bi-linear tensors are used to connect the head and tail entity vectors of the fault knowledge graph in different dimensions, as shown in Fig. 27.3. The local fault knowledge graph related to the fault phenomena is then extracted using cosine similarity and link prediction, as shown in Fig. 27.4. The specific steps are as follows (a brief numerical sketch is given after the list).

Fig. 27.3 Knowledge representation learning based on tensor neural network model

Fig. 27.4 Fault case matching based on cosine similarity

(1) The description entity of the fault phenomenon is expressed as $f(q) = V^{\top}\varphi(q)$, where $V$ is the word vector representation matrix for the elements of the fault phenomenon and $\varphi(q)$ is the combination of the semantic information of these elements produced by the tensor neural network model.
(2) A fault case is represented as $g(t) = W^{\top}\psi(t)$ by adding the vectors of the entities and relationships in the triple, where $W$ is the vector representation matrix of the entities and relationships in the fault knowledge graph and $\psi(t)$ indicates the entities and relationships appearing in the triple.
(3) Finally, the similarity of the two vectors is calculated as $S(q, t) = f(q)^{\top} g(t)$.
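A small numpy illustration of this matching step is given below; the embedding vectors for the fault phenomenon and the candidate fault cases are random placeholders standing in for the learned representations, and the case names other than "coil short circuit" are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 50

def normalize(v):
    return v / np.linalg.norm(v)

# Placeholder embeddings: f(q) for the observed fault phenomenon and g(t) for
# candidate fault-case triples drawn from the knowledge graph.
f_q = normalize(rng.normal(size=dim))
cases = {
    "coil short circuit -> current distortion": normalize(rng.normal(size=dim)),
    "bearing wear -> abnormal vibration": normalize(rng.normal(size=dim)),
    "fuel pump blockage -> pressure drop": normalize(rng.normal(size=dim)),
}

# S(q, t) = f(q)^T g(t): rank candidate cases by cosine similarity
scores = {name: float(f_q @ g_t) for name, g_t in cases.items()}
for name, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{s:+.3f}  {name}")
```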

Based on the knowledge graph constructed in Sect. 27.3, the local fault knowledge graph can be matched with the above method when an engine fault occurs, and the possible fault causes can be analyzed. Then, with the help of rule-based reasoning, a diagnosis strategy is generated that guides the maintenance engineer to quickly locate the faulty unit.

27.5 Conclusion

This paper introduces a new intelligent auxiliary diagnosis method based on a knowledge graph. The aircraft fault knowledge graph is extracted from fault text data such as maintenance records and fault reports. Using a knowledge representation learning method, the graph is mapped into dense low-dimensional real-valued vectors, and the local fault knowledge graph related to the fault phenomena is then extracted by cosine similarity.

Acknowledgements This work was supported by the China Postdoctoral Science Foundation, Project No. 2021M693941.


References

1. Abid, A., Khan, M.T., Iqbal, J.: A review on fault detection and diagnosis techniques: basics and beyond. Artif. Intell. Rev. 1–26 (2020)
2. Nothman, J., Ringland, N., Radford, W., et al.: Learning multilingual named entity recognition from Wikipedia. Artif. Intell. 194(1), 151–175 (2013)
3. Chen, X., Jia, S., Xiang, Y.: A review: knowledge reasoning over knowledge graph. Expert Syst. Appl. 141, 1–21 (2020)
4. Zhen, T., Xiang, Z., Yang, F., et al.: Knowledge representation learning via dynamic relation spaces. In: 2016 IEEE 16th International Conference on Data Mining Workshops (ICDMW). IEEE (2017)

Chapter 28

Study on the Relationship Between Ship Motion Attitude and Wave Resistance Increase Based on Numerical Simulation Yuhao Cao and Yuliang Liufu

Abstract The CFD method is used to construct a numerical wave pool, to simulate the generation of regular waves around a ship, and to study and calculate the motion of a ship sailing in head waves, clarifying how the wave environment is represented under numerical simulation and then verifying and analyzing the wave propagation, wave absorption and other behaviour formed in the numerical wave pool. The resulting data are compared with the experimental data of DUT, demonstrating that analysis with numerical simulation at its core is practical and controllable: it not only directly shows the flow field surrounding the ship, but also offers broad scope for development in ship research such as hydrodynamic performance studies and motion prediction.

28.1 Introduction

Nowadays, with the continuous innovation of network technology and information concepts, computational fluid dynamics (CFD) is widely used in engineering research and analysis. On the basis of the comprehensive integration of CFD technology with ship performance prediction research, the concept of ship CFD has emerged, and as domestic and foreign researchers deepen the application of ship CFD, the numerical wave pool (NWT) technology included in it has been further promoted. A numerical wave pool can reproduce the physical towing-tank model experiment and carry out the calculation and analysis of ship hydrodynamic performance on the basis of ship motion. Compared with a physical pool, the numerical wave pool offers greater application value: on the one hand, the actual cost is lower and there is no need for contact-type flow field measurements; on the other hand, it effectively avoids the adverse effects of sensor size, model deformation and other factors, and


can obtain more systematic flow field information. At present, numerical wave pool technology has become one of the main topics in ship hydrodynamics research. Numerical wave pool technology and numerical simulation methods based on viscous flow theory occupy an important position in the study of ship motion in waves; they have been a focus of domestic and foreign researchers in recent years and have produced excellent results in practical research projects. For example, Park et al. designed a three-dimensional numerical pool using the N-S equations and studied the interaction between nonlinear waves and fixed three-dimensional bodies. Luquet et al. developed the SWENSE approach, which can calculate the unsteady flow field and wave pattern of the DTMB5415 model sailing in regular head waves. Guo Haiqiang, Zhu Renchuan et al. proposed techniques for testing and analyzing ship hydrodynamic coefficients in a numerical wave pool. Although more and more work is based on numerical simulation of ship motion, the emphases of different researchers differ, the results obtained differ accordingly, and some of the calculated results still need further verification and analysis. In order to better predict ship motion in waves, a numerical wave pool is constructed here based on viscous-flow CFD theory, the motion of a ship sailing in regular head waves and the associated added wave resistance are calculated, and the numerical simulation analysis is completed [1–4].

28.2 Numerical Wave Pool and Hull Motion

28.2.1 Numerical Wave Pool

Based on viscous-flow CFD theory and multiphase flow theory, a three-dimensional numerical wave pool is constructed with reference to the physical test pool, as shown in Fig. 28.1.

Fig. 28.1 Analysis diagram of pool structure based on numerical wave

In the calculation and analysis of this paper, the reference frame is fixed to the ship, which sails in steady state at speed U. The overall flow field is obtained from the governing equations based on the N–S equation and the


continuity equation, as shown below:

$$\frac{\partial \rho}{\partial t} + \frac{\partial (\rho u_i)}{\partial x_i} = 0 \quad (i = 1, 2, 3)$$

$$\frac{\partial (\rho u_i)}{\partial t} + \frac{\partial (\rho u_i u_j)}{\partial x_j} = -\frac{\partial p}{\partial x_i} + \frac{\partial}{\partial x_j}\left[\mu\left(\frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i}\right)\right] + \rho f_i + S_i \quad (i, j = 1, 2, 3)$$

In the above formulas, $u_i$ is the velocity component of the fluid particle in the $i$ direction, $f_i$ is the mass force, $S_i$ is the source term, and $p$ is the fluid pressure. The mixture density is defined as $\rho = \sum_{q=1}^{2} a_q \rho_q$, where the volume fraction $a_q$ is the ratio of the volume of fluid of phase $q$ to the total volume of the cell, with $\sum_{q=1}^{2} a_q = 1$. The dynamic viscosity coefficient $\mu$ is weighted by the phase volume fractions and is defined in the same form as the density. The motion of the free surface is captured by the VOF method, whose transport equation is

$$\frac{\partial a_q}{\partial t} + \frac{\partial (u_i a_q)}{\partial x_i} = 0 \quad (i = 1, 2)$$

where $a_1$ and $a_2$ are the volume fractions of the air phase and the water phase, respectively, and the free surface is defined by $a_q = 0.5$. When simulating the motion of a ship sailing in waves, a wave environment that meets the requirements must first be generated, in other words, the waves must be generated numerically in the pool. There are many common numerical wave-making methods, which can be divided into two types according to their mechanism: source wave making, including momentum-source and mass-source wave making, and boundary simulation methods, such as flap-type (rocking-plate) and piston-type (pushing-plate) wave making. In the numerical simulation, prescribing the wave velocity at the inlet boundary is both convenient to operate and easy to implement. For waves in deep water, according to linear theory, the wave surface equation and the velocity field of a regular wave are, respectively,

$$\eta = \xi_a \cos(kx - \omega_0 t)$$

$$u = \omega_0 \xi_a e^{kz} \cos(kx - \omega_0 t), \qquad v = 0, \qquad w = \omega_0 \xi_a e^{kz} \sin(kx - \omega_0 t)$$


In the above formulas, $\xi_a$ is the wave amplitude, $k$ is the wave number, $\omega_0$ is the natural frequency of the wave, $Ox$ points in the direction of wave propagation along the ship, $Oy$ is the width direction of the pool, and $Oz$ is the water depth direction. The motion of a flexible wave-making plate can thus be simulated, and waves are generated at the inlet boundary with the prescribed velocity distribution. An artificial damping zone is placed at the end of the numerical wave pool; its length is chosen as 1 to 2 times the wavelength of the ship's steady-state waves. Within this zone, the vertical velocity of the fluid particles is forcibly attenuated according to the formulas below, where $\mu(x, z)$ is the attenuation function:

$$w_r(x, y, z; t) = w(x, y, z; t) \cdot \mu(x, z)$$

$$\mu(x, z) = \alpha \cdot \left(\frac{x - x_s}{x_e - x_s}\right)^2 \cdot \frac{z_b - z}{z_b - z_{fs}}$$

Here $x$ and $z$ satisfy $x_s \le x \le x_e$ and $z_b \le z \le z_{fs}$, $\alpha$ is the damping control parameter, the subscripts $s$ and $e$ denote the start and end of the damping zone in the $Ox$ direction, and the subscripts $b$ and $fs$ denote the bottom and the free surface of the pool, respectively.
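The inlet velocity prescription and the damping function can be illustrated numerically as follows; the parameter values are arbitrary sample values, and the damping formula is the reconstruction given above:

```python
import numpy as np

g = 9.81
xi_a, wavelength = 0.045, 2.25             # sample amplitude (m) and wavelength (m)
k = 2 * np.pi / wavelength                  # wave number
w0 = np.sqrt(g * k)                         # deep-water dispersion relation

def inlet_velocity(x, z, t):
    """Horizontal and vertical particle velocities of the linear incident wave."""
    u = w0 * xi_a * np.exp(k * z) * np.cos(k * x - w0 * t)
    w = w0 * xi_a * np.exp(k * z) * np.sin(k * x - w0 * t)
    return u, w

def damping_factor(x, z, xs, xe, zb, zfs, alpha=1.0):
    """Attenuation function mu(x, z) applied in the damping zone xs <= x <= xe."""
    return alpha * ((x - xs) / (xe - xs)) ** 2 * ((zb - z) / (zb - zfs))

print(inlet_velocity(x=0.0, z=-0.1, t=1.0))
print(damping_factor(x=9.0, z=-0.5, xs=8.0, xe=12.0, zb=-2.0, zfs=0.0))
```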

28.2.2 Hull Motion

When the ship operates in head waves, the hull motion and the surrounding flow are coupled. According to the theory of ship motion in waves, the force and the moment acting on the ship are

$$\mathbf{F} = \oint_S (\boldsymbol{\tau} - p\mathbf{I}) \cdot \mathbf{n} \, \mathrm{d}S - mg\mathbf{k}$$

$$\mathbf{M}_C = \oint_S (\mathbf{r} - \mathbf{r}_C) \times \left[(\boldsymbol{\tau} - p\mathbf{I}) \cdot \mathbf{n}\right] \mathrm{d}S$$

In the above formulas, $\mathbf{F}$ and $\mathbf{M}_C$ are, respectively, the external force vector on the hull and the moment vector about its centre of mass, $\boldsymbol{\tau}$ is the shear stress, $p$ is the pressure, $\mathbf{n}$ is the outward normal of the hull surface, $m$ is the mass of the model, $\mathbf{r}_C$ is the position vector of the centre of mass, $\mathbf{r}$ is the position vector of an arbitrary point on the hull surface, and $g$ is the acceleration of gravity. Combining Newton's second law with the theorem of the motion of the centre of mass and the theorem of moment of momentum about the centre of mass, the


governing equations of the six-degree-of-freedom motion of the ship can be obtained as follows:

$$\frac{\mathrm{d}}{\mathrm{d}t}(m \cdot \mathbf{v}_C) = \mathbf{F}$$

$$\frac{\mathrm{d}}{\mathrm{d}t}(\mathbf{I}_C \cdot \boldsymbol{\omega}_C) = \mathbf{M}_C$$

In the above formulas, $\mathbf{v}_C$ is the velocity vector of the motion of the hull's centre of mass, $\mathbf{I}_C$ is the moment of inertia tensor, and $\boldsymbol{\omega}_C$ is the angular velocity vector of the hull's rotation.
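A minimal sketch of time-stepping these rigid-body equations is shown below, assuming the force and moment at each step are supplied by the flow solver; the mass, inertia and loads are placeholder values, and a simple explicit Euler update is used for brevity:

```python
import numpy as np

m = 78.0                                   # sample mass (kg)
I_C = np.diag([10.0, 45.0, 45.0])          # sample inertia tensor about the centre of mass

def euler_step(v, omega, F, M, dt):
    """One explicit-Euler update of the rigid-body equations above."""
    v_new = v + dt * F / m                             # d(m v_C)/dt = F
    # d(I_C omega_C)/dt = M_C, including the gyroscopic term omega x (I_C omega)
    omega_new = omega + dt * np.linalg.solve(I_C, M - np.cross(omega, I_C @ omega))
    return v_new, omega_new

v = np.zeros(3)                    # translational velocity of the centre of mass
omega = np.zeros(3)                # angular velocity about the centre of mass
F = np.array([0.0, 0.0, 5.0])      # placeholder hydrodynamic force from the flow solver
M = np.array([0.0, 2.0, 0.0])      # placeholder moment about the centre of mass
v, omega = euler_step(v, omega, F, M, dt=0.01)
print(v, omega)
```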

28.3 Analysis of Simulation Results

For the verification and analysis of the wave environment and ship speed, the Wigley-III model is taken as the object of study, since it is a mathematically defined hull form and its hydrodynamic experimental data are systematic. Its main particulars are listed in Table 28.1. In the simulation the ship speed U is set to 1.085 m/s, 1.627 m/s and 2.170 m/s, corresponding to Froude numbers Fr of 0.2, 0.3 and 0.4, and the incident wavelengths are 1.50 m, 2.25 m, 3.00 m, 3.75 m, 4.50 m, 5.25 m and 6.00 m, as summarized in Table 28.2.

Table 28.1 Parameters of the Wigley-III model
Length L/m: 3.0
Breadth B/m: 0.3
Draft d/m: 0.1875
Displacement volume ∇/m³: 0.078
Longitudinal position of the centre of gravity x_GG/m: 0.1875
Transverse position of the centre of gravity y_GG/m: 0
Vertical position of the centre of gravity z_GG/m: 0
Pitch radius of gyration k_yy/m: 0.75

Table 28.2 Wave parameters of the Wigley-III model sailing in head waves
Incident wave amplitude /m: −0.02
Incident wave length /m: 1.50, 2.25, 3.00, 3.75, 4.50, 5.25, 6.00
Ship model speed /(m/s): 1.085, 1.627, 2.170


In this study, the period was set as 1.0 s and the amplitude of fluctuation was 0.045 m and 0.075 m. Under these two rules, the generation of waves in numerical wave pools and ships can directly simulate and measure regular wave environments. At the same time, the time change of wave surface can be determined according to the wave height monitoring point in the pool. Combined with the results shown in Fig. 28.2, after a period of simulation calculation, when the pool meets the entrance boundary of 3.2 m, the wave surface changes greatly, and the actual period is 1.0 s, and the fluctuation amplitude changes are 0.045 m and 0.075 m, respectively. Combined with the analysis of the change results in the figure below, it can be seen that the regular wave designed in the final simulation can enter a stable state after the completion of the simulation calculation within a period of time, and the change amplitude of the wave is basically the same as the expected set amplitude [5–8]. It should be noted that, in the process of studying and analyzing the numerical simulation of the movement of a sailing ship with regular wave top waves, the motion changes under two degrees of freedom, heave and pitch, need to be studied. In other words, the research method in this paper can be extended to the calculation of six-degree-of-freedom motion: Wherein, the expression formula of hull heave and pitch motion under regular top wave under this condition is as follows: Fig. 28.2 Curve of wave surface changes under different input amplitude times

(upper panel: input amplitude 0.045 m; lower panel: input amplitude 0.075 m)


$$z = z_a \cos(\omega_e t + \varepsilon_z), \qquad \theta = \theta_a \cos(\omega_e t + \varepsilon_\theta)$$

In these formulas, z_a and θ_a are the amplitudes of the heave and pitch motions, respectively, and ω_e is the encounter frequency of the ship. The dimensionless heave and pitch motion amplitudes are defined as:

$$z_a^{*} = \frac{z_a}{A}, \qquad \theta_a^{*} = \frac{\theta_a \cdot L}{2\pi A}$$
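As a sketch of how these dimensionless amplitudes could be evaluated from simulated motion histories, the snippet below fits the harmonic amplitudes of a synthetic steady-state signal and applies the definitions above. The incident amplitude A, model length L, encounter frequency and motion data are placeholders standing in for CFD output.

```python
import numpy as np

A = 0.02          # incident wave amplitude, m (assumed)
L = 3.0           # model length, m (Table 28.1)
omega_e = 5.0     # encounter frequency, rad/s (assumed)

t = np.arange(0.0, 20.0, 0.01)
# Placeholder steady-state responses standing in for the computed time histories
z = 0.015 * np.cos(omega_e * t + 0.3)        # heave, m
theta = 0.04 * np.cos(omega_e * t + 0.6)     # pitch, rad

def harmonic_amplitude(signal, t, omega):
    """Least-squares fit of a*cos(omega t) + b*sin(omega t); returns the amplitude."""
    basis = np.column_stack([np.cos(omega * t), np.sin(omega * t)])
    coef, *_ = np.linalg.lstsq(basis, signal, rcond=None)
    return float(np.hypot(coef[0], coef[1]))

z_a = harmonic_amplitude(z, t, omega_e)
theta_a = harmonic_amplitude(theta, t, omega_e)
print("z_a*     =", z_a / A)                            # dimensionless heave amplitude
print("theta_a* =", theta_a * L / (2.0 * np.pi * A))    # dimensionless pitch amplitude
```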

According to Fig. 28.3, the variation curves of the dimensionless heave amplitude of the Wigley-III ship model in regular head waves are obtained at the different sailing speeds. The analysis of the curves shows that the CFD results for the heave amplitude agree well with the experimental values of DUT, and the location of the peak is also reproduced well by the simulation. As the sailing speed increases, the peak of the motion amplitude moves in the positive direction of the X-axis.

(three panels: Fr = 0.2, Fr = 0.3, Fr = 0.4)

Fig. 28.3 Variation curve of dimensionless heave motion amplitude of Wigley-III ship model at different sailing speeds


According to Fig. 28.4, the variation curves of the dimensionless pitch amplitude of the Wigley-III ship model in regular head waves are obtained at the different sailing speeds. The analysis of the curves shows that the simulation results agree with the experimental values and that the locations of the peaks are simulated well; as the sailing speed increases, the peak of the motion amplitude again moves in the positive direction of the X-axis. At the same time, the added resistance in waves is one of the main quantities used in the simulation-based analysis of ship performance. Generally speaking, it is evaluated together with the resistance of the free ship model in still water, which is given in Table 28.3.

(three panels: Fr = 0.2, Fr = 0.3, Fr = 0.4)

Fig. 28.4 Variation curve of amplitude of dimensionless pitching motion of Wigley-III ship model at different sailing speeds

Table 28.3 Still-water resistance RS/N of the Wigley-III model: CFD calculation and experiment

RS/N          CFD      Experiment
Fr = 0.2      3.26     3.42
Fr = 0.3      9.73     9.97
Fr = 0.4      19.21    19.6


Comparing the CFD results for the added resistance in waves of the ship sailing in head waves with the model experimental data, it can be seen that the peak values of the two are very close, the locations of the peaks are similar, and the numerical results for the added resistance are consistent with the experiment. Because the viscous effect is incorporated into the numerical calculation of the added resistance in waves, the calculation captures effects that cannot be obtained from potential-flow theory [9].

28.4 Conclusion

To sum up, this paper uses a numerical wave tank to predict the hull motions and to calculate the added resistance in waves. The method is both effective and practical, and is therefore a high-quality computational approach for current research on ship performance. Using the CFD method together with the numerical wave tank, a systematic study is made of the motion of a ship sailing in head waves. The results show that the numerical simulation can directly represent the relationship among the incident wave, the newly radiated wave and the ship-generated wave, and can give fairly accurate predictions. It is thus demonstrated that the numerical wave tank is feasible for predicting ship motions and added resistance in waves. Compared with physical model tests, the method outlined in this paper is simpler and cheaper, and it can be controlled and measured effectively in practical applications.

References

1. Zhang, S.: Numerical simulation of the longitudinal motion and added wave resistance of a Laser sailboat in head waves. Wuhan Institute of Sports (2019)
2. Guo, H.P.: Modeling and control of MMG model based on CFD. Shanghai Jiao Tong University (2019)
3. Sun, X., Dong, X., Yin, Y., et al.: Based on VDM and APSO. Ship Eng. 11 (2019)
4. Wei, B., Wang, H.W., Yin, M., et al.: Design of a marine movement simulation control system. Journal of Transforming Medicine 008(004), 245–247 (2019)
5. Li, Z.F., Ren, H.L., Shi, Y.: Numerical simulation of ship motion in waves based on the acceleration potential and the higher-order boundary element method. Ship Mech. 9, 1034–1044 (2019)
6. Yuan, S., Zou, Z.J.: Effect of fluid viscosity on motion and added wave resistance. Hydrodyn. Res. Prog. (A) 35(02), 43–51 (2020)
7. Xiao, K., Chen, Z.G.: Numerical simulation of swinging and breeding craft. Chin. Ship Res. 1584(01), 138–146 (2020)
8. Bu, S.X., Qi, J.T., Gu, M.: Study on the overturning characteristics and influencing factors of damaged ships in regular waves. China Shipbuilding 61236(04), 65–74 (2020)
9. Zhang, B., Peng, X.Y., Gao, J.: Prediction of ship attitude based on an ELM-EMD-LSTM combination model. Ship Mech. 24205(11), 41–49 (2020)

Chapter 29

Simulation of Automatic Control System of Self-Balancing Robot Based on MATLAB Zihan Wang and Minxing Fan

Abstract The two-wheel self-balancing robot is one of the most popular subjects of robot attitude-control optimization at present. After being disturbed, it must be adjusted by a suitable attitude-correction algorithm so that it can still operate normally under disturbance. After explaining the basic principle of the two-wheel self-balancing robot control system, this paper analyzes the function and performance of the robot's automatic control system on the basis of the design of the actual controller and the modeling and simulation of the whole system.

29.1 Case Study

An analysis of current applications of two-wheel self-balancing robots shows that they have strong flexibility and control capability: whether turning at a large angle or within a small radius, they can quickly adapt to complex space requirements. They are now commonly used in daily commuting, military and other fields, mainly as mobile platforms for carrying equipment. Structurally, the robot considered here has a gold-colored box containing the control chip and sensors, while black rubber wheels are connected to the body through DC motors [1–5]. In this study the motor model itself is not studied; the motor is treated only as a drive unit that provides torque. The coordinate system is constructed as shown in Fig. 29.1. Assuming that the robot moves along the X-axis and rotates around the Y-axis, and in order to better construct the mechanical model, the system is split at point O into the two parts shown in Figs. 29.2 and 29.3, which carry not only vertical and horizontal forces but also torques [6–8]. To study the dynamics of this kind of robot, the meanings of the symbols in the formulas must first be clarified, as shown in Table 29.1. In addition, m_t and J_t denote the mass and moment of inertia of the wheel, and m_s and J_s those of the body.


Fig. 29.1 Structure diagram of a two-wheel self-balancing robot

Fig. 29.2 Force analysis diagram of wheel moving region

Fig. 29.3 Force analysis diagram of vehicle rotation region


Table 29.1 Symbolic meaning table

Symbol    Meaning
T         Driving torque
Fstx      Force of the body on the wheel in the x-direction
Fsty      Force of the body on the wheel in the y-direction
Ftsx      Force of the wheel on the body in the x-direction
Ftsy      Force of the wheel on the body in the y-direction
G         Gravity
Ff        Friction force
Fn        Normal (supporting) force

In this study, the Newton–Euler method was used under the assumptions that the wheels do not slip on the ground, that the wheels and the body are rigid bodies, that the static friction existing in the system is ignored, and that no external disturbance force acts on the system [9–11]. First, consider the translational motion of the wheel. Because this part is made of rubber and plastic and its overall mass is very small, its moment of inertia can be neglected. The equilibrium relations along each axis are then as follows:

$$m_t \ddot{x} = F_{stx} + F_f$$
$$T = r F_f$$
$$F_n = F_{sty} + (m_s + m_t)\, g$$

Second, for the rotational part of the body, the forces acting at A and B in the same direction are first combined and denoted F_tsx and F_tsy, respectively. The equilibrium relation for each axis can then be obtained as follows:

$$m_s\left(\ddot{x}_t + l\cos\theta\,\ddot{\theta} - l\sin\theta\,\dot{\theta}^2\right) = F_{tsx}$$
$$m_s g - F_{tsy} = m_s l\left(\cos\theta\,\dot{\theta}^2 - \sin\theta\,\ddot{\theta}\right)$$
$$J_s \ddot{\theta} = F_{tsy}\, l\sin\theta - F_{tsx}\, l\cos\theta - T$$

Simplifying the above six formulas under the small-angle assumptions cos θ ≈ 1, sin θ ≈ θ and θ̇² ≈ 0 gives:

$$r(m_t - m_s)\,\ddot{x} + r l m_s\, \ddot{\theta} = u$$
$$m l\, \ddot{x} + \left(J_s + m l^2\right)\ddot{\theta} - m g l\,\theta = u$$

Now let a = r(m_t − m_s), b = r l m_s, c = ml, d = J_s + ml², e = −mgl. Substituting into the above formulas gives:

$$a\ddot{x} + b\ddot{\theta} = u, \qquad c\ddot{x} + d\ddot{\theta} + e\theta = u$$


Taking the state variables x₁ = θ, x₂ = θ̇, x₃ = x, x₄ = ẋ and solving the two equations above for θ̈ and ẍ gives

$$\dot{x}_1 = x_2,\qquad \dot{x}_2 = \frac{ae}{bc-ad}\,x_1 + \frac{c-a}{bc-ad}\,u,\qquad \dot{x}_3 = x_4,\qquad \dot{x}_4 = \frac{be}{ad-bc}\,x_1 + \frac{d-b}{ad-bc}\,u$$

The state equation of the system can then be written as

$$x=\begin{pmatrix}x_1\\ x_2\\ x_3\\ x_4\end{pmatrix}=\begin{pmatrix}\theta\\ \dot\theta\\ x\\ \dot x\end{pmatrix},\qquad
\frac{d}{dt}\begin{pmatrix}\theta\\ \dot\theta\\ x\\ \dot x\end{pmatrix}=
\begin{pmatrix}0&1&0&0\\ \dfrac{ae}{bc-ad}&0&0&0\\ 0&0&0&1\\ \dfrac{be}{ad-bc}&0&0&0\end{pmatrix}
\begin{pmatrix}\theta\\ \dot\theta\\ x\\ \dot x\end{pmatrix}+
\begin{pmatrix}0\\ \dfrac{c-a}{bc-ad}\\ 0\\ \dfrac{d-b}{ad-bc}\end{pmatrix}u$$
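A minimal sketch of assembling these state-space matrices from the coefficients a, b, c, d, e defined above is given below. The numerical physical parameters are placeholders, not those of the robot in the paper, and the mass m in the second linearized equation is assumed to be the body mass.

```python
import numpy as np

# Placeholder physical parameters (assumed, not from the paper)
m_t, m_s, J_s = 0.5, 2.0, 0.02     # wheel mass, body mass, body inertia
r, l, g = 0.05, 0.10, 9.81         # wheel radius, distance to centre of mass, gravity
m = m_s                            # mass in the second linearized equation (assumed = body mass)

a = r * (m_t - m_s)
b = r * l * m_s
c = m * l
d = J_s + m * l ** 2
e = -m * g * l

den1 = b * c - a * d
den2 = a * d - b * c

A = np.array([[0.0, 1.0, 0.0, 0.0],
              [a * e / den1, 0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 1.0],
              [b * e / den2, 0.0, 0.0, 0.0]])
B = np.array([[0.0],
              [(c - a) / den1],
              [0.0],
              [(d - b) / den2]])

# Quick sanity check: rank of the controllability matrix [B, AB, A^2B, A^3B]
ctrb = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(4)])
print("controllability matrix rank:", np.linalg.matrix_rank(ctrb))
```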

29.2 Controller and System Modeling and Simulation Analysis

29.2.1 PID

The operating principle of this kind of controller is very simple and its overall stability is strong. However, because it is a single-input single-output controller, overshoot easily appears when it is applied to the strongly coupled, nonlinear two-wheel robot system. By using Maple to obtain information such as the dominant poles from the pole location map, the P, I and D parameters can be tuned. The analysis shows that if the dominant pole moves close to the origin and a sufficient phase margin is available, the controller can be considered to meet the requirements.
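For illustration, a generic incremental (velocity-form) discrete PID update of the kind such a controller would execute is sketched below. The gains and the toy plant response are placeholder assumptions; the pole-based tuning procedure described above is not reproduced here.

```python
class IncrementalPID:
    """Incremental discrete PID: du(k) = Kp*(e - e1) + Ki*e + Kd*(e - 2*e1 + e2)."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.e1 = 0.0   # e(k-1)
        self.e2 = 0.0   # e(k-2)
        self.u = 0.0    # last control output

    def step(self, error):
        du = (self.kp * (error - self.e1)
              + self.ki * error
              + self.kd * (error - 2.0 * self.e1 + self.e2))
        self.u += du
        self.e2, self.e1 = self.e1, error
        return self.u


# Example: drive a placeholder tilt-angle error toward zero (gains are assumptions)
pid = IncrementalPID(kp=12.0, ki=0.5, kd=0.8)
angle = 0.1   # initial tilt error, rad
for _ in range(200):
    u = pid.step(0.0 - angle)
    angle += 0.01 * (-0.5 * angle + 0.02 * u)   # toy first-order response to the control input
print("final tilt error:", angle)
```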

29.2.2 LQR

As one of the key elements of modern control theory, the linear quadratic regulator applies to systems whose state equation is linear and whose performance index is a quadratic function of the states and the control variables.




According to the analysis, the state weighting matrix is chosen as

$$Q = \mathrm{diag}(0.5,\ 0,\ 10,\ 1)$$

and the control weighting matrix is R = (1). From these, the feedback gain matrix K can be calculated, and the control input is obtained from the relationship between the state vector and the input.
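A sketch of how the feedback gain K could be computed for the linearized model with these weighting matrices is given below, using SciPy's continuous-time algebraic Riccati solver. The A and B matrices shown are placeholder values standing in for the robot model, not the paper's numbers.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Placeholder linearized model (assumed values standing in for the robot's A, B)
A = np.array([[0.0, 1.0, 0.0, 0.0],
              [29.4, 0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 1.0],
              [-2.5, 0.0, 0.0, 0.0]])
B = np.array([[0.0], [-3.0], [0.0], [1.2]])

Q = np.diag([0.5, 0.0, 10.0, 1.0])   # state weighting matrix (values as given in the text above)
R = np.array([[1.0]])                # control weighting matrix

P = solve_continuous_are(A, B, Q, R)       # solves A'P + PA - PB R^-1 B'P + Q = 0
K = np.linalg.inv(R) @ B.T @ P             # optimal state-feedback gain, u = -K x
print("K =", K)
```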

29.3 Modeling and Simulation

In this experiment the controller is tested against a MATLAB dynamics model of a virtual two-wheel robot platform, mainly to study and analyze the performance of the controller. It should be noted that minor factors may be neglected during modeling, which introduces modeling errors, so the model parameters need to be calibrated. Initially, the robot is allowed to move freely on the ground, the sensor data are recorded, and the MATLAB dynamics model is then adjusted by comparing its output with the measured results. As can be seen from Fig. 29.4, although the data before and after calibration follow similar trajectories, the calibrated simulation responds faster and better matches the actual behaviour.

Fig. 29.4 Comparative analysis before and after calibration of the model


29.4 Result Analysis

In the experimental platform, the controller is implemented on a single-chip microcomputer that outputs a PWM signal, which after driver amplification drives the motors and thus forms the drive circuit. The control system contains three loops. First, the main control loop only needs the gyroscope to feed the attitude back to the microcontroller. Second, because factors such as incomplete data make it impossible to build an accurate model of an ordinary DC motor, it is difficult to regulate the torque directly through the voltage; installing a current loop between the voltage and the current, based on current feedback, makes it convenient to regulate the torque through the voltage. Third, since the fundamental goal of the design is to control the current, the speed would otherwise remain in an open-loop state, which easily leads to problems such as the system constantly drifting around the balance position. Adding a speed control loop, which uses the grating/Hall sensor together with the microcontroller to determine the motor speed in each period and then applies PID control, deals with such problems. Therefore, the MATLAB-based simulation of the automatic control system of the self-balancing robot is an important basis for future robot research projects.

29.5 Conclusion

To sum up, in the new era, as the level of robotics research and technology at home and abroad rises, controllers together with modeling and simulation technology should be fully exploited. Research institutions and engineers, building on previous research experience and learning from excellent examples at home and abroad, should improve the application level of self-balancing robots in our country while developing new technologies and products. At the same time, in line with the needs of the times, they should actively take part in related research projects in different fields, which will help broaden the thinking behind research and development and provide new ideas for future robot development.

References 1. Liu, T., Zhang, Z., Liu, Y., et al.: Motion simulation of bionic Hexapod robot based on ADAMS/MATLAB co-simulation. J. Phys. Conf. Ser. 1601, 062032 (2020) 2. Jeyed, H. A., Ghaffari, A.: A nonlinear optimal control based on the SDRE technique for the two-wheeled self-balancing robot. Aust. J. Mech. Eng. (2020) 3. Qian, Q., Wu, J., Wang, Z.: Dynamic balance control of two-wheeled self-balancing pendulum robot based on adaptive machine learning. Int. J. Wavelets Multiresolut. Inf. Process. 3, (2019)


4. Hurski, N.N., Skudnyakov, Y.A., Artsiushchyk, V.S., et al.: Control of mechatronic system based on multilink robot-manipulators. Sci. Tech. 18(4), 350–354 (2019) 5. Odry, K., Róbert, F., Rudas, I.J., et al.: Fuzzy control of self-balancing robots: a control laboratory project. Comput. Appl. Eng. Educ. 28(3), 512–535 (2020) 6. He, Y., Quan, Y.: Simulation of multi-robot cooperative scheduling system based on ROS[J]. J. Phys. Conf. Ser. 1678(1):012015, 6 (2020) 7. Jin, X., Chen, K., Zhao, Y., et al.: Simulation of hydraulic transplanting robot control system based on fuzzy PID controller. Measurement 164 (2020) 8. Li, Z., Chen, L., Zheng, Q., et al.: Control of a path following caterpillar robot based on a sliding mode variable structure algorithm. Biosyst. Eng. 186, 293–306 (2019) 9. Zhang, W.B.: The energy-saving simulation of suspension robot path tracking based on improved PID control. Mech. Des. Manuf. Eng. 048(008), 29–32 (2019) 10. Wang, L., Hu, T.: The design of a dual channel synchronous control system based on a new percutaneous puncture surgical robot. Multimedia Tools Appl. 79(4), (2020) 11. Matoui, F., Boussaid, B., Abdelkrim, M.N.: Distributed path planning of a multi-robot system based on the neighborhood artificial potential field approach. Simulation 95(7), 637–657 (2019)

Chapter 30

Design of Bionic Knee Joint Structure Based on the Dynamics of Double Rocker Mechanism Tianshuo Xiao

Abstract The intelligent knee joint is a mechanical device that compensates for the loss of lower-limb function, and a double rocker mechanism can be used to reproduce the motion of the knee joint. To solve the problems of excessive energy consumption and poor stability of the prosthetic knee, an optimization model is established whose goal is to minimize the peak driving torque of the knee mechanism: under performance and structural constraints, the composite shape method is used to select the best model and structural parameters of the prosthetic knee mechanism, and a virtual motion simulation is then carried out with the ADAMS program.

30.1 Dynamic Analysis of the Double Rocker Mechanism of the Intelligent Prosthetic Knee Joint

The human knee joint provides different damping torques and driving torques according to the actual situation in order to complete complex movements such as running and going up or down stairs. The main difference between the intelligent prosthetic knee and other knee joints is that it can provide active torque. Figure 30.1 shows the diagram of the intelligent prosthetic knee joint [1–4]. As shown in Fig. 30.1, the upper body of the joint is fixed to the thigh and the lower body to the calf. When performing complex actions, such as running and climbing or descending steps, the intelligent knee joint requires a large driving torque in the direction of rotation; the required active driving torque is provided by a DC motor. The MR damper is a semi-active control device based on magnetorheological fluid. It has the characteristics of fast response, a wide adjustment range, a large compensation range and good stability, and is widely used in biomechanical engineering, flexible robots and other fields.




Fig. 30.1 Diagram of the knee joint of an intelligent prosthesis

The intelligent knee joint therefore not only has the dynamic characteristics of an active knee joint but also the compliant characteristics of a passive knee joint, which accords with the motion characteristics of the normal human knee joint and gives excellent bionic performance [5–7]. The intelligent prosthetic knee joint is simplified into a double rocker mechanism, whose structure diagram is shown in Fig. 30.2.

Fig. 30.2 A schematic diagram of the double rocker mechanism of the knee joint


In Fig. 30.2, BCE is the thigh connecting rod and AD is the frame rod of the intelligent prosthetic knee joint. A coordinate system is established with point A as the origin; l_i (i = 1, 2, 3, 4, 5) denote the lengths of rods AB, BC, CD, DA and CE, respectively, θ_i (i = 1, 2, 3, 4) denote the position angles of the rods, and point ICR is the instantaneous center of rotation of the double rocker mechanism.

30.2 Optimal Design of the Prosthetic Knee Joint Mechanism

30.2.1 Objective Function

In practical use, the intelligent knee joint must provide a large driving torque in the direction of rotation when performing demanding movements such as running; this torque is supplied by a DC motor. However, because of the limits on the energy supply and energy consumption of the intelligent prosthetic knee joint, the peak driving torque required to complete a prescribed motion should be made as small as possible. This reduces the required output power of the DC motor while still satisfying the support and rotation requirements of the intelligent prosthetic knee joint and reducing energy consumption [8, 9]. The active rod AB is selected as the driving rod connected to the DC motor, and the minimum peak driving torque M1 of the active rod is taken as the optimization goal. The driving torque M1 can be regarded as a function of the position angle θ1 of the active rod during the motion. If max(M1(θ1)) denotes the maximum driving torque of the active rod, the objective function of the driving-torque optimization problem is

$$\min f(x) = \max\left(M_1(\theta_1)\right)$$

30.2.2 Determination of the Optimization Parameters

As shown in Fig. 30.2, the double rocker mechanism of the prosthetic knee joint is mainly determined by six design parameters, namely the active rod l1, the thigh connecting rod (l2 + l5), the rod l3, the follower rod l4, and the installation parameter θ4 of the intelligent prosthetic knee joint. According to the actual machining requirements and the installation requirement l5 = l2 of the intelligent prosthetic knee joint, the optimization parameter vector x of the double rocker mechanism is

$$x = [l_1, l_2, l_3, l_4, \theta_4]$$


30.2.3 Constraints

The constraints on the double rocker mechanism of the knee joint include performance constraints on the design variables themselves and structural constraints that limit their value ranges. The specific constraints are as follows.

Installation constraints. According to the installation requirements of the double rocker mechanism of the prosthetic knee joint, the installation inclination angle of the prosthetic knee satisfies

$$5^{\circ} < \theta_4 < 15^{\circ}$$

Structural constraints. The range of motion of every movable member of the double rocker mechanism must lie within the leg space. Under this condition, and combined with the structure of the human knee joint, the sizes of the femur and tibial plateau and their approximate ranges of motion, the ranges of the design variables are determined as

$$30 < l_1 < 35,\qquad 15 < l_2 < 20,\qquad 45 < l_3 < 50,\qquad 25 < l_4 < 30$$

Assembly constraints. The prosthetic knee joint is mainly composed of a double rocker kinematic chain, which must satisfy the double rocker condition: the sum of the lengths of the longest and shortest rods is greater than the sum of the other two rod lengths. With BC the shortest rod and CD the longest rod, the assembly constraints of the mechanism are

$$l_2 + l_3 > l_1 + l_4,\qquad l_2 < l_1,\ l_3,\ l_4$$

Performance constraints. According to the motion-performance requirements of the double rocker mechanism, the active rod should swing between its upper and lower limit positions, and its range of motion should satisfy the flexion-angle requirement of the human knee joint. For this purpose, the position angle θ1 of the active rod should satisfy

$$0^{\circ} < \theta_1 < 105^{\circ}$$
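As a sketch of how this constrained minimax design problem could be set up numerically, the snippet below minimizes the peak driving torque over the swing range under the box and assembly constraints using a simple penalty and a global optimizer. The torque model M1(θ1; x) is only a smooth placeholder, since the paper's full kinematic and dynamic expression is not reproduced here.

```python
import numpy as np
from scipy.optimize import differential_evolution

def peak_torque(x):
    """Placeholder for max over theta1 in (0, 105 deg) of M1(theta1); not the paper's model."""
    l1, l2, l3, l4, theta4 = x
    theta1 = np.radians(np.linspace(0.0, 105.0, 200))
    # Toy torque curve that depends smoothly on the design variables
    M1 = (l3 / l2) * np.abs(np.sin(theta1 + theta4)) + 0.05 * l1 * np.cos(theta1) ** 2 + 0.02 * l4
    return float(np.max(M1))

def objective(x):
    l1, l2, l3, l4, theta4 = x
    # Assembly (double rocker) constraints handled with a simple penalty
    penalty = 0.0
    if not (l2 + l3 > l1 + l4):
        penalty += 1e3
    if not (l2 < min(l1, l3, l4)):
        penalty += 1e3
    return peak_torque(x) + penalty

# Box constraints from the structural and installation constraints above
bounds = [(30, 35), (15, 20), (45, 50), (25, 30), (np.radians(5), np.radians(15))]
result = differential_evolution(objective, bounds, seed=0, tol=1e-6)
print("best design x =", result.x)
print("peak torque   =", result.fun)
```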


30.3 Virtual Simulation Analysis of the Prosthetic Knee Joint Mechanism

In order to verify the accuracy and rationality of the optimal design of the double rocker mechanism of the prosthetic knee joint, the optimized knee joint model is imported into the ADAMS program for simulation analysis. The material of each moving member is set to 20# steel, and a 1000 N load is applied to the upper part of the thigh connection in the direction of gravity. In order to compare the dynamic performance of the intelligent knee joint before and after optimization, the swing range of the active rod angle θ1 is set to 105°. In addition, for convenience of motor selection and control, the active rod is driven at a constant angular velocity of 30 rad/s; the simulation time is 3.5 s and the simulation is run in 105 steps. Processing the simulation results yields the comparison curves of the driving torque of the active rod before and after optimization shown in Fig. 30.3. The maximum driving torque of the intelligent prosthetic knee joint before optimization is 38.87 N·m, and the driving torque varies in the range 10–38.87 N·m. The maximum driving torque of the knee joint optimized by the composite shape method is 23.16 N·m, which occurs when the active rod has rotated by 71°. Compared with the values before optimization, the peak driving torque and its variation range are reduced by about 40% and 36%, respectively. It can be seen that the optimized prosthetic knee joint reduces the peak driving torque and narrows the variation range of the driving torque, which ensures the stability of the motion of the intelligent prosthetic knee joint.


Fig. 30.3 Comparison curves of the driving torque before and after optimization


30.4 Conclusion

Through the analysis of the dynamic characteristics of the intelligent prosthetic knee joint, the correctness and rationality of the optimization model are verified by virtual simulation. The optimized prosthetic knee joint can effectively improve the endurance of the prosthesis; at the same time, it narrows the variation range of the driving torque and ensures the stability of the motion.

References

1. Gonalves, R.S., Soares, G., Carvalho, J.C.: Conceptual design of a rehabilitation device based on cam-follower and crank-rocker mechanisms hand actioned. J. Braz. Soc. Mech. Sci. Eng. 41(7), 277 (2019)
2. Mutshinda, C.M., Finkel, Z.V., Widdicombe, C.E., et al.: Bayesian inference to partition determinants of community dynamics from observational time series. Community Ecol. 20(3), 238–251 (2019)
3. Liu, D., Ma, Z., Zhang, W., et al.: Superior antiwear biomimetic artificial joint based on high-entropy alloy coating on porous Ti6Al4V. Tribol. Int. (7), 106937 (2021)
4. Endress, B.A., Averett, J.P., Naylor, B.J., et al.: Non-native species threaten the biotic integrity of the largest remnant Pacific Northwest bunchgrass prairie in the United States. Appl. Veg. Sci. 23 (2020)
5. Yang, T., Zhang, Q., Wan, X., et al.: Comprehensive ecological risk assessment for semi-arid basin based on conceptual model of risk response and improved TOPSIS model-a case study of Wei River Basin, China. Sci. Total Environ. 719, 137502.1–137502.16 (2020)
6. Asatryan, V., Dallakyan, M.: Principles to develop a simplified multimetric index for the assessment of the ecological status of Armenian rivers on example of the Arpa River system. Environ. Monit. Assess. 193(4) (2021)
7. Chang, R.J., Wang, Y.C.: Experimental investigation on the lumped model of nonlinear rocker-rocker mechanism with flexible coupler. J. Dyn. Syst. Meas. Control (2020)
8. Daryabor, A., Arazpour, M., Aminian, G., et al.: Design and evaluation of an articulated ankle foot orthosis with plantarflexion resistance on the gait: a case series of 2 patients with hemiplegia. J. Biomed. Phys. Eng. 10(1) (2020)
9. Tian, M., Gao, B.: Dynamics analysis of a novel in-wheel powertrain system combined with dynamic vibration absorber. Mech. Mach. Theory 156, 104148 (2021)

Chapter 31

Research on Temperature Decoupling Control System of PET Bottle Blowing Machine Based on the Improved Single Neuron

Yuanwei Li

Abstract The neural network model is applied to the parameter tuning of the PID controller, and a control algorithm based on an improved single-neuron PID is proposed. Through its application to the temperature decoupling control system of a bottle blowing machine, simulation results and conclusions are obtained. The simulation results show that the control algorithm has a strong self-learning capability and adaptive decoupling ability, achieves a good control effect, and can be widely applied to the decoupling control of multivariable systems.

31.1 Introduction

Today, as awareness of environmental protection continues to grow, a low-carbon economy, energy saving and reduced pollution have become important indicators of product quality. PET bottles are energy-saving, environmentally friendly, and convenient to transport and store, among many other advantages, which has made them a mainstream packaging container. The production of PET bottles cannot, of course, be separated from the PET bottle blowing machine, and in bottle blowing equipment the heating stage is a key link. Controlling the temperature properly maximizes the heat absorption of the PET preform, which greatly reduces energy consumption and avoids the influence of changes in the ambient temperature. According to the characteristics of the temperature control system of the bottle blowing machine, this paper adopts PID decoupling control based on a single neuron and presents the corresponding simulation results and conclusions.





31.2 Modeling of the Temperature Control System of the Bottle Blowing Machine

At present, the heating of high-speed PET preforms adopts the technology of a four-stage furnace body, nine-level heating, circulating air cooling and a compact arrangement. The heating furnace is designed mainly to heat the preform to the ideal temperature so that it can be stretched and blown into a bottle. According to the properties of PET, nine layers of infrared lamps are used to heat the preform radiatively. During heating, the power of each layer of lamps is distributed according to the mass, thickness and shape of the preform so that the preform obtains the most suitable heat distribution. To prevent crystallization and deformation of the screw thread and the neck ring of the bottle, a cooling fan is used to air-cool the bottle mouth and body. According to the above analysis, the two inputs of the temperature control system of the bottle blowing machine are the voltage U1 of the infrared lamps and the voltage U2 of the cooling fan, and the two outputs are the preform temperature Y1 and the wind speed Y2 of the cooling fan. The wind speed of the cooling fan has a great influence on the preform temperature, and the temperature in turn has a certain influence on the wind speed, so this is a coupled system and the two loops must be controlled according to the requirements of the system. The system can therefore be treated as a two-input, two-output controlled object, whose transfer function block diagram is shown in Fig. 31.1. In order to decouple the system, the mathematical model of the controlled object must first be known. The step response method is used to obtain the mathematical model of the process; according to the experimental data, the transfer function matrix of the controlled object is:

$$\begin{pmatrix} G_{11}(s) & G_{12}(s) \\ G_{21}(s) & G_{22}(s) \end{pmatrix} = \begin{pmatrix} \dfrac{e^{-s}}{140s+1} & \dfrac{1}{0.6s+1} \\[2mm] \dfrac{0.8}{10s+1} & \dfrac{5}{12s+1} \end{pmatrix} \tag{31.1}$$

Fig. 31.1 Transfer function block diagram of the controlled object


31.3 PID Decoupling Controller Based on a Single Neuron

As a very active interdisciplinary field, neural networks have been widely used in control because of their strong nonlinear mapping ability, parallel processing ability and self-learning ability. However, neural network adaptive control involves a large amount of computation, and because practical neural-network computing hardware is not yet available, it is still difficult to apply multilayer neural network adaptive control to real-time online control [1]. In order to meet the requirements of fast process control, adaptive control based on a single neuron is adopted here: it retains the advantages of the neural network while meeting the real-time requirements of fast processes. A single neuron is a multi-input, single-output nonlinear processing unit with self-learning and adaptive ability, so it can be used to realize adaptive PID control, and the decoupling control of a multivariable system can be realized with several single-neuron PID controllers [2]. Figure 31.2 shows the block diagram of a two-variable single-neuron PID decoupling control system composed of two single-neuron PID controllers. Taking the first single-neuron PID controller as an example, its block diagram is shown in Fig. 31.3. The state variables w1(k), w2(k), w3(k) required for neuron learning and control [3] are as follows:

Fig. 31.2 PID decoupling control system block diagram of double variable single neuron

Fig. 31.3 PID controller block diagram based on single neuron


Fig. 31.4 Simulation chart of Single neuron PID decoupling control system

$$w_1(k) = z(k),\qquad w_2(k) = z(k) - z(k-1),\qquad w_3(k) = z(k) - 2z(k-1) + z(k-2) \tag{31.2}$$

In the figure, T is the proportional coefficient of the neuron, T > 0, and z(k) is the output value of the neuron at time k. The control signal generated by the neuron through associative search is:

$$z(k) = z(k-1) + \Delta z(k) \tag{31.3}$$

Neuron adaptive control realizes its adaptive and feedback functions by adjusting the weighting coefficients, which must be learned online. Experience shows that the online learning correction of the PID parameters is mainly related to the deviation and its rate of change. Therefore, this paper uses the improved supervised Hebb learning rule to adjust the weighting coefficients, and the learning algorithm can be written as follows:

$$\begin{cases}
v_1(k) = v_1(k-1) + \partial_i\, z(k)\, d(k)\, [z(k) + \Delta z(k)] \\
v_2(k) = v_2(k-1) + \partial_p\, z(k)\, d(k)\, [z(k) + \Delta z(k)] \\
v_3(k) = v_3(k-1) + \partial_d\, z(k)\, d(k)\, [z(k) + \Delta z(k)] \\[2mm]
d(k) = T\,\dfrac{\sum_{i=1}^{3} v_i(k)\, w_i(k)}{\sum_{i=1}^{3} \left|v_i(k)\right|}
\end{cases} \tag{31.4}$$

In the formula, v_i(k) is the weighting coefficient corresponding to w_i(k); ∂_p, ∂_i and ∂_d are the learning rates of the proportional, integral and derivative terms, respectively, and different values are used to adjust the different weighting coefficients. T is the proportional coefficient of the neuron, and its selection is very important: the larger T is, the faster the system responds, but the overshoot increases and the system may even become unstable. When the time delay of the controlled object increases, T must be reduced to keep the system stable; however, if T is chosen too small, the response of the system becomes slower [4]. As for the initial values of the weighting coefficients, because of the self-learning capability of the neuron their choice does not affect the subsequent learning of the weights or the control effect, so the initial weighting coefficients can be chosen as small random values.
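The following sketch implements a single-neuron adaptive PID of the kind described above, with an improved supervised-Hebb-style weight update. The gains, learning rates and variable names are illustrative assumptions rather than an exact reproduction of the symbols in Eq. (31.4); two such controllers, one per loop, would give the dual-variable decoupling structure of Fig. 31.2.

```python
import numpy as np

class SingleNeuronPID:
    """Single-neuron adaptive PID with a supervised-Hebb-style weight update (illustrative)."""

    def __init__(self, K=0.16, rates=(0.4, 0.4, 0.4)):
        self.K = K                                  # proportional coefficient of the neuron
        self.rates = np.array(rates)                # learning rates for the three channels (assumed)
        self.w = np.random.uniform(0.0, 0.1, 3)     # small random initial weights
        self.e1 = 0.0                               # e(k-1)
        self.e2 = 0.0                               # e(k-2)
        self.u = 0.0                                # previous control output

    def step(self, e):
        # Neuron state variables: error, first difference, second difference
        x = np.array([e, e - self.e1, e - 2.0 * self.e1 + self.e2])
        # Normalised weights and incremental control output
        w_norm = self.w / (np.sum(np.abs(self.w)) + 1e-12)
        self.u += self.K * float(w_norm @ x)
        # Improved supervised Hebb rule: weights driven by e(k), u(k) and (e(k) + de(k))
        self.w += self.rates * e * self.u * (e + (e - self.e1))
        self.e2, self.e1 = self.e1, e
        return self.u

controller = SingleNeuronPID()   # one controller per controlled variable
```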

31.4 Simulation Study on the Temperature Decoupling Control System of the Bottle Blowing Machine

Setting the sampling period to T = 0.1 s, the transfer function model of the controlled object (Eq. (31.1)) is transformed into difference equations, which can be expressed as:

$$y_1(k) = y_1(k-1) + 0.007\,u_1(k-1) + 0.1\,u_2(k)$$
$$y_2(k) = y_2(k-1) + 1.7\,u_1(k-1) + 0.4\,u_2(k) \tag{31.5}$$

According to the single-neuron PID decoupling control principle described above, with the control algorithm given by Eq. (31.4), MATLAB can be used to build the Simulink simulation diagram shown in Fig. 31.4 to implement the controller. The decoupling control algorithm is written as an S-function whose input signals are [e(k), e(k − 1), e(k − 2)] and whose output signal is the control quantity u(k), and the difference equations of the controlled object are encapsulated into a separate module [5]. The given input signals are unit steps, namely:

$$R_1 = \begin{pmatrix} r_1(k) \\ r_2(k) \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \qquad R_2 = \begin{pmatrix} r_1(k) \\ r_2(k) \end{pmatrix} = \begin{pmatrix} 0 \\ 1 \end{pmatrix} \tag{31.6}$$
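To illustrate the coupling in the controlled object itself, the difference equations of Eq. (31.5) can be stepped directly. The open-loop sketch below applies a unit step to the first control input only and shows that both outputs respond, which is exactly the coupling that the decoupling controller must remove.

```python
def plant_step(y1, y2, u1_prev, u2):
    """One step of the controlled-object difference equations, Eq. (31.5), with T = 0.1 s."""
    y1_next = y1 + 0.007 * u1_prev + 0.1 * u2
    y2_next = y2 + 1.7 * u1_prev + 0.4 * u2
    return y1_next, y2_next

# Open-loop unit step on u1 only (u2 = 0): a step on u1 moves not only y1 but also y2
y1 = y2 = 0.0
u1_prev = 0.0
for k in range(5):
    y1, y2 = plant_step(y1, y2, u1_prev, u2=0.0)
    u1_prev = 1.0          # unit step applied from k = 0, seen by the plant with a one-sample delay
    print(f"k={k}: y1={y1:.4f}, y2={y2:.4f}")
```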

The input signals r1(k) and r2(k) correspond to the given temperature value and the given wind speed value, respectively, and the corresponding outputs are y1(k) and y2(k). The unit step signals R1 and R2 are applied at t = 1 s, and the simulation time is 1000 s. The parameters of the single neurons are configured as follows: learning rates θp1 = θp2 = 0.4, θi1 = θi2 = 0.4, θd1 = θd2 = 0.4; proportional coefficients K1 = K2 = 0.16. When the input signal is R1, the decoupling response curve is shown in Fig. 31.5, and when the input signal is R2, the decoupling response curve is shown in Fig. 31.6. As can be seen from the figures, regardless of whether the input signal is R1 or R2, with the PID controller based on the improved single neuron the system responds quickly, has no overshoot and shows strong adaptive ability, so the requirements of decoupling control are well satisfied.

Fig. 31.5 Decoupling response curve when the input signal is R1


Fig. 31.6 Decoupling response curve when the input signal is R2


Fig. 31.7 Decoupling response curve with an interference signal when the input signal is R1



At the same time, when the input signal is R1, a pulse interference signal with an amplitude of 5 and a duration of 1 s is added to r2(k), that is, to the given wind speed, at t = 500 s. The resulting decoupling curve of the system is shown in Fig. 31.7. As can be seen from the figure, the wind speed quickly recovers to its original state while the temperature is unaffected, indicating that the system has strong anti-interference ability.

31.5 Actual Debugging Results

For the temperature control system of the PET bottle blowing machine, an infrared temperature acquisition device is used to measure the preform temperature and a wind speed sensor to measure the wind speed; the signals are sent to an embedded controller, the control program is written using the single-neuron control algorithm, and the temperature and wind speed are decoupled. The actual debugging results show that when the given wind speed is changed [6], the temperature remains at its original set value through the learning adjustment of the neuron, which meets the requirement of decoupling control.

31.6 Conclusions

In this paper, a neural network controller based on a single neuron is used, and an improved correction algorithm for the single-neuron weighting coefficients is proposed: the online correction of the weighting coefficients is not based purely on the learning principle of the neural network but is also formulated with reference to practical experience. Simulation and experimental results show that, by adjusting the weights of the neural network through self-learning, this method achieves a good decoupling control effect for the temperature control system of the bottle blowing machine.


Moreover, the method also has strong anti-interference ability and realizes rapid and accurate control of the temperature of the heating furnace of the bottle blowing machine.

References 1. Chen, Z. S.: Research on three parameter feedforward decoupling adaptive PID control strategy of VAV air conditioning system based on improved single neuron PID algorithm. Lanzhou University of technology, (2020) 2. Wang, G. J., Hu, J., Zhou, M. Y., Wang, Y.: Research on improved nucleic acid amplification temperature control system with single neuron adaptive PID control. In: 2021 6th International Conference on Intelligent Computing and Signal Processing (ICSP), (2021) 3. Ji, P.F., Fu, Y.Q., Yang, B.F.: Research on multivariable decoupling control of greenhouse system based on single neuron PID. China J Agric. Chem. 41(08), 143–147 (2020) 4. Chen, D.P., Zhang, J.G., Liang, X.: Decoupling optimal control of VAV air conditioning based on analytical method. Computer engineering and application 54(18), 235–241 (2018) 5. Yin, H.Q., Yi, W.J., Jia, F., Li, C.C., Wang, K.J.: Simulation of Brushless DC motor control system based on single neuron neural network. Sci. Technol. Eng. 21(07), 2747–2753 (2021) 6. Wang, W. X.: Research and implementation of leachate pressure control system based on single neuron PSD predictive control. Guilin University of technology, (2020)

Chapter 32

Research on the Simulation Method of the Drift Trajectory of Persons Overboard Under the Sea Search Mode of Unmanned Surface Vehicle (USV)

Ying Chang, Yani Cui, and Jia Ren

Abstract Before a maritime search and rescue operation is conducted, it is necessary to estimate the possible drift direction and location of the wrecked target from the available time and the marine environment and meteorology of the incident area. In this paper, the wrecked target studied is a person overboard. Using the Monte Carlo stochastic particle method, persons overboard are abstracted as simulation particles whose motions are independent of each other and have the drift characteristics of a person overboard. The drift velocity model is used to calculate the trajectory and position distribution of the particles after a period of time and thus to obtain the area to be searched, which provides the Unmanned Surface Vehicle with the most likely location of the person overboard.

32.1 Introduction

Maritime search and rescue activities are generally divided into two parts, search and rescue [1], and rescue can only be carried out effectively after the wrecked target has been located. In order to improve the success rate of maritime search and rescue, the search should be fast and accurate so as to gain time for the rescue and thus ensure its success. Unlike a vessel, which carries its own communication equipment, a person overboard usually cannot report his position, so estimating the drift position of the person overboard at sea becomes an important part of the search activity.




The use of intelligent platforms such as the Unmanned Surface Vehicle (USV) to conduct searches at sea, and the application of intelligent devices to the estimation of the drift trajectory of a person overboard, can improve the efficiency of the search. The drift of a person overboard at sea is a motion under the combined effect of the external force vectors of wind, waves and currents; theoretically, the drift trajectory can be predicted from dynamic principles as long as the forces on the person overboard are analysed accurately. Many scholars at home and abroad have carried out extensive research on predicting the drift trajectory of persons overboard. Allen et al. [2] applied linear regression to sea drift test data to obtain the leeway coefficient of a person overboard, concluded that the drift velocity of a person overboard is a linear superposition of the wind-induced and current-induced drift velocities, and then used the Lagrangian tracking method to calculate the drift trajectory from the drift velocity. Zhang et al. [3] established a drift dynamics model of a target in distress based on the wind pressure difference and the current velocity field of the incident sea area, and obtained the area to be searched by predicting the position of the wrecked target and the current drift velocity on this basis. Building on the work of Allen and Breivik, Maio et al. [4] proposed a drift model based on the stochastic particle method and conducted field experiments in the Tyrrhenian Sea and the Sicily Channel to verify the accuracy of this drift model in predicting the area to be searched under different parameters. Brushett et al. [5] conducted field experiments in the Pacific Ocean to determine the downwind and crosswind drift coefficients of three sizes of small vessels and hence the area to be searched. These studies show that different marine environments lead to different drift characteristics: a maritime drift model must take into account the specific marine environment and the type of wrecked target, and must be driven by data from the sea area concerned for the calculation to be valid. This paper establishes a simulation environment based on the meteorological and hydrological data of the Qiongzhou Strait and uses the Monte Carlo method to model the Gaussian distribution of the drift velocity of the person overboard. Through computer simulation, the area to be searched and the most likely location of the person overboard are estimated.

32.2 Force Analysis of Drifting Motion at Sea

The drifting movement of persons overboard is mainly the result of the combined effect of the total current forces, the wind pressure difference at the sea surface and the waves.

Total current forces. The main types of total currents are currents, wind-drift currents and tidal currents. The currents here refer to the relatively steady directional flow of seawater over a


wide range of sea areas. Since most persons overboard float at the sea surface, the current velocity at 0.5 m below the surface is considered the main factor influencing the drifting motion; it can be estimated from measured data or from hydrographic records. The current force is

$$F_C = \frac{1}{2} C_{dC} S_C \rho_C \left| V_{Person} - V_C \right| \left( V_{Person} - V_C \right) \tag{32.1}$$

In this equation, V_Person is the drift velocity of the person overboard, V_C is the total current velocity at 0.5 m below the surface, C_dC is the drag coefficient, S_C is the cross-sectional area of the person exposed below the surface, ρ_C is the seawater density, |V_Person − V_C| is the modulus of the difference between the drift velocity vector and the current velocity vector, and (V_Person − V_C) is the drift velocity vector minus the current velocity vector. Wind-drift currents are the flow of seawater caused by continuous wind action on the sea surface. The IAMSAR Manual states that the direction of wind-drift currents is related to the latitude of the sea area: in areas north of 10 degrees north latitude, the wind-drift current is directed approximately 30 degrees to the right of downwind, and within 10 degrees north and south latitude approximately 30 degrees to the left of downwind. Tidal currents, also known as rotary currents, are movements of the water produced by celestial gravitation and are relatively constant in speed and direction. Tidal currents have a significant effect on the drifting motion of a target in distress near shore or in shallow water, and a negligible effect if the target is far from shore. The composition of the total current is complex, and in order to simulate the drifting motion of the target more accurately it is common practice to drop buoys in the incident area to obtain the total current speed and direction in real time.

Force of the wind pressure difference. Wind forces act on the part of the target exposed above the sea surface, causing it to drift in the downwind direction. In previous studies, wind data at 10 m above sea level are generally used, where the wind speed is essentially linearly related to the wind-driven drift velocity of the target [2]. If the part of the target exposed to the wind field is irregularly shaped, its drift trajectory under wind action is not exactly along the downwind direction [6] but always at an angle to it; moreover, the same angle occurs with equal probability to the left and to the right of the downwind direction, as shown by the wind-pressure offsets (left) and (right) in Fig. 32.1. Gdynia Maritime University in Poland conducted experiments with life rafts as targets and found that in a sustained force 6 wind the deviation of the raft's drift trajectory from the wind direction could reach 70 degrees [7]. If the target in distress is a person overboard, the directional deviation of the wind-driven drift is relatively small because of the smaller exposure above the sea surface. The wind force is:


Fig. 32.1 Two cases of the total drift vector

$$F_W = \frac{1}{2} C_{dW} S_W \rho_W \left| V_W - V_{Person} \right| \left( V_W - V_{Person} \right) \tag{32.2}$$

In this equation, V_Person is the drift velocity of the person overboard, V_W is the wind velocity at 10 m above the sea surface, C_dW is the drag coefficient, S_W is the cross-sectional area of the person exposed above the sea surface, ρ_W is the density of air, |V_W − V_Person| is the modulus of the difference between the wind velocity vector and the drift velocity vector, and (V_W − V_Person) is the wind velocity vector minus the drift velocity vector.

Wave and Coriolis force effects. The drifting effect of waves on a target in distress at sea is more complex and is mainly related to the ratio between the wavelength and the size of the target. It has been found that when the wavelength is shorter than the size of the target, the wave thrust has a significant effect on the drifting motion; if the floating object is small, such as a person overboard, the waves have essentially no effect on the drift [8]. The Coriolis force describes the deflection, due to inertia, of a mass moving in a straight line relative to a rotating system. Because of the rotation of the Earth, all targets in distress at sea are affected by the Coriolis force, but its influence is smaller than that of the other environmental factors and only has to be considered in long search missions [9].

32.3 Drift Velocity Model Based on the above force analysis, it is known that at any given moment on the sea surface, a man overboard is mainly affected by the wind, the current, the waves and the resistance of the water. The multiplicity of forces ultimately leads to a shift in the position of the person overboard. By analysing the data of wind and current, the sea current and wind pressure difference are the main factors affecting the drifting motion

32 Research on the Simulation Method of the Drift …

325

of the person overboard. By analysing the data of wind and current, the sea current and wind pressure difference are the main factors affecting the drifting motion of the person overboard. In order to simplify the calculation of the forces on the person overboard, only the wind and currents are considered for the drifting motion. The force analysis of the person overboard at sea is shown in Fig. 32.1. In general, a person overboard at the surface can be considered as an unpowered drifting target. Theoretically, a certain equilibrium between the wind and the drag of the sea will cause the object to drift as time passes. The drift dynamics model for a person overboard satisfies Eq. (32.3). 1 1 CdW SW ρW |VW − V Per son |(VW − V Per son ) = CdC SC ρC |V Per son − VC |(V Per son − VC ) 2 2 (32.3) According to Eq. (32.3), V Per son satisfies Eq. (32.4), in which a satisfies Eq. (32.5). V Per son = aVc + (1 − a)Vw a=

1    1/2 1 + CdW SWρW / CdC SCρC

(32.4) (32.5)
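A small sketch of evaluating the weighting factor a of Eq. (32.5) and the resulting drift velocity of Eq. (32.4) is given below. The drag coefficients, areas, densities and velocity vectors are illustrative assumptions, not measured values from the test area.

```python
import numpy as np

# Illustrative parameters (assumptions, not measured values)
Cd_w, S_w, rho_w = 1.0, 0.1, 1.2      # wind: drag coefficient, exposed area (m^2), air density (kg/m^3)
Cd_c, S_c, rho_c = 1.0, 0.4, 1025.0   # current: drag coefficient, submerged area (m^2), seawater density

a = 1.0 / (1.0 + np.sqrt((Cd_w * S_w * rho_w) / (Cd_c * S_c * rho_c)))  # Eq. (32.5)

V_c = np.array([0.30, 0.10])          # current velocity (east, north), m/s (assumed)
V_w = np.array([6.0, -2.0])           # wind velocity at 10 m (east, north), m/s (assumed)

V_person = a * V_c + (1.0 - a) * V_w  # Eq. (32.4)
print(f"a = {a:.4f}, drift velocity = {V_person} m/s")
```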

32.4 Computer Simulation

According to the central limit theorem, the drift velocity of a person overboard is assumed to follow a Gaussian distribution, and the mean and variance of V_Person satisfy Eqs. (32.6) and (32.7), respectively, where the wind speed, wind direction, current speed and current direction are taken from measured data. In the test, wind speed and direction data were obtained at three-minute intervals over an hour, together with current speed and direction data for the same period, and the statistical parameters of the influencing factors such as wind speed and current speed were estimated from these samples.

$$\mathrm{Exp}[V_{Person}] = a \cdot \mathrm{Exp}[V_C] + (1 - a) \cdot \mathrm{Exp}[V_W] \tag{32.6}$$

$$\mathrm{Var}[V_{Person}] = a^2 \cdot \mathrm{Var}[V_C] + (1 - a)^2 \cdot \mathrm{Var}[V_W] \tag{32.7}$$
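The mean and variance in Eqs. (32.6) and (32.7) can be estimated directly from the measured samples. The sketch below uses synthetic three-minute wind and current records in place of the field data, with an assumed weighting factor a.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic one-hour records at 3-minute intervals (20 samples), standing in for measured data
V_c_samples = 0.3 + 0.05 * rng.standard_normal(20)   # current speed, m/s (assumed)
V_w_samples = 6.0 + 1.0 * rng.standard_normal(20)    # wind speed at 10 m, m/s (assumed)

a = 0.98   # weighting factor from Eq. (32.5) (assumed value)

mean_v = a * V_c_samples.mean() + (1 - a) * V_w_samples.mean()                   # Eq. (32.6)
var_v = a**2 * V_c_samples.var(ddof=1) + (1 - a)**2 * V_w_samples.var(ddof=1)    # Eq. (32.7)
print(f"Exp[V_Person] = {mean_v:.3f} m/s, Var[V_Person] = {var_v:.5f} (m/s)^2")
```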

The person overboard is abstracted as N particles whose motions are independent of each other, each particle having the drift characteristics of the person overboard. The positions of all particles are recorded as Psn_i^N, and a coordinate system is established with the position at which the person fell overboard as the origin, with east as the positive direction of the X-axis, the


west as the negative direction of the X-axis, north as the positive direction of the Y-axis and south as the negative direction of the Y-axis. Assume that at time t_i the horizontal and vertical velocity components of the person overboard are V_PersonX and V_PersonY, respectively; the drift position of the person after time t is then calculated and projected onto the coordinates. The initial positions of all particles are set to the last known position (LKP), Psn_0^N = [lon_0^N, lat_0^N], where lon_0^N and lat_0^N denote the longitude and latitude of the initial positions of the N particles. From Exp[V_Person] and Var[V_Person], a distribution model of the drift velocities of the N particles is obtained. With the last known position of the person overboard at the origin, the number of simulated particles is set to N = 1, 10 and 1000, and the particle positions are computed every 2 s; the drift trajectories and possible positions after 30 min are shown in Figs. 32.2, 32.3 and 32.4, where the drift trajectories are shown as solid lines and the distributions of possible positions as star-shaped points. It can be seen that as the number of particles in the simulation increases, the search target area as well as the most likely area can be calculated. At N = 1000, the drift trajectories are hidden and only the position distribution after 30 min of simulation is displayed. A rectangle is drawn through the four vertices (min(lon_t^N), min(lat_t^N)), (min(lon_t^N), max(lat_t^N)), (max(lon_t^N), max(lat_t^N)), (max(lon_t^N), min(lat_t^N)) to define the search area of the wrecked target, and the most probable search area is obtained as the region with the highest density of particles, as shown in Fig. 32.5.
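A sketch of the Monte Carlo particle simulation described above is given below: N particles start at the last known position, each receives Gaussian-distributed velocity components at every 2 s step, and after 30 min the bounding rectangle of the particle cloud gives the area to be searched and the densest histogram cell gives the most likely area. The mean and standard deviation of the velocity components are placeholder values standing in for Eqs. (32.6) and (32.7), and local metric coordinates are used instead of longitude and latitude.

```python
import numpy as np

rng = np.random.default_rng(42)

N = 1000                    # number of simulated particles
dt = 2.0                    # time step, s
steps = int(30 * 60 / dt)   # 30 minutes of drift

# Gaussian drift-velocity model in local (east, north) coordinates; means/stds are assumptions
mean_v = np.array([0.35, 0.10])    # m/s
std_v = np.array([0.08, 0.05])     # m/s

pos = np.zeros((N, 2))             # all particles start at the last known position (origin)
for _ in range(steps):
    v = mean_v + std_v * rng.standard_normal((N, 2))   # independent velocity draw per particle
    pos += v * dt

# Rectangular search area from the particle cloud (metres east/north of the origin)
x_min, y_min = pos.min(axis=0)
x_max, y_max = pos.max(axis=0)
print(f"search rectangle: x in [{x_min:.0f}, {x_max:.0f}] m, y in [{y_min:.0f}, {y_max:.0f}] m")

# Most likely area: cell of a coarse 2-D histogram with the highest particle density
hist, x_edges, y_edges = np.histogram2d(pos[:, 0], pos[:, 1], bins=20)
i, j = np.unravel_index(np.argmax(hist), hist.shape)
print(f"densest cell: x in [{x_edges[i]:.0f}, {x_edges[i+1]:.0f}] m, "
      f"y in [{y_edges[j]:.0f}, {y_edges[j+1]:.0f}] m")
```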

Fig. 32.2 Drift trajectory and possible locations at N = 1


Fig. 32.3 Drift trajectory and possible locations at N = 10

Fig. 32.4 Drift trajectory and final position at N = 1000

Fig. 32.5 The rectangle of the search area and the coordinates of the four vertices

32.5 Conclusions

In this paper, the distribution of the drift velocity of a person overboard is modelled with the Monte Carlo method, based on the drift dynamics model and on statistical parameter values of the marine environmental data of the Qiongzhou Strait. The drift trajectory and position distribution of the person overboard are simulated in a 0.5 h sea simulation test. Because of the complexity and variability of the marine environment, the wind pressure difference, currents and other influencing factors change dynamically over time; to improve the accuracy of the results, the statistical data on wind pressure difference and currents should be updated before the next 0.5 h simulation. Based on data from the marine environment of the test area, this study provides a method to calculate the drift trajectory and location distribution of a person overboard. The method allows the search area and the most likely position distribution to be derived, which provides a basis for decision making in the search at sea by intelligent agents such as Unmanned Surface Vehicles.

Acknowledgements Project name: A cognitive oriented multi-source data probability graph theory (61961160706). Category: international (regional) cooperation and exchange projects.


Chapter 33

Design of Infrared Remote Control Obstacle Avoidance Car

Hanhong Tan and Ziying Qi

Abstract The goal of this design is an infrared remote control obstacle avoidance car. With an STM32 microcontroller as the control center, infrared remote control is used to realize wireless control of the car, and an ultrasonic obstacle avoidance method is used to realize automatic obstacle avoidance. The focus of this design is the remote control and obstacle avoidance functions of the car. This paper designs a smart car with an STM32 microcontroller as the core controller that provides both functions. The hardware and software design of the entire system has been completed, and the design goals of high performance and low cost have been achieved. The design can also be used to develop smart toys and provides a useful reference for the development of multi-functional smart robots.

33.1 Introduction

The smart car is a product of the intersection of artificial intelligence, automatic control, computer science and other disciplines, and it plays an important role in driverless cars and intelligent sweeping robots [1]. This design implements a remote control obstacle avoidance system with an STM32 microcontroller as the control center. Infrared remote control realizes wireless control of the car, which can be commanded to move forward, move backward and turn in circles. At the same time, ultrasonic obstacle avoidance gives the car the ability to sense its environment, make its own judgment about the next action and collect environmental data in real time. Infrared remote control obstacle avoidance vehicles are widely used because they combine wireless control with automatic obstacle avoidance [2]. They can be applied in the food service industry and in space exploration, which not only greatly improves work efficiency but also reduces risk in production, so their study has great application value.

H. Tan (B) · Z. Qi Guangdong University of Science & Technology, Dongguan 523083, China


Fig. 33.1 Overall block diagram of the system (power supply, STM32 control module, ultrasonic obstacle avoidance module, infrared remote control module, motor drive module)

33.2 Systematic Design

This design follows a modular approach in which the hardware system and the software design correspond to each other; the main modules are the remote control submodule and the obstacle avoidance submodule. The car is controlled by the STM32 and commanded over infrared. When the infrared receiver module of the car receives the control signal transmitted by the remote control, the signal is processed and analysed and the DC motors are driven to perform forward and backward actions. The ultrasonic sensor detects whether there is an obstacle in front of the car; if so, the car stops, the ultrasonic module scans the area ahead, and a suitable route is chosen to drive around the obstacle; if there is no obstacle ahead, the car continues to move forward. The overall block diagram of the infrared remote control obstacle avoidance car is shown in Fig. 33.1, and a simplified sketch of this decision logic is given below.
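For illustration only, the sketch below captures the decision logic just described. It is not the authors' firmware (which runs in C on the STM32); the helper functions ir_receive_command, ultrasonic_distance_cm and drive are hypothetical placeholders for the real peripheral drivers.

```python
import time

SAFE_DISTANCE_CM = 25.0                     # assumed obstacle threshold

def ir_receive_command():
    """Placeholder: return the last decoded remote-control key, or None."""
    return None

def ultrasonic_distance_cm(direction="front"):
    """Placeholder: return the measured distance (cm) in the given direction."""
    return 100.0

def drive(action):
    """Placeholder: send a motion command to the motor drive module."""
    print("drive:", action)

def control_loop():
    while True:
        key = ir_receive_command()
        if key in ("forward", "backward", "left", "right", "stop"):
            drive(key)                      # remote-control commands have priority
        if ultrasonic_distance_cm("front") < SAFE_DISTANCE_CM:
            drive("stop")                   # obstacle ahead: stop, scan, pick the clearer side
            left, right = ultrasonic_distance_cm("left"), ultrasonic_distance_cm("right")
            drive("left" if left > right else "right")
        else:
            drive("forward")
        time.sleep(0.05)                    # assumed 50 ms loop period
```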

33.3 Hardware Circuit Design

According to the design requirements, this design needs to implement infrared remote control and automatic obstacle avoidance. The hardware circuit mainly includes five modules: the main control module, the ultrasonic module, the infrared remote control module, the motor drive module and the power supply module [3]. The controller is an STM32. The STM32F103C8T6 is a 32-bit microcontroller based on the ARM Cortex-M core, with low power consumption, high performance, rich peripherals and high code efficiency, which provides a solid foundation for writing the program. This design combines several circuit modules into a complete system in which a single STM32 chip handles the whole control task, so the STM32 is a good choice for realizing this design. The STM32F103C8T6 chip diagram is shown in Fig. 33.2.

The ultrasonic obstacle avoidance program is based on the principle of ultrasonic distance measurement. After the distance between an obstacle and the car is calculated, measurements are taken in several directions ahead and sampled multiple times to reduce error; the data are then processed, the current optimal path is determined by the software algorithm, and the car is driven in the target direction. The ultrasonic interface diagram is shown in Fig. 33.3.

Infrared remote control is one of the most widely used communication and remote control technologies. The infrared remote control module of this design uses the 1838 infrared receiver and a matching remote control [4]. The infrared remote control device has low power consumption, small size and low cost, making it an excellent choice for the infrared remote control function of this design. A universal infrared remote control system is composed of a transmitting part and a receiving part. The transmitting part mainly includes the keyboard, code modulation and the LED infrared transmitter; the receiving part mainly includes the photoelectric conversion amplifier, the demodulation circuit and the decoding circuit. The block diagram of the infrared remote control system is shown in Fig. 33.4.

Fig. 33.2 STM32F103C8T6 chip diagram


Fig. 33.3 Ultrasonic interface diagram

Fig. 33.4 Block diagram of infrared remote control system

The L293D is mainly used to drive solenoid valves, DC motors, bipolar stepper motors, relays and other high-voltage or high-current loads, and it is compatible with all TTL inputs. EN1 and EN2 are active-high enable signals, and the speed is controlled by the PWM signal output by the STM32; IN1, IN2, IN3 and IN4 are the motor direction control signals connected to the STM32; OT1, OT2, OT3 and OT4 are connected to the two terminals of the two motors respectively [5] and follow the changes of IN1-IN4, so that the motor direction can be controlled. The L293D integrated motor drive chip is easy to use, has stable performance and can meet the requirements of a high-current DC motor drive. The STM32 microcontroller controls the speed of the two motors by adjusting the duty cycle of the output square wave. The motor drive module interface is shown in Fig. 33.5.

33.4 Results Display and Analysis

The program is developed in the Keil 5 environment. Keil 5 has powerful online debugging capabilities and a simple, convenient programming style; it provides a clear and intuitive interface, is easy to use, and includes a compiler and debug tracing. Keil 5 is currently one of the most widely used development environments for STM32 microcontrollers [6]. Therefore, the Keil 5 platform is used to develop the program of the STM32-based infrared remote control obstacle avoidance car, which is relatively simple to implement [7]. After debugging and error checking in Keil 5, the hex file is generated and downloaded to the development board for testing.

Fig. 33.5 Motor drive module interface diagram

After downloading the generated hex file to the STM32 core board, the obstacle avoidance ability of the car can be tested. When the power switch is pressed, the car starts to run. It first detects, through the ultrasonic module, whether there is an obstacle in front of it. If there is, the car moves back and chooses a more suitable route, realizing automatic obstacle avoidance; if there is no obstacle ahead, the car drives forward. The automatic obstacle avoidance of the car is shown in Fig. 33.6.

Fig. 33.6 Automatic obstacle avoidance diagram of the car


Fig. 33.7 Infrared remote control car

After the automatic obstacle avoidance debugging is completed, the car can be debugged with the infrared remote control. When button 2 is pressed, the car moves forward; when button 8 is pressed, the car moves backward; when button 4 is pressed, the car turns left in a circle on the spot; when button 6 is pressed, the car turns in a circle on the spot; when button 5 is pressed, the car stops. The infrared remote control of the car is shown in Fig. 33.7.

33.5 Conclusion

This design studies an infrared remote control obstacle avoidance car system based on STM32 control. An STM32 microcontroller serves as the control center, an infrared remote control realizes the remote control function, and ultrasonic ranging realizes the obstacle avoidance function; the focus of the design is on these two functions. The system involves a wide range of content, requires consideration of many aspects, and draws on both software and hardware knowledge, making it a relatively complex system. By analysing the requirements of the system, studying the relevant knowledge and technologies, and taking into account the actual situation of the car, the workflow, functional modules and system hardware of the STM32-based infrared remote control obstacle avoidance car are designed. Through the combination of software and hardware design and the cooperation between the modules, the car finally realizes both the remote control function and the obstacle avoidance function.


Fund Project Characteristic Innovation Project of Ordinary Universities in Guangdong Province in 2019 (2019KTSCX252).

References

1. Wang, P. F., Zhang, Y. H., Wang, H., et al.: Design of automatic obstacle avoidance car control system based on STM32F103 microcontroller. Inf. Technol. 2, 77–80 (2019)
2. Cao, C. Z., Liang, S. Y., Wang, F. Q., et al.: Design of remote control smart car control system based on STM32. Intell. Comput. Appl. 30, 256–259+262 (2020)
3. Liu, X. W.: Smart car control based on STM32 single-chip microcomputer. Modern Manuf. Technol. Equipment 1, 192–193 (2019)
4. Li, H. Y.: Design of obstacle avoidance car based on STM32. Sci. Technol. Vis. 30, 158+191–193 (2018)
5. Fang, G. X.: Design and implementation of smart car based on STM32. Wuhan University of Light Industry, Hubei (2018)
6. Xu, W.: 51 single-chip integrated learning system: infrared remote control. Electron. Prod. 3, 22–24 (2008)
7. Zhang, L. X.: The design of smart car based on STM32F103. Nongjia Staff 9, 106 (2020)

Chapter 34

Parameter Adaptive Control of Virtual Synchronous Generator Based on Ant Colony Optimization Fuzzy

Yao Linping, Li Mengda, Liang Zhichao, and Zheng Xubin

Abstract The virtual inertia and damping coefficient introduced by the virtual synchronous generator effectively improve the frequency response characteristics of the system. After the microgrid is disturbed by a load, however, the output power of the system overshoots and oscillates. According to the relationship between the angular frequency deviation and its rate of change on one hand, and the virtual inertia and damping coefficient on the other, a fuzzy controller is designed. On that basis, the ant colony algorithm is used to optimize the membership functions and fuzzy rules of the fuzzy controller. Compared with the traditional VSG fuzzy control strategy, the ant colony optimized fuzzy strategy has smaller overshoot and oscillation and a shorter recovery time, which makes the system more robust. Through MATLAB/Simulink simulation, the traditional VSG fuzzy control and the ant colony optimized fuzzy parameter adaptive control of the virtual synchronous generator are compared, and the validity of the control method is verified.

34.1 Introduction

With the growth of the microgrid, most distributed power supplies are connected to the microgrid through power electronic devices such as inverters, but the problems of insufficient inertia and lack of damping of the inverter are increasingly apparent. The Virtual Synchronous Generator (VSG) alleviates the low-inertia and underdamping problems of the power grid and improves the stability of the system [1]. Literature [2] proposes an adaptive virtual inertia control strategy based on the power angle curve of the SG, which adaptively adjusts the inertia around a fixed value to suppress frequency and power oscillations, but ignores the effect of the damping coefficient on the transient response of the system. Literature [3] proposes an adaptive damping coefficient control strategy, which can also restrain low-frequency oscillation by adjusting the damping coefficient adaptively. Literature [4] proposes an adaptive virtual inertia-damping control strategy, which cooperatively adjusts the virtual inertia and damping, greatly reduces the power overshoot, and effectively restrains the oscillation. Literature [5] proposes an adaptive parameter control strategy based on fuzzy control: according to the relationship between the angular frequency deviation and its rate of change and the virtual inertia and damping coefficient, a fuzzy controller is designed to regulate the virtual inertia and damping coefficient adaptively, which further improves the system stability. However, the parameters of the fuzzy control system cannot all be optimized at the same time, which makes parameter selection difficult and the control effect imperfect.

In response to the above problems, this article proposes a parameter adaptive control of the virtual synchronous generator based on ant colony optimized fuzzy control, which uses ant colony optimization to refine the membership functions and fuzzy rules of the fuzzy controller, so that the system has strong robustness. The effectiveness of this method is verified by MATLAB/Simulink simulation.

Y. Linping (B) · L. Mengda · L. Zhichao · Z. Xubin School of Electrical Engineering, Shanghai DianJi University, Shanghai 201306, China e-mail: [email protected]

34.2 Basic Principle of Virtual Synchronous Generator

The topology of the adaptive control strategy for the virtual synchronous generator is shown in Fig. 34.1. It consists of a DC voltage source, an inverter, an LC filter and related components. The active power P and reactive power Q are obtained through power calculation, and the set values Pref and Qref are fed into the VSG control module for power regulation and grid control. In the active power loop, the virtual potential phase is regulated in real time through VSG control. In the reactive power loop, the inverter output voltage U and the output reactive power Q are compared with their references, the deviations are multiplied by the adjustment coefficient to obtain the virtual potential amplitude, and the VSG electromagnetic equation then feeds the voltage and current double-loop control that generates the PWM modulation signal, thereby regulating the output power characteristics.

Fig. 34.1 Adaptive control strategy topology of virtual synchronous generator

Fig. 34.2 VSG power control structure

According to the traditional second-order SG model, and assuming that the number of pole pairs is 1, the equivalent VSG rotor motion equation is given by Eq. (34.1):

$$\begin{cases} \dfrac{P_m - P_e - P_D}{\omega} = T_m - T_e - T_D = T_m - T_e - D\Delta\omega = J\dfrac{d\omega}{dt} \\[6pt] \omega = \dfrac{d\theta}{dt} \end{cases} \tag{34.1}$$

where $P_m$ is the mechanical power, $P_e$ is the electromagnetic power, $J$ is the virtual inertia, $D$ is the damping coefficient, $\Delta\omega$ is the angular frequency deviation, and $\theta$ is the electrical angle. Starting from Eq. (34.1), and borrowing the small-signal analysis method of the traditional power system SG model [6], assuming that the output resistance of the virtual synchronous generator is $R = 0$, the transfer function of the VSG active power is obtained as Eq. (34.2), which is a representative second-order transfer function. In the equation, the adjustment coefficient is $k_\omega$.

$$G(s) = \frac{P(s)}{P_{ref}(s)} = \frac{\dfrac{EU}{J\omega_0 Z}}{s^2 + \left(\dfrac{D}{J} + \dfrac{K_\omega}{J\omega_0}\right)s + \dfrac{EU}{J\omega_0 Z}} \tag{34.2}$$

According to Eq. (34.2), the natural oscillation angular frequency $\omega_n$ and the damping ratio $\xi$ are obtained from the corresponding second-order model:

$$\begin{cases} \omega_n = \sqrt{\dfrac{EU}{J\omega_0 Z}} \\[10pt] \xi = D\sqrt{\dfrac{\omega_0 Z}{4J\,EU}} + K_\omega\sqrt{\dfrac{Z}{4J\omega_0\,EU}} \end{cases} \tag{34.3}$$

Taking $0 < \xi < 1$ and a $\pm 5\%$ error band, the overshoot of the corresponding second-order system is $\sigma\% = e^{-\xi\pi/\sqrt{1-\xi^2}} \times 100\%$, and the adjustment time is $t_s = \dfrac{3.5}{\xi\omega_n} = \dfrac{3.5}{D/(2J) + K_\omega/(2J\omega_0)}$.

342

Y. Linping et al.

With the adjustment coefficient unchanged and $D$ constant, a larger $J$ gives a smaller $\xi$, a larger overshoot $\sigma\%$ and a longer adjustment time $t_s$. Similarly, when $J$ is fixed, a larger $D$ gives a larger $\xi$, a smaller overshoot $\sigma\%$ and a shorter adjustment time $t_s$. From this, the relationship between the active power response and the virtual synchronous generator parameters can be obtained (Fig. 34.2).
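As a numerical illustration of Eqs. (34.2)-(34.3) and of this trade-off, the sketch below evaluates ξ, the overshoot and the adjustment time for a few (J, D) pairs. The circuit values E, U, Z, ω0 and Kω are illustrative assumptions, not the parameters used in the paper.

```python
import numpy as np

# Illustrative evaluation of Eq. (34.3) and the overshoot/adjustment-time formulas.
E, U, Z = 311.0, 311.0, 1.0                  # assumed virtual EMF, grid voltage, line impedance
omega_0, K_w = 2 * np.pi * 50, 20.0          # rated angular frequency, assumed adjustment coefficient

def second_order_metrics(J, D):
    """Return (omega_n, xi, overshoot in %, adjustment time in s)."""
    omega_n = np.sqrt(E * U / (J * omega_0 * Z))
    xi = D * np.sqrt(omega_0 * Z / (4 * J * E * U)) + K_w * np.sqrt(Z / (4 * J * omega_0 * E * U))
    sigma = np.exp(-xi * np.pi / np.sqrt(1 - xi**2)) * 100 if xi < 1 else 0.0
    t_s = 3.5 / (xi * omega_n)               # +/- 5 % error band
    return omega_n, xi, sigma, t_s

for J, D in [(0.1, 10.0), (0.4, 10.0), (0.1, 30.0)]:
    wn, xi, ov, ts = second_order_metrics(J, D)
    print(f"J={J}, D={D}: xi={xi:.2f}, overshoot={ov:.1f}%, ts={ts:.2f} s")
```

Increasing J lowers ξ and raises the overshoot, while increasing D does the opposite, which is exactly the relationship the adaptive controller of the next section exploits.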

34.3 Design and Optimization of Fuzzy Controller

34.3.1 Fuzzy Variable Relation

By analysing the virtual synchronous generator model, Eq. (34.1) is transformed into:

$$\Delta\omega = \frac{T_m - T_e - J\dfrac{d\omega}{dt}}{D} \tag{34.4}$$

$$\frac{d\omega}{dt} = \frac{T_m - T_e - T_d}{J} \tag{34.5}$$

According to the above formulas, if $T_m - T_e - J\,d\omega/dt$ is constant, the angular frequency deviation $\Delta\omega$ can be controlled through the damping coefficient $D$; similarly, if $T_m - T_e - T_d$ is constant, the angular frequency change rate $d\omega/dt$ is controlled through the virtual inertia $J$. The power angle curve and angular frequency curve in Fig. 34.3 show the relationship between the two. The changes of power and angular frequency have different trends in each interval, so the damping coefficient and the virtual inertia should change accordingly. In the interval $t_1$-$t_2$, the angular velocity of the virtual rotor of the virtual synchronous generator is higher than that of the power network and increases gradually, that is $\Delta\omega > 0$ and $d\omega/dt > 0$, so the damping coefficient $D$ and the inertia $J$ need to be increased in this interval to prevent $d\omega/dt$ and $\Delta\omega$ from becoming too large and to restrain the rotor angular velocity. In the interval $t_2$-$t_3$, the change rate of the virtual rotor angular speed satisfies $d\omega/dt < 0$ while $\Delta\omega > 0$; therefore a smaller virtual inertia $J$ is used in this interval to accelerate the recovery of the angular speed to the rated value, and a larger damping coefficient is used to further restrain the deviation of the angular velocity. Similarly, in the other two intervals the damping and the virtual inertia must be adjusted appropriately.

Fig. 34.3 Angular frequency correlation diagram under disturbance

Fig. 34.4 Fuzzy control block diagram

34.3.2 Fuzzy Controller Design

The fuzzy controller includes three processes: fuzzification, fuzzy reasoning and defuzzification.

a) The angular frequency change rate $d\omega/dt$ and the angular frequency deviation $\Delta\omega$ are taken as the inputs of the fuzzy controller.
b) The input variables are fuzzified to obtain fuzzy values. The quantization factors $k_e = 0.1$ and $k_{ec} = 0.08$ normalize the input values.
c) According to the corresponding fuzzy rules, the virtual inertia and damping coefficient increments are obtained through defuzzification.
d) The fuzzy controller outputs are added to the initial parameters $J_S$, $D_S$ to obtain the real-time parameters (Fig. 34.4):

$$\begin{cases} J(t) = J_S + \Delta J \\ D(t) = D_S + \Delta D \end{cases} \tag{34.6}$$

The basic domains are set according to the system parameters. In this paper, the basic domain of the frequency change rate $d\omega/dt$ in the fuzzy controller is [−6, 6], the basic domain of the angular frequency deviation $\Delta\omega$ is [−0.25, 0.25], the domain of the virtual inertia increment $\Delta J$ is [−0.2, 0.2], and the domain of the damping coefficient increment $\Delta D$ is [−5, 5]. The input and output variables are divided into the fuzzy sets {NB, NS, ZE, PS, PB}, which represent {negative big, negative small, zero, positive small, positive big}. The fuzzy controller is of the Mamdani type with output scale factors; it adopts the "minimum-maximum" composition principle and uses the center of gravity method for defuzzification. The membership functions are triangular (Tables 34.1 and 34.2).
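The following sketch shows how such a Mamdani controller can be evaluated numerically: evenly spaced triangular membership functions over the domains given above, min inference, and centroid defuzzification. The rule table used here is a small illustrative placeholder, not the full 25-rule tables of Tables 34.1 and 34.2.

```python
import numpy as np

# Minimal sketch of the adaptive-parameter fuzzy controller (triangular memberships,
# Mamdani min inference, centroid defuzzification). The rule table is a toy placeholder.
LABELS = ["NB", "NS", "ZE", "PS", "PB"]

def tri(x, a, b, c):
    """Triangular membership with feet a, c and peak b."""
    return max(min((x - a) / (b - a + 1e-12), (c - x) / (c - b + 1e-12)), 0.0)

def fuzzify(x, lo, hi):
    """Membership degrees of x in five evenly spaced triangles over [lo, hi]."""
    centers = np.linspace(lo, hi, 5)
    w = (hi - lo) / 4
    return {lbl: tri(x, c - w, c, c + w) for lbl, c in zip(LABELS, centers)}

def infer(mu_e, mu_ec, rules, out_lo, out_hi):
    """Mamdani min-max inference followed by centroid defuzzification."""
    centers = dict(zip(LABELS, np.linspace(out_lo, out_hi, 5)))
    num = den = 0.0
    for (le, lec), lout in rules.items():
        w = min(mu_e[le], mu_ec[lec])          # "minimum" composition
        num += w * centers[lout]
        den += w
    return num / den if den > 0 else 0.0

# toy rule fragment for delta-J: (d_omega label, d_omega/dt label) -> output label
rules_J = {("PS", "PS"): "PB", ("PS", "NS"): "ZE",
           ("NS", "PS"): "ZE", ("NS", "NS"): "PB", ("ZE", "ZE"): "ZE"}

d_omega, d_omega_dt = 0.1, 2.0                 # example operating point
mu_e = fuzzify(d_omega, -0.25, 0.25)
mu_ec = fuzzify(d_omega_dt, -6.0, 6.0)
delta_J = infer(mu_e, mu_ec, rules_J, -0.2, 0.2)
J = 0.1 + delta_J                              # Eq. (34.6) with an assumed J_S = 0.1
print(f"delta_J = {delta_J:.4f}, J = {J:.4f}")
```

The same machinery applied with the ΔD domain and rule table gives the real-time damping coefficient D(t).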

34.3.3 Ant Colony Optimization Fuzzy Controller

This paper divides the fuzzy control rules and the membership functions into two independent populations that share the same objective function $S$, which reflects the output characteristics of the system. The selection of the objective function is the key to system parameter optimization. In this paper, the objective function is the time integral of the absolute value of the deviation $e$, and the fitness function is the reciprocal $1/S$ of the objective function:

$$S = \int_0^\infty t\,|e(t)|\,dt \tag{34.7}$$

(a) Initialization:

(1) Initialization of the membership functions: triangular membership functions are used. Using only the three boundary points of each triangle as the initial encoding is not complete, so this article encodes the triangle vertex $b_i$, the right span coefficient $Q_i$ and the overlap factor $\alpha_i$. Take two adjacent membership functions PS and PM as an example, as shown in Fig. 34.5. In the $i$-th triangle, $d_{i,l}$ is the left width and $d_{i,r}$ is the right width; coding the above quantities yields the three endpoints of the triangle. The right span coefficient $Q_i \in (0, 1)$ is defined as:

Fig. 34.5 Adjacent membership


Fig. 34.6 Ant Colony Optimization Fuzzy Control Block Diagram

$$Q_i = \begin{cases} \operatorname{sgn}\!\left(\dfrac{d_{i,r}}{b_i}\right), & d_{i,r} < \operatorname{sgn}(b_i) \\[8pt] \operatorname{sgn}\!\left(\dfrac{d_{i,r}}{b_i}\right), & d_{i,r} > \operatorname{sgn}(b_i) \end{cases} \tag{34.8}$$

The left width can be derived from the similar-triangle relationship:

$$d_{i,l} = \frac{b_{i+1} - b_i - Q_i \times \operatorname{sgn}(b_i)}{1 - \alpha} \tag{34.9}$$

According to the relational expressions (34.8) and (34.9), the value range of the left endpoint, right endpoint and vertex overlap factor $\alpha_i$ of the triangle can be calculated as [0.3, 0.8].

(2) Fuzzy rule initialization: the fuzzy controller analysed here has two inputs and a single output, each with five linguistic values, giving a total of 25 fuzzy rules; $m$ ants are placed at the 25 rules as the initial positions of the search.

(3) Pheromone initialization: the initial pheromone amount is $\tau_{ij}(0) = C$, where $C$ is a small constant.

(b) Path construction: this paper improves the roulette selection of the classic ant colony algorithm and selects paths between the two populations. After the search starts, each ant independently selects its next move according to the probability $p_{ij}^k$:

$$p_{ij}^k(t) = \begin{cases} \dfrac{\tau_{ij}^\alpha(t)\,\eta_{ij}^\beta(t)}{\sum\limits_{n \in allowed_k} \tau_{in}^\alpha(t)\,\eta_{in}^\beta(t)}, & j \in allowed_k \\[12pt] 0, & j \notin allowed_k \end{cases} \tag{34.10}$$

In the formula, $p_{ij}^k$ is the ant transition probability, $\tau_{ij}$ is the pheromone concentration, $\eta_{ij}$ is the visibility, $allowed_k$ is the set of paths the ant has not yet visited, $\alpha$ is the information heuristic factor and $\beta$ is the expected heuristic factor; the values of the two affect the path selection probability.

(c) Pheromone update: after each ant has constructed its path, the pheromone concentration is updated at the end of every iteration:

$$\tau_{ij}(t + 1) = (1 - \rho)\tau_{ij}(t) + \Delta\tau_{ij} \tag{34.11}$$

where $\Delta\tau_{ij} = \sum_{k=1}^{m} \Delta\tau_{ij}^k$ is the sum of the pheromone deposited by the individual ants and $\rho$ is the volatilization factor (Fig. 34.6).
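A compact sketch of the rule-table search described in steps (a)-(c) is given below. The objective function is a stand-in for the ITAE index of Eq. (34.7), which in the paper comes from a Simulink simulation of the VSG, and the visibility term η is left uniform; both are assumptions made only so that the example runs on its own.

```python
import numpy as np

# Minimal sketch of the ant-colony search over the fuzzy rule table (Eqs. 34.10-34.11).
rng = np.random.default_rng(0)
N_RULES, N_LABELS = 25, 5            # 5x5 rule table, labels NB..PB coded 0..4
M_ANTS, ITERS = 20, 50
ALPHA, BETA, RHO, Q = 1.0, 1.0, 0.1, 1.0

tau = np.full((N_RULES, N_LABELS), 0.1)   # pheromone, initialised to a small constant C
eta = np.ones((N_RULES, N_LABELS))        # visibility (uniform placeholder)

def objective(rule_vector):
    """Placeholder for S = integral of t*|e(t)|dt; smaller is better."""
    target = np.tile(np.arange(N_LABELS), 5)   # arbitrary 'ideal' table, demo only
    return 1.0 + np.abs(rule_vector - target).sum()

best, best_S = None, np.inf
for _ in range(ITERS):
    solutions = []
    for _ant in range(M_ANTS):
        # Eq. (34.10): pick a label for every rule with probability ~ tau^alpha * eta^beta
        w = tau**ALPHA * eta**BETA
        p = w / w.sum(axis=1, keepdims=True)
        choice = np.array([rng.choice(N_LABELS, p=p[i]) for i in range(N_RULES)])
        S = objective(choice)
        solutions.append((choice, S))
        if S < best_S:
            best, best_S = choice, S
    # Eq. (34.11): evaporation plus deposits proportional to the fitness 1/S
    tau *= (1.0 - RHO)
    for choice, S in solutions:
        tau[np.arange(N_RULES), choice] += Q / S

print("best objective:", best_S)
```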

34.4 Simulation Result Analysis

To verify the correctness of the VSG strategy proposed in this paper, a virtual synchronous generator simulation model is built on the MATLAB/Simulink platform. Table 34.3 lists the main simulation parameters. The simulation time is 4 s, the rated output active power is set to 2 kW, and a 2 kW load is applied at 2 s. The membership functions and fuzzy rules are optimized by ant colony optimization: Fig. 34.7 shows the unoptimized membership functions of E and Ec, Fig. 34.8 shows those after ant colony optimization, and Fig. 34.9 compares the unoptimized and optimized membership functions of the virtual inertia increment ΔJ.

Table 34.1 ΔJ fuzzy rules

ΔJ | NB | NS | ZE | PS | PB
NB | NB | NS | PB | PS | PB
NS | NB | NS | ZE | PS | PB
ZE | NB | ZE | ZE | ZE | NB
PS | PB | PS | ZE | NS | NB
PB | PB | PS | PB | NS | NB

Table 34.2 ΔD fuzzy rules

ΔD | NB | NS | ZE | PS | PB
NB | NB | NS | PB | PS | PS
NS | NS | NS | NB | PS | PS
ZE | ZE | ZE | ZE | ZE | ZE
PS | PS | PS | NB | NS | NS
PB | PS | PS | PB | NS | NB

Table 34.3 Main simulation parameters

Parameter | Value | Parameter | Value
DC voltage | 750 V | Single-phase line voltage | 380 V
Rated frequency | 50 Hz | Filter capacitor | 50e−6 F
Resistance | 0.05 | Virtual moment of inertia | 0.1
Filter inductance | 1.5 | Damping coefficient | 10

Fig. 34.7 Unoptimized E, Ec

Fig. 34.8 Optimized E, Ec

Fig. 34.9 Unoptimized and optimized ΔJ

After the system enters a stable operating state, the initial active power of the virtual synchronous generator is 2 kW, and a 2 kW load is added to the system at 2 s. From Fig. 34.10 it can be seen that the traditional fixed-parameter control has an overshoot of 7.5% and a recovery time of 1.9 s; the adaptive parameters based on fuzzy control give an overshoot of 5% and a recovery time of 0.4 s; the ant colony optimized fuzzy adaptive control adopted in this paper gives an overshoot of 1.25% and a recovery time of 0.15 s. The ant colony optimized fuzzy adaptive method therefore performs better and makes the system more stable.

Fig. 34.10 VSG output active power

Figure 34.11 compares the virtual synchronous generator frequency under the different control strategies. The fuzzy control after ant colony optimization has a more obvious influence on the frequency change rate: the maximum frequency deviation is 0.035 Hz with fixed parameters, 0.033 Hz with fuzzy self-adaptation and 0.03 Hz with ant colony optimized fuzzy adaptation, and the oscillation time is shorter.

Fig. 34.11 VSG frequency waveform


The simulation results show that the parameter adaptive control strategy of the virtual synchronous generator based on ant colony optimized fuzzy control is better than the two strategies it is compared with in this paper.

34.5 Conclusion

Building on the traditional fuzzy-control adaptive parameters of the virtual synchronous generator, the ant colony algorithm optimizes the membership functions and fuzzy rules of the fuzzy controller, making its parameters more precise. This article puts forward a parameter adaptive control method for the virtual synchronous generator based on ant colony optimized fuzzy control, and the feasibility of the proposed strategy is proved by simulation: after the system is disturbed by a load, the stability of the system is improved, the power and frequency oscillations are reduced, and the adjustment recovery time is shortened. Under a 2 kW load step, compared with the other adaptive parameter adjustments, the system overshoot is 1.25% and the recovery time is 0.15 s, ensuring that the system is more stable.

References

1. Wang, X. S., Jiang, H., Liu, H., Song, Peng, F.: Summary of the research on the grid-connected stability of virtual synchronous generators. North China Electric Power Technology (09), 14–21 (2017)
2. Li, Y. L., Zhao, Q. E.: Island VSG frequency dynamic response control based on adaptive virtual inertia. Electrical Automation 43(02), 74–76+80 (2021)
3. Ma, C. S., Zhao, Y., Liu, Q. F., Yang, F.: Adaptive damping control algorithm strategy for suppressing low-frequency oscillation of virtual synchronous generator. Electrical Application 39(03), 78–87 (2020)
4. Yan, J. B., Yang, C., Chang, L. L., Hou, C.: Virtual synchronous generator inertia damping cooperative adaptive control strategy. Journal of Harbin University of Science and Technology 24(06), 58–63 (2019)
5. Wang, S. Y., Jiang, L.: Adaptive control strategy of inertia damping of virtual synchronous generator based on fuzzy. Electric Tools (01), 1–4+15 (2021)
6. Lu, Z. P., Sheng, W. X., Zhong, Q. C., Liu, H. T., Zeng, Z., Yang, L., Liu, L.: Virtual synchronous generator and its application in microgrid. Proceedings of the Chinese Society of Electrical Engineering 34(16), 2591–2603 (2014)

Chapter 35

Attitude Calculation of Quadrotor UAV Based on Gradient Descent Fusion Algorithm

Li Dengpan, Ren Xiaoming, Gu Shuang, Chen Dongdong, and Wang Jinqiu

Abstract The stability of a quadrotor UAV in flight depends mainly on attitude calculation and the control system. In flight, relying solely on a single sensor for attitude calculation easily leads to drift. In this paper, the traditional Mahony complementary filtering algorithm based on the quaternion method is improved by introducing the gradient descent method for data fusion, and a platform is built for verification. The experimental results show that the GD fusion algorithm not only obtains richer UAV attitude information, but also effectively suppresses the drift error of the gyroscope, filters the noise of the accelerometer, improves the calculation accuracy, and responds quickly to sudden changes in the UAV flight attitude.

35.1 Introduction

The real-time acquisition and accurate calculation of the attitude information of a quadrotor UAV are critical to its stability [1]. However, the gyroscope suffers from temperature drift, so integrating the angular velocity produces a cumulative error; the accelerometer is easily affected by the vibration of the body [2]; and the magnetometer is vulnerable to interference from the surrounding magnetic field. The flight data collected directly by the sensors therefore cannot guarantee accurate control, and a data fusion algorithm must combine the multi-sensor data to obtain accurate attitude angles [3, 4]. Commonly used data fusion algorithms include the extended Kalman filter, the gradient descent method and complementary filtering [5–7]. Sabatini studied the quaternion-based extended Kalman filter and introduced an improved adaptive construction of the measurement noise covariance matrix for carrier linear acceleration and magnetic field interference, but this method has linearization error [8]. Wang Xin et al. used CKF to calculate the attitude and position of the aircraft; compared with UKF its estimated mean square error was smaller and more accurate [9], but the amount of calculation was large, so it was not suitable for a UAV with an STM32 as the main control chip [10]. Wang Li et al. used a complementary filtering algorithm to eliminate noise in the frequency domain, combining a high-pass filter to remove the low-frequency noise of the gyroscope with a low-pass filter to remove the high-frequency noise of the accelerometer [11]; however, choosing the crossover frequency of the high-pass and low-pass filters remains difficult. Therefore, based on the traditional direct data fusion method and the complementary filtering algorithm, this paper proposes an improved complementary filtering algorithm for attitude calculation and verifies the superiority of the algorithm with actual measurements.

L. Dengpan (B) · R. Xiaoming · G. Shuang · C. Dongdong · W. Jinqiu School of Electrical Engineering, Shanghai DianJi University, Shanghai 201306, China

35.2 Attitude Calculation and Algorithm Implementation

35.2.1 Profile Modeling

Attitude calculation reads the measured data of the UAV sensors and uses different methods to compute the attitude angles of the UAV, from which its real-time state is judged; the attitude angles are therefore the key to accurate and stable flight. To describe the attitude angles, the attitude coordinate systems, namely the quadrotor body coordinate system and the geographic coordinate system, are first defined with the right-hand rule. In this paper, based on the geographic north, east and ground directions, the relationship between the body coordinate system α of the quadrotor and the geographic coordinate system ζ is established as shown in Fig. 35.1. The attitude angles of the UAV are defined by the relative change between the body coordinate system and the geographic coordinate system as the UAV moves. Assuming that the body coordinate system initially coincides with the geographic coordinate system, rotating the body in the order X, Y, Z yields three angles: the roll angle θ, the pitch angle γ and the yaw angle ψ.

Fig. 35.1 Relationship between body coordinates and geographic coordinates

35 Attitude Calculation of Quadrotor UAV …

353

35.2.2 Basic Concepts of Quaternion Method

In order to reduce the amount of calculation, the quaternion method is introduced into the model; that is, a four-dimensional vector is used to represent a rotation in three-dimensional space. The most commonly used representation is shown in Eq. (35.1) [12]:

$$q = q_0 + q_1 i + q_2 j + q_3 k = \cos\frac{\theta}{2} + \vec{\lambda}\sin\frac{\theta}{2} \tag{35.1}$$

In the formula, $q_0$ is the scalar part of the quaternion, and its value equals the cosine of half the rotation angle of the coordinate system. Introducing the Rodrigues rotation formula then gives the quaternion representation of the UAV coordinate transformation matrix (35.2):

$$C_n^b = \begin{bmatrix} q_0^2 + q_1^2 - q_2^2 - q_3^2 & 2(q_1q_2 - q_0q_3) & 2(q_1q_3 + q_0q_2) \\ 2(q_1q_2 + q_0q_3) & q_0^2 - q_1^2 + q_2^2 - q_3^2 & 2(q_2q_3 - q_0q_1) \\ 2(q_1q_3 - q_0q_2) & 2(q_2q_3 + q_0q_1) & q_0^2 - q_1^2 - q_2^2 + q_3^2 \end{bmatrix} \tag{35.2}$$

From the quaternion expression (35.2), the Euler angle expressions (35.3) of the three rotation angles can be derived:

$$\begin{cases} \theta = \arcsin\left[2(-q_0q_2 + q_1q_3)\right] \\[4pt] \gamma = -\arctan\dfrac{2(q_2q_3 + q_0q_1)}{q_0^2 - q_1^2 - q_2^2 + q_3^2} \\[8pt] \psi = -\arctan\dfrac{2(q_1q_2 + q_0q_3)}{q_0^2 - q_1^2 + q_2^2 - q_3^2} \end{cases} \tag{35.3}$$

Let the quaternion be $q = \cos\frac{\theta}{2} + \vec{\lambda}\sin\frac{\theta}{2}$ and differentiate it with respect to time; the differential Eq. (35.4) is obtained [13]:

$$\frac{dq}{dt} = \frac{1}{2}\begin{bmatrix} 0 & -\omega_x & -\omega_y & -\omega_z \\ \omega_x & 0 & \omega_z & -\omega_y \\ \omega_y & -\omega_z & 0 & \omega_x \\ \omega_z & \omega_y & -\omega_x & 0 \end{bmatrix}\begin{bmatrix} q_0 \\ q_1 \\ q_2 \\ q_3 \end{bmatrix} \tag{35.4}$$

The first-order Runge–Kutta method is applied to Eq. (35.4) to obtain Eq. (35.5):

$$q(t + \Delta t) = q(t) + \Delta t\,\frac{dq}{dt} \tag{35.5}$$

After substituting and rearranging, Eq. (35.6) is obtained:

$$q(t + \Delta t) = q(t) + \frac{\Delta t}{2}\begin{bmatrix} 0 & -\omega_x & -\omega_y & -\omega_z \\ \omega_x & 0 & \omega_z & -\omega_y \\ \omega_y & -\omega_z & 0 & \omega_x \\ \omega_z & \omega_y & -\omega_x & 0 \end{bmatrix}\begin{bmatrix} q_0 \\ q_1 \\ q_2 \\ q_3 \end{bmatrix} \tag{35.6}$$

With the current angular velocity and the quaternion of the previous moment, the quaternion of the current moment can be obtained, which accurately describes the motion state of the UAV body while greatly reducing the amount of calculation. The quaternion method is therefore the basic method of UAV attitude calculation.
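As a concrete illustration of Eqs. (35.3)-(35.6), the sketch below integrates a constant (made-up) gyro reading with the first-order update and then extracts the Euler angles. It is only a numerical companion to the formulas, not the flight code of the platform described later.

```python
import numpy as np

def omega_matrix(wx, wy, wz):
    """Angular-rate matrix used in Eq. (35.4)."""
    return np.array([[0.0, -wx, -wy, -wz],
                     [wx,  0.0,  wz, -wy],
                     [wy, -wz,  0.0,  wx],
                     [wz,  wy, -wx,  0.0]])

def update_quaternion(q, gyro, dt):
    """One first-order integration step as in Eq. (35.6); q = [q0, q1, q2, q3]."""
    q = q + 0.5 * dt * omega_matrix(*gyro) @ q
    return q / np.linalg.norm(q)              # re-normalise to a unit quaternion

def euler_from_quaternion(q):
    """Roll, pitch, yaw (rad) following the conventions of Eq. (35.3)."""
    q0, q1, q2, q3 = q
    roll = np.arcsin(2.0 * (-q0 * q2 + q1 * q3))
    pitch = -np.arctan2(2.0 * (q2 * q3 + q0 * q1), q0**2 - q1**2 - q2**2 + q3**2)
    yaw = -np.arctan2(2.0 * (q1 * q2 + q0 * q3), q0**2 - q1**2 + q2**2 - q3**2)
    return roll, pitch, yaw

q = np.array([1.0, 0.0, 0.0, 0.0])            # start aligned with the reference frame
gyro = np.array([0.02, -0.01, 0.05])          # rad/s, illustrative gyro reading
for _ in range(100):                          # 100 steps of 10 ms
    q = update_quaternion(q, gyro, dt=0.01)
print("Euler angles (rad):", euler_from_quaternion(q))
```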

35.2.3 Attitude Solution of Mahony Complementary Filtering Algorithm

The gyroscope measures the angular rate of the carrier coordinate system, while the accelerometer measures its linear acceleration; each has its own strengths in measurement accuracy, so the different sensors complement one another. Based on the quaternion method, the main idea of the Mahony complementary filtering algorithm is to fuse the data acquired by the accelerometer, gyroscope and magnetometer of the UAV, eliminate the noise and static offset of the sensor data, and combine the advantages of the accelerometer and gyroscope at their different frequencies. In other words, the instantaneous value measured by the accelerometer over a short time is used to compensate the drift error accumulated by the gyroscope over a long period [14]. Its schematic diagram is shown in Fig. 35.2.

Fig. 35.2 Block diagram of Mahony algorithm

35 Attitude Calculation of Quadrotor UAV …

355

35.2.4 Attitude Solution Based on Improved Gradient Descent Fusion Algorithm

The gradient descent method is widely used in machine learning; its main principle is to compute the gradient of the objective in each iteration and move the parameters along it. Assume that the angular velocity of the UAV measured in the ground coordinate system is $\omega^g = \left[\omega_x^g\ \omega_y^g\ \omega_z^g\right]^T$ and that the accelerometer measures $a^g = \left[a_x^g\ a_y^g\ a_z^g\right]^T$. With the UAV held stationary, the gravity vector $g^n = [0\ 0\ g]^T$ is rotated back into the $g$ frame to give $g^g$. Normalizing $a^g$ and $g^g$ yields the quaternion error function:

$$E = \begin{bmatrix} 2(q_1q_3 - q_0q_2) - a_x^g \\ 2(q_0q_1 + q_2q_3) - a_y^g \\ 1 - 2(q_1^2 + q_2^2) - a_z^g \end{bmatrix} \tag{35.7}$$

When solving for the UAV attitude, the attitude quaternion must be iterated in the direction in which the function value decreases; the minimum of the quaternion error function is then the optimal attitude solution. The gradient descent method is introduced to find this minimum: the negative gradient direction of the function is computed iteratively until the optimal solution is obtained. The gradient of the error function is:

$$\nabla E = \begin{bmatrix} 4q_0q_2^2 + 2q_2a_x + 4q_0q_1^2 - 2q_1a_y \\ 4q_1q_3^2 - 2q_3a_x + 4q_1q_0^2 - 2q_0a_y - 4q_1 + 8q_1^3 + 8q_1q_2^2 + 4q_1a_z \\ 4q_2q_0^2 + 2q_0a_x + 4q_2q_3^2 - 2q_3a_y - 4q_2 + 8q_2^3 + 8q_2q_1^2 + 4q_2a_z \\ 4q_3q_1^2 - 2q_1a_x + 4q_3q_2^2 - 2q_2a_y \end{bmatrix} \tag{35.8}$$

Each iteration moves along the negative gradient direction of the error function to minimize it, which can be regarded as yielding the optimal solution of the UAV attitude angle. The iterative formula of the attitude quaternion is:

$$q(i + 1) = q(i) - \delta\frac{\nabla E}{\|\nabla E\|} \tag{35.9}$$

where δ is the iteration step size. When solving the UAV attitude, the iteration step δ is proportional to the angular velocity and the sampling period of the UAV, and gradient descent with a fixed step size tends to converge slowly and with low precision. A dynamic step size is therefore adopted:

$$\lambda = \alpha T\|\omega\| + \left\|e_g(q, a^b)\right\|^2 \tag{35.10}$$

where $\|\omega\| = \sqrt{\omega_x^2 + \omega_y^2 + \omega_z^2}$, α is a coefficient, and $\|e_g(q, a^b)\|^2$ is the squared two-norm of the error function. In the actual attitude calculation of the UAV, because of the large amount of data processed, iterating the quaternion only once cannot guarantee the optimal value. To solve this problem, a momentum term is introduced into the gradient descent: the gradient of the previous iteration is multiplied by an attenuation factor and the momentum generated in flight is reused, which accelerates convergence and improves its accuracy. The iterative formula is as follows [14]:

$$\begin{cases} v_i = \gamma_1 v_{i-1} + \lambda\nabla e_g(q, a^b) \\ q(i) = q(i - 1) + v_i \end{cases} \tag{35.11}$$

where $\gamma_1 \in [0, 1]$ is the attenuation coefficient. The principle of the gradient descent improved Mahony method is summarized in Fig. 35.3. The iterative formula is carried into the next iteration in attenuated form, so that the attitude angle solution is more stable and no data are lost when the UAV attitude changes suddenly. From the preset iteration formula, the result $q_{\nabla,t}$ of each iteration is obtained, while the quaternion obtained by the Mahony algorithm is $q_{\Omega,t}$. Fusing the two gives:

$$q_{\alpha,t} = \alpha q_{\nabla,t} + (1 - \alpha)q_{\Omega,t} \tag{35.12}$$

where α is the weight value, $0 \le \alpha \le 1$. Introducing the dynamic step size of Eq. (35.10) for convergence gives the convergence rate β of the attitude quaternion equation, with $(1 - \alpha)\beta = \alpha\lambda_t/\Delta t$, where λ is the dynamic step size. The final quaternion solution is then obtained by fusing the results with the λ obtained above, $q_{f,t} = q_{\alpha,t-1} + \left(\dot{q}_{\Omega,t} - \beta\dfrac{\nabla E}{\|\nabla E\|}\right)\Delta t$ [15]. The block diagram of the gradient descent algorithm is shown in Fig. 35.4.

Fig. 35.3 Gradient descent improved Mahony method principle

Fig. 35.4 Block diagram of gradient descent algorithm
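The sketch below shows one possible numerical form of such a fused update, in the spirit of Eqs. (35.7)-(35.9) and the fusion formula above: the gyro propagation is corrected along the negative normalised gradient of the accelerometer error function. The gain beta and the sensor samples are illustrative assumptions, and the momentum and dynamic-step refinements of Eqs. (35.10)-(35.11) are omitted for brevity.

```python
import numpy as np

def error_and_gradient(q, acc):
    """Error function of Eq. (35.7) and its gradient of Eq. (35.8) as J^T f."""
    q0, q1, q2, q3 = q
    ax, ay, az = acc / np.linalg.norm(acc)             # normalised accelerometer
    f = np.array([2*(q1*q3 - q0*q2) - ax,
                  2*(q0*q1 + q2*q3) - ay,
                  1 - 2*(q1**2 + q2**2) - az])
    J = np.array([[-2*q2, 2*q3, -2*q0, 2*q1],
                  [ 2*q1, 2*q0,  2*q3, 2*q2],
                  [ 0.0, -4*q1, -4*q2, 0.0]])
    return f, J.T @ f

def fused_step(q, gyro, acc, dt, beta=0.1):
    """Gyro propagation corrected along the negative normalised gradient."""
    wx, wy, wz = gyro
    omega = np.array([[0.0, -wx, -wy, -wz],
                      [wx,  0.0,  wz, -wy],
                      [wy, -wz,  0.0,  wx],
                      [wz,  wy, -wx,  0.0]])
    q_dot_gyro = 0.5 * omega @ q                        # rate of change from the gyro
    _, grad = error_and_gradient(q, acc)
    grad = grad / (np.linalg.norm(grad) + 1e-12)        # unit gradient direction
    q = q + (q_dot_gyro - beta * grad) * dt             # fused update
    return q / np.linalg.norm(q)

q = np.array([1.0, 0.0, 0.0, 0.0])
for _ in range(200):                                    # 2 s at 100 Hz, static case
    q = fused_step(q, gyro=np.array([0.0, 0.0, 0.0]),
                   acc=np.array([0.3, -0.2, 9.7]), dt=0.01)
print("converged quaternion:", np.round(q, 4))
```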

35.3 Experimental Test and Result Analysis

To verify the feasibility of the algorithm proposed in this paper, a UAV was built as the experimental platform, as shown in Fig. 35.5. The platform includes the APM2.8 flight controller, an inertial measurement unit (IMU), a magnetometer and a pressure sensor; the flight control system integrates a three-axis gyroscope and a three-axis accelerometer. To compare the different methods, a dynamic flight experiment was carried out, the pitch angle was selected as the evaluation index, and Mission Planner was used to collect the raw attitude data of the UAV sensors (the acquisition frequency was set to 66 Hz, through the "Quanseng V5 Data Transmission Radio" wireless communication module). MATLAB was used to analyse the pitch angle data.

Fig. 35.5 UAV test platform

To compare the accuracy of the attitude solution of the gradient descent fusion algorithm proposed in this paper, the Mahony algorithm was run alongside it for verification. The pitch angle, the most important quantity here, was selected as the flight criterion, and a 10 s dynamic flight experiment was conducted outdoors. Figure 35.6 shows the pitch angle of the UAV in random flight: the pitch angle calculated by the quaternion method fully reflects the pitching motion, but because motor vibration affects the data collection, the data contain too much noise, which also causes attitude angle drift in actual flight. Figure 35.7 compares the traditional Mahony algorithm with the gradient descent fusion algorithm. The variance of the calculated data measures the stability of the UAV flight, and the mean value represents how well each algorithm preserves the original data; the variances and means of the different attitude solution algorithms under random dynamic conditions are summarized in Table 35.1. It can be seen from Fig. 35.7 and Table 35.1 that the noise is significantly reduced and the curves are smoother than the attitude angle output by the original sensor; the calculated attitude angle has a good filtering effect and a small dynamic error. In Fig. 35.7 the green curve is the attitude solved by the traditional Mahony algorithm and the red curve is that of the gradient descent fusion algorithm. When the two methods are used to calculate the UAV attitude angle, the results are essentially the same, which is consistent with the data in Table 35.1: the means of the two methods differ from the original data by only 2.68% and 2.02%, and the retention of the original data by the gradient descent method is slightly higher than that of the complementary filtering method.

Fig. 35.6 Data of UAV pitch Angle measured by gyroscope


Fig. 35.7 Comparison of effects between traditional Mahony algorithm and gradient descent fusion algorithm

Table 35.1 Summary of attitude angle calculation data under random dynamic conditions

Attitude solution algorithm | Attitude angle | Variance | Mean
Traditional quaternion method | θ | 17.967 | 7.909
Complementary filtering method | θ | 12.552 | 7.697
Gradient descent method | θ | 12.513 | 7.749

The variances of the two algorithms are greatly improved, indicating that the drift error of the UAV is reduced: they are 30.13% and 30.35% lower than that of the traditional quaternion method, respectively. Both algorithms improve the dynamic flight error of the UAV, but the traditional Mahony algorithm shows an obvious solution lag. As shown in Fig. 35.7, the Mahony algorithm lags the gradient descent method by about 0.2 s. In summary, in the actual flight debugging of the UAV, the solution of the gradient descent fusion method is better than that of the Mahony algorithm.

35.4 Conclusion

This paper proposes a gradient descent fusion attitude algorithm in which the gradient descent method is introduced into the traditional Mahony algorithm to find the optimal quaternion solution, and a UAV platform is built for verification. The actual flight experiments show that the method effectively realizes the fusion of the sensor data and reduces the drift error of the attitude angle. The noise measured by the UAV accelerometer is filtered out, and the dynamic error of the attitude solution is small. Moreover, the gradient descent fusion algorithm tracks sudden changes in the attitude angle curve faster than the traditional Mahony algorithm, thereby improving the performance of the UAV in actual flight.

References

1. Lu, Y. J., Chen, Y. D., Li, Y. L.: Experimental research on attitude calculation algorithm of quadrotor aircraft. Electron. Opt. Contr. 26(11), 45–50 (2019)
2. Long, Y. L., Chen, Y., Teng, X.: Attitude calculation and filtering for quadrotor aircraft. Comput. Measur. Control 24(10), 194–197, 201 (2016)
3. Chu, K. B., Zhao, S., Feng, C. T.: Mahony-EKF algorithm for unmanned aerial vehicle attitude calculation. Electronics 32(12), 12–18 (2020)
4. Yan, J. H., Guo, C. J.: SINS attitude solving algorithm based on Elman neural network. J. Electron. Measur. Instrum. 32(6), 1–5 (2018)
5. Li, X. S., Wang, Z., Shi, G., Bai, Y. R., Zheng, C. C.: Research on output attitude information correction method of magnetic sensor. Chin. J. Sci. Instrum. 40(3), 47–53 (2019)
6. Fang, G. Z., Li, F. H.: Design and implementation of attitude estimation system for small aircraft. J. Electron. Measur. Instrum. 31(3), 474–480 (2017)
7. Chen, X., Song, X. M., Zhang, Y. H.: Wearable motion capture system. Foreign Electron. Measur. Technol. 36(10), 60–63 (2017)
8. Sabatini, A. M.: Quaternion-based extended Kalman filter for determining orientation by inertial and magnetic sensing. IEEE Trans. Biomed. Eng. 53(7), 1346–1356 (2006)
9. Khamseh, H. B., Janabi-Sharifi, F.: UKF-based LQR control of a manipulating unmanned aerial vehicle. Unmanned Syst. 5(03), 131–139 (2017)
10. Wang, X., Zhang, L. J.: Attitude data fusion algorithm based on adaptive CKF. Electron. Measur. Technol. 42(3), 11–15 (2019)
11. Wang, L., Zhang, Z., Sun, P.: An adaptive complementary filter attitude estimation algorithm. Control Eng. China 22(5), 881–886 (2015)
12. Zheng, J., Wang, H. Y., Pei, B. N.: Study on attitude calculation of quadrotor aircraft in wind turbine environment. Comput. Simul. 35(6), 71–75 (2018)
13. Wang, J., Ma, J. L.: Research on attitude algorithm based on EKF and complementary filtering. Chin. J. Sens. Actuators 31(8), 881–886 (2018)
14. Lv, C. S.: An improved approach of quadrotor attitude solution based on complementary filtering. Electron. Measur. Technol. 43(18), 69–73 (2020)
15. Zhang, X., Liu, B., Wang, X. Y., Gao, R., He, F.: Intelligent home system based on OneNet cloud platform. Electron. Manuf. 356(15), 33–35 (2018)

Chapter 36

A Mixed Reality Application of Francis Turbine Based on HoloLens 2

Wenqing Wu and Chibing Gong

Abstract The Francis turbine is the most commonly used turbine today. To better learn its operating principle, Mixed Reality is adopted to build a new learning environment. This paper presents a framework for the Francis turbine based on HoloLens 2. The system has four modules: structure of the Francis turbine, principle of the Francis turbine, parts recognition of the Francis turbine, and assembly exam of the Francis turbine. The modes of student learning behaviour are classified into learning, practice and evaluation according to the different modules. The proposed learning application occupies little space, is low in cost, can be used at any time, and ensures the safety of students during the learning process.

36.1 Introduction

The Francis turbine is an inward-flow reaction turbine that combines radial and axial flow, and it is the most commonly used turbine today. When visiting and studying a Francis turbine, the equipment is large and its internal structure is hidden by the outer casing, so it is difficult for students to understand the operating principle of the turbine. Even during the overhaul period of a Francis turbine, learners can see some turbine components but are not allowed to disassemble and assemble them by hand, so it is still hard for them to improve their practical skills. Therefore, the current learning environment for the Francis turbine must be transformed, and the new, advanced Mixed Reality (MR) technology should be introduced into it.

W. Wu · C. Gong (B) Guangdong Polytechnic of Water Resources and Electrical Engineering, Guangzhou 510925, China


36.2 MR Provides a New Learning Environment for Francis Turbine

MR is a 3D visualization technology that adds virtual objects to the real world in a hybrid manner, realizing interaction between virtual objects and the real world and between virtual objects and people. MR is related to virtual reality (VR) and augmented reality (AR): on a horizontal axis with the purely real object at the left end point and the purely virtual object at the right end point, AR lies in the area near the left end and VR at the right end point, while MR occupies the continuous middle region of the line, excluding the small areas at the extreme left and right [1]. If the proportion of the real world in MR is large, the difference between MR and AR is small; conversely, if the proportion of the real world is small, MR moves toward VR. Microsoft began selling HoloLens 2, its latest Mixed Reality device, worldwide on November 7, 2019. Compared with HoloLens, HoloLens 2 is lighter and more comfortable to wear; its field of view is 52°, compared with 34° previously. It is equipped with the Snapdragon 850 processor, provides eye tracking, and offers more powerful gesture interaction. Advanced AI technology is also integrated into HoloLens 2 [2].

36.2.1 HoloLens 2 is More Comfortable for Users to Wear Compared with HoloLens

To improve the wearing comfort of HoloLens 2, Microsoft performed detailed 3D scanning of thousands of human heads of different races, genders and ages, and adopted an ergonomic design. Using carbon fiber, the battery was moved to the rear and the center of gravity shifted by 58 mm, which balances the weight of HoloLens 2 better. Although the overall weight is reduced by only 10 g, the comfort of the user experience is greatly improved.

36.2.2 HoloLens 2 is More Immersive Than HoloLens

Compared with the first generation of HoloLens, HoloLens 2 not only improves the resolution but also roughly doubles the field of view. The original HoloLens offered 720P per eye, while HoloLens 2 reaches a 2K resolution per eye. The pixels per degree increased from 23 to 47, and the display aspect ratio changed from 16:9 to 4:3, so the user's vertical field of view is greatly improved.


36.2.3 Eye-Tracking

Eye tracking is an important function of HoloLens 2. Built-in cameras identify where the user is gazing, so the device knows exactly what the user is looking at and allows interaction with virtual objects; the function works even when the user wears glasses. The value of eye tracking is evident in reading. When reading a long article with the old HoloLens, hand gestures or voice commands are used to scroll the content, and in a noisy environment the device may not receive voice commands. With the eye-tracking function of HoloLens 2, when the user reads to the bottom of the article, the device recognizes where the user is looking and automatically scrolls the content down for further reading. If the user needs to go back to content already read, they only need to stare at the top of the text and HoloLens 2 scrolls the content up without the user moving their head. The speed at which HoloLens 2 moves the text is controlled by how fast the user's gaze moves.

36.2.4 The Gesture Operation of HoloLens 2 is More Powerful Than that of HoloLens Compared with HoloLens, the gesture operation of HoloLens 2 is more natural, intuitive, and convenient. HoloLens 2 is no longer equipped with a physical controller and relies entirely on gesture recognition and voice control. Through machine learning algorithms, Microsoft can accurately identify the 25 joint points of a single hand and the direction of the palm, to identify the user’s hands well, such as finger bending, gesture movement, and precise finger positioning. At the same time, gesture operations have also increased. For example, subtle differences such as picking, grasping, and pinching can be recognized, so that objects can be moved, rotated, and the size of objects can be changed. The user can move virtual objects with one or two hands, and use more intuitive grips to move or resize objects.

36.2.5 Integrated Advanced AI Technology When the user puts on the new HoloLens 2 device for the first time, the device will automatically recognize the real shape of the user’s hand and the exact distance between the eyes. To build a hand recognition and tracking system, Microsoft’s AI team uses a special camera to record various shapes of the human hand, and then use the cloud to process and construct the 3D model of human hand shapes and movements.


36.2.6 HoloLens is Used in Various Education Fields

HoloLens provides a new learning environment and is widely used in various education fields. It has been used to teach students how to play entry-level guitar [3]. Based on HoloLens, an interactive game and a virtual museum of Huangmei Opera culture and knowledge were established to explore the application of MR in education [4]. Taking human leukocyte antigen (HLA) and aspirin as examples, the display of the three-dimensional structure of complex proteins and small molecules in HoloLens has been described [5]. The use of HoloLens to improve learning experiences and memory recall in physiology and human anatomy has also been explored [6]. Therefore, HoloLens 2 can provide a new learning environment for the Francis turbine.

36.3 A Framework of Francis Turbine Based on HoloLens 2

The purpose of this application is to provide a novel learning environment for students to learn the Francis turbine. To reach this objective, a framework of the Francis turbine based on HoloLens 2 has been designed with four modules: structure, principle, parts recognition, and assembly exam. The modes of student learning behavior are classified into learning, practice, and evaluation according to the different modules. This framework is illustrated in Fig. 36.1.

36.3.1 Structure Module of Francis Turbine In this structural module, the learner stands about 2.5 m in front of the Francis turbine and listens to the teacher’s voice teaching. When the teacher explains a part of the turbine, this part will be highlighted in a specific installation position of the whole turbine, and be presented separately at the close range of the learner, so that the student can carefully observe the structure of the part.

Fig. 36.1 A framework of Francis turbine based on HoloLens 2 (modules: structure, principle, parts recognition, assembly exam; corresponding modes: learning, practice, evaluation; MR device: HoloLens 2)


Fig. 36.2 The structure scene of Francis turbine

The learner enters the structure module and clicks the "start" button to listen to the teacher's explanation. The scene shown while the structure of the turbine runner is being explained is given in Fig. 36.2.

36.3.2 Principle Module of Francis Turbine

In the principle module, learners can choose any of the three parts of the Francis turbine (runner, water flow, and guide vane mechanism) for in-depth study. In the teaching of the water flow principle, an animation demonstrates the water flow process; the runner is animated in both radial and axial sections, showing how it is rotated by the impact of the water flow. The guide vane mechanism is more complicated, so an animation of its operation is displayed.

36.3.3 Parts Recognition Module of Francis Turbine

The previous two modules, the structure module and the principle module, belong to the learning mode of learner behavior, whereas this module belongs to the practice mode. To allow learners to study a certain part of the turbine in a more targeted manner, in the parts recognition module the learner selects a certain part to learn separately, and can move, rotate, and zoom the part, thereby strengthening cognition of that part.


36.3.4 Assembly Exam Module of Francis Turbine The final module is the assembly exam module of Francis turbine and is used to evaluate learners’ learning results. In this module, all the parts of the turbine are randomly placed around the learner, a certain part of the turbine in front of the learner is highlighted randomly, flashes, and needs to be assembled. The learner is asked to choose the appropriate part to assemble the Francis turbine. The learner observes the surrounding related components, selects the related flashing component that needs to be installed, and drags and drops it to the relevant position of the turbine. If the installation is successful, the next part will be displayed randomly and wait for the learner to install it until the end of the installation; if the component selection is wrong, the installation fails and an additional ten seconds penalty time will be added to the installation time. After the final assembly of the whole turbine is finished, the total time of installation can be used to evaluate the effect of learning.
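The evaluation rule of this module (total assembly time plus a fixed penalty for every wrong part selection) can be summarized in a short sketch. The Python snippet below is only an illustrative model of the scoring logic, not the actual Unity/C# implementation of the application; the ten-second penalty follows the description above, while the part names and the picking callback are hypothetical.

```python
import random
import time

PENALTY_SECONDS = 10  # penalty added for each wrong part selection


def run_assembly_exam(parts, pick_part):
    """Simulate the assembly exam: parts are requested in random order,
    wrong selections add a fixed time penalty, and the total time is the score."""
    order = random.sample(parts, len(parts))
    start = time.monotonic()
    penalty = 0.0
    for target in order:                        # the highlighted, flashing part to install next
        while pick_part(target) != target:      # learner picked the wrong component
            penalty += PENALTY_SECONDS
    return (time.monotonic() - start) + penalty


# Hypothetical usage: a "perfect" learner who always picks the requested part.
if __name__ == "__main__":
    turbine_parts = ["runner", "guide vane", "spiral case", "draft tube"]
    total = run_assembly_exam(turbine_parts, pick_part=lambda target: target)
    print(f"total evaluation time: {total:.1f} s")
```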

36.4 Application Development of Francis Turbine Based on HoloLens 2 The application development of Francis turbine based on HoloLens 2 includes three main steps: design of instructional content, choices of hardware and software, and implementation and testing of the application [7, 8].

36.4.1 Design of Instructional Content

The design of instructional content prepares the details of the MR learning environment, and its purpose is to achieve evaluable instructional results. The analysis, design, development, implementation, and evaluation (ADDIE) model, a well-accepted model for the design of instructional content, is adopted for this application of the Francis turbine based on HoloLens 2.

36.4.2 Choices of Hardware and Software The suitable hardware should be selected according to various application goals, such as costs of equipment, immersion, and interactivity. In this application system, Microsoft HoloLens 2 was chosen as the hardware device because it has a great advantage over other headsets in terms of powerful gesture commands and voice commands.


Autodesk 3DStudio Max is a widely used software package for 3D modeling, and it was used to build the 3D model of the Francis turbine. Unity, as the development engine for HoloLens 2 applications, can be used to develop various applications based on HoloLens 2, and it also allows developers to use a free version in their work.

36.4.3 Implementation and Testing of the Application

First, Autodesk 3DStudio Max software was used to establish the 3D model of the Francis turbine. In the modeling process, the number of faces of the model should be kept low. The 3D model in FBX format is imported into the Unity project, and the application can be exported directly to the HoloLens 2 device to check whether the model is displayed normally. The C# language is adopted for programming in Unity 2019.2. To play the teacher's voice, display related components, and run related animations at the specified times, the Timeline editor in Unity is used. MRTK 2.5.0 was imported into Unity to implement the interactive functions of this learning application for HoloLens 2. The MRTK is jointly developed by Microsoft and the HoloLens developer community and makes HoloLens 2 development easier. Finally, the developed application was packaged onto the HoloLens 2 device for testing. Pilot users check whether the application achieves the expected goals; otherwise, necessary modifications are made. The running interface of the assembly exam scene of the Francis turbine is shown in Fig. 36.3.

Fig. 36.3 The assembly exam scene of Francis turbine


36.5 Conclusions To better understand the operating principle of the Francis turbine, MR was adopted to build a new learning environment. The application of Francis turbine based on HoloLens 2 was developed. The proposed learning application occupies a small space, is low in cost, can be used anytime, and ensures the safety of students in the learning process. The system will be put into practical learning for students in the future and will be continuously enhanced according to the feedback result.

References

1. Milgram, P., Kishino, F.: A taxonomy of mixed reality visual displays. IEICE Trans. Inf. Syst. 77(12), 1321–1329 (1994)
2. Palmer, D.: Hands-on with Microsoft HoloLens 2: More comfortable, more intuitive and more potential. https://www.zdnet.com/article/hands-on-with-microsoft-hololens-2-more-comfortable-more-intuitive-and-more-potential/. Accessed 20 Apr 2021
3. Torres, C., Figueroa, P.: Learning how to play a Guitar with the HoloLens: a case study. In: 2018 XLIV Latin American Computer Conference (CLEI), pp. 606–611 (2018)
4. Wang, G., et al.: Application of mixed reality technology in education with the case of a Huangmei opera cultural education system. In: 2020 IEEE 2nd International Conference on Computer Science and Educational Informatization (CSEI), pp. 301–305 (2020)
5. Hoffman, M., Provance, J.: Visualization of molecular structures using HoloLens-based augmented reality. AMIA Summits Transl. Sci. Proc. 68–74 (2017)
6. Chen, C., et al.: Using Microsoft HoloLens to improve memory recall in anatomy and physiology: a pilot study to examine the efficacy of using augmented reality in education. J. Educ. Technol. Dev. Exch. (JETDE) 12(1), 17–31 (2019)
7. Vergara, D., et al.: On the design of virtual reality learning environments in engineering. Multimodal Technol. Interact. 11, 1–12 (2017)
8. Gong, C.: Development of a HoloLens mixed reality training system for drop-out fuse operation. In: Emerging Trends in Intelligent and Interactive Systems and Applications, IISA 2020, Advances in Intelligent Systems and Computing, vol. 1304, pp. 469–476 (2021)

Chapter 37

Design of Intelligent Lighting Controller Based on Fuzzy Control

Ting Zheng, Yan Peng, Yang Yang, and Yimao Liu

Abstract Aiming at the problems of excessive energy waste and serious light pollution in traditional lighting system, this paper proposes an intelligent lighting control system based on fuzzy logic control algorithm and ZigBee wireless communication technology. The system uses the illuminance sensor to detect the environmental illuminance information in real time, and provides the data source for the fuzzy logic controller. At the same time, the collected data information is uploaded to the monitoring center through ZigBee communication technology. In this paper, the fuzzy controller of intelligent lighting system is designed, and the corresponding fuzzy control rule table and fuzzy control query table are obtained. By querying the fuzzy control table, different PWM duty cycles are output under different illumination errors, so as to control the brightness adjustment of LED lamp. The simulation results show that the algorithm basically achieves the expected effect, and has good control ability.

T. Zheng (B) · Y. Yang · Y. Liu
College of Automation and Information Engineering, Sichuan University of Science and Engineering, Yibin 644000, China
Y. Peng
College of Computer Science, Sichuan University of Science and Engineering, Yibin 644000, China
T. Zheng · Y. Yang · Y. Liu
Sichuan Key Laboratory of Artificial Intelligence, Yibin, China
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022
K. Nakamatsu et al. (eds.), Advanced Intelligent Technologies for Industry, Smart Innovation, Systems and Technologies 285, https://doi.org/10.1007/978-981-16-9735-7_37

37.1 Introduction

With the progress of society and the development of science and technology, energy saving, environmental protection, and intelligent control will be a development trend of intelligent lighting. Under this demand, relevant scholars have made great achievements [1]. Tahan et al. proposed an LED driver that can drive multiple strings in series, which can realize a current balance error of less than 1% and a dimming range between 4 and 100% [2]. Li et al. proposed a single-inductor multi-output LED

driver, which has a dimming accuracy of 0.8% and can achieve full-range dimming [3]. Zheng Qingliang et al. combined analog and pulse width modulation (PWM) dimming, designed an LED drive power supply and combined dimming circuit, and showed experimentally that the dimming efficiency was above 86% and the ripple was below 4% [4]. Lin Haijun et al. proposed an adaptive control method for LED street light intensity based on a neural network, so that the brightness of an LED street lamp is automatically adjusted with changes in the ambient illumination; field tests show that the maximum energy saving of LED street lamps with this method is more than 20% [5]. Although some achievements have been made, illumination and environmental changes are time-varying and nonlinear, so a mathematical model is very difficult to establish, which makes it hard for many classical control algorithms to meet the functional requirements. In this paper, through the study of fuzzy control theory and digital signal processing technology, an intelligent control algorithm based on fuzzy logic is proposed to realize automatic control of the lighting system. The fuzzy control algorithm is selected as the control algorithm for intelligent lighting dimming, the fuzzy controller of the intelligent lighting system is designed and implemented, and the luminance of the lamps is automatically controlled according to the intensity of the ambient illumination. The experimental results show that the algorithm enables the lighting system to adjust its brightness adaptively. The design not only provides a more user-friendly service but also reduces excessive energy waste to a certain extent.

37.2 Overall System Architecture

At present, the average luminous efficiency of LED lamps is about 130 lm/W, far more than the 20 lm/W of incandescent lamps and the 50–65 lm/W of compact fluorescent lamps [4]. Since the ultimate goal of the controller is to reduce excessive energy loss, this paper selects the LED lamp as the light source. In order to meet the requirements of indoor lighting while reducing power consumption as much as possible, this paper designs an intelligent lighting system with adaptive dimming, which uses a fuzzy logic algorithm to control the lighting. Combined with ZigBee wireless communication technology, the system dispenses with wired control and uses the ZigBee ad hoc network capability to support remote control of the lighting. The overall framework of the intelligent lighting control system designed in this paper is shown in Fig. 37.1; it is mainly composed of user terminal equipment, a ZigBee WiFi wireless communication module, intelligent lamp nodes, a sensor module, and so on. It can be seen from Fig. 37.1 that the system is a single-input single-output nonlinear system in which the lighting brightness must be controlled intelligently. Its working process is as follows: the lower computer uses an illuminance detection module to monitor the illuminance of natural light in real time.


Fig. 37.1 Structure diagram of intelligent lighting system

The microprocessor automatically converts the measured value into a digital signal and uploads it to the upper computer monitoring center by wireless communication. The illuminance information is displayed in the data management center in real time, and the output required by the current system is inferred according to the fuzzy control algorithm. When the natural light intensity is lower than the preset value, the system automatically turns on the light and supplements the illumination.

37.2.1 Functional Requirements

The system uses wireless communication technology, and the lighting is remotely controlled from the upper computer; the user can change the lighting effect by adjusting the function mode on the monitoring interface, which makes the system more convenient to use. The required functions of each module are as follows:

(1) User terminal equipment: through the upper computer software, remotely control the brightness of the lamps, display in real time the ambient light information detected by the sensor, store the fuzzy control query table in the upper computer interface, and give the system output by querying the control table.
(2) ZigBee WiFi gateway: store the collected data in the database, process and analyze the illuminance information by querying the output table of the fuzzy control rules, and feed the output results back to the intelligent lamp node module to control the brightness adjustment of the LED lights.
(3) Intelligent lamp node: after receiving the dimming instruction, the driving circuit adjusts the brightness of the lamp through PWM dimming, automatically outputting different duty cycles for different illuminance errors.
(4) Sensor module: the illuminance sensor collects ambient illuminance data in real time and uploads the data to the upper computer interface through the gateway, providing the data source for the fuzzy algorithm.

37.2.2 Workflow

This paper focuses on the design of the fuzzy controller; the functions of the other modules of the intelligent lighting system are only briefly introduced. The workflow is shown in Fig. 37.2. The implementation of the controller is based on expert language control rules. Firstly, the intensity of natural light is monitored in real time by the illuminance sensor, and the detected illuminance value is uploaded to the monitoring interface of the upper computer. The system converts the detected value into fuzzy variables, and then, according to the control rules, fuzzy inference outputs the corresponding control quantity. Finally, the output value is converted through the defuzzification interface into a value the actuator can accept, so that the system can adjust the brightness of the light adaptively according to the natural light intensity.

Fig. 37.2 Flow chart of fuzzy control (steps include: system initialization; illuminance sensor collects information; calculation of the deviation E and EC; clipping to the upper/lower limits if exceeded; quantization factor correction; fuzzification of E and EC; query of the control table; proportional factor correction of the control quantity; limiting of the output U; sending the control quantity to the actuator)
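As a rough illustration of this workflow, a minimal control-loop sketch is given below. It is a simplified Python model under assumed values; the sensor reading and the table lookup are stubs standing in for the ZigBee upload and the fuzzy control query table described later, and the numbers in the example call are assumptions for illustration only.

```python
def read_illuminance_lx():
    """Stub for the illuminance value uploaded by the sensor node via ZigBee (assumed)."""
    return 450.0


def fuzzy_lookup(e, ec):
    """Stub for the fuzzy control query table; returns a PWM duty cycle in percent (assumed)."""
    return 40.0


def control_step(setpoint_lx, prev_error, period_s):
    """One pass of the workflow: measure, form the deviation and its change rate
    (formulas (37.1) and (37.2)), query the control table, and return the PWM duty."""
    measured = read_illuminance_lx()
    e = measured - setpoint_lx            # illuminance deviation e(k)
    ec = (e - prev_error) / period_s      # deviation change rate ec(k)
    return fuzzy_lookup(e, ec), e


# Hypothetical call with an assumed set value, previous error, and sampling period.
duty, last_e = control_step(setpoint_lx=200.0, prev_error=250.0, period_s=1.0)
print(duty)   # -> 40.0 (stub value)
```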

37.3 Intelligent Lighting Controller Design

A fuzzy control system can realize automatic control of a system. This control mode integrates fuzzy mathematics, fuzzy language, and fuzzy logic, and is a closed-loop numerical control system built on a computer. Its principle is that the status of the controlled object is converted into fuzzy quantities described in natural language according to human language control rules; the fuzzy value of the output control quantity is obtained by fuzzy reasoning, and this fuzzy value is then converted, through the defuzzification interface, into a precise value that the actuator can accept [6]. The fuzzy control algorithm is relatively simple and fast to execute, and it is suitable for complex systems: it does not depend on a mathematical model of the controlled object and is often used in nonlinear, time-delay, and time-varying systems [6].

37.3.1 Structure of Fuzzy Controller

The working principle of a fuzzy controller is that the input digital signal is transformed into a fuzzy quantity through the fuzzification interface, the control experience of human experts is then applied through fuzzy rule reasoning, and the result is finally transformed into a realizable control output, so as to control the controlled object. In this paper, a two-dimensional fuzzy controller is used. Compared with a one-dimensional controller, it has one more input component, so its dynamic characteristics are better and it can better reflect the dynamic changes of the output variable, giving a better control effect. The structure of the adaptive dimming intelligent lighting controller based on this idea is shown in Fig. 37.3, where e is the illuminance deviation and ec is the deviation change rate. The actual detected ambient illuminance is compared with the preset value needed to turn on the light, and the actual output brightness u is obtained through fuzzy rule reasoning. The fuzzy control algorithm is used to correct the error value, thus providing the controlled object with better static and dynamic performance.

Fig. 37.3 Schematic diagram of fuzzy control (the setting value and the measured illuminance from the light intensity sensor form the deviation e and its change rate ec, which are fuzzified into E and EC, processed by fuzzy rule reasoning to give U, and defuzzified into the output value u sent to the lamps)

37.3.2 Identify Variables and Membership Functions

After the light sensor transmits the actual light information to the fuzzy controller, the controller turns the light on or off and dims it. In simple terms, the ambient illuminance value (unit: lx) is taken as the input quantity, and the rate of illuminance change is calculated from the illuminance values, so as to decide whether to turn on the light and how to dim it. The illuminance deviation e is given by formula (37.1):

e(k) = θt(k) − θ(k)  (37.1)

where θt(k) is the actual illuminance of the external environment and θ(k) is the set illuminance value. The illuminance change rate ec is given by formula (37.2):

ec(k) = [e(k) − e(k − 1)]/T  (37.2)

where T is the sampling period. According to the road illumination standard [7], it is basically determined that when the ambient illuminance in the morning is greater than 200 lx, the lighting is stopped, and when the ambient illuminance at dusk is lower than 300 lx, the lighting is turned on. The basic universe of the illuminance deviation e is Re = [0, 500], the quantized universe of the fuzzy language variable E is Xe = {0, 1, 2, 3, 4, 5, 6}, the fuzzy set of E is {SS, SM, SB, M, BS, BM, BB}, and the quantization factor is Ke = 6/500. The basic universe is transformed into the quantized universe by formula (37.3):

n = [e × Ke]  (n ∈ Xe, e ∈ Re)  (37.3)

where [ ] represents the rounding operator.


In this paper, the triangular membership function is used because of its simple form, high computational efficiency, and good control performance. The fuzzy subsets satisfy formula (37.4):

u(x) = (x − a)/(b − a) for a ≤ x < b;  u(x) = (x − c)/(b − c) for b ≤ x < c;  u(x) = 0 otherwise  (37.4)

Similarly, the basic universe of the deviation change rate ec is Rec = [−50, 50], the quantized universe of the fuzzy language variable EC is Xec = {−3, −2, −1, 0, 1, 2, 3}, the fuzzy set of EC is {NB, NM, NS, ZO, PS, PM, PB}, and the quantization factor is Kec = 6/100. The mapping from the basic universe to the quantized universe is given by formula (37.5):

n = [ec × Kec]  (n ∈ Xec, ec ∈ Rec)  (37.5)

Finally, the basic universe of the system output brightness u is Ru = [0, 100%], the quantized universe is Xu = {0, 1, 2, 3, 4, 5}, the scale factor is Ku = 100%/5, and the fuzzy set of U is {ZO, SS, SM, M, BS, BB}. The mapping relation is given by formula (37.6):

ru = Ku × n  (n ∈ Xu, ru ∈ Ru)  (37.6)
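As a numerical illustration of formulas (37.1)–(37.6), the short Python sketch below implements the quantization step and the triangular membership function, using the quantization factors Ke = 6/500 and Kec = 6/100 and the scale factor Ku = 100%/5 given above; the membership-function breakpoints and the control quantity used in the example call are assumptions for illustration only.

```python
def quantize(value, factor, lo, hi):
    """Formulas (37.3)/(37.5): scale by the quantization factor, round, and keep
    the result inside the quantized universe [lo, hi]."""
    n = round(value * factor)
    return max(lo, min(hi, n))


def triangular(x, a, b, c):
    """Triangular membership function of formula (37.4), peaking at b."""
    if a <= x < b:
        return (x - a) / (b - a)
    if b <= x < c:
        return (x - c) / (b - c)   # equals 1 at x = b and falls to 0 at x = c
    return 0.0


K_E, K_EC, K_U = 6 / 500, 6 / 100, 100 / 5   # quantization and scale factors

e, ec = 250.0, 0.0                     # example deviation (lx) and change rate (lx/s)
E = quantize(e, K_E, 0, 6)             # -> 3
EC = quantize(ec, K_EC, -3, 3)         # -> 0
u_level = 2                            # assumed control quantity from the rule base
print(E, EC, triangular(E, 2, 3, 4), K_U * u_level)   # 3 0 1.0 40.0
```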

37.3.3 Establish Fuzzy Rules

Fuzzy control rules are generally established from empirical induction and verified by a large number of experiments [8]. When the measured ambient illuminance is greater than the set value, the external light is sufficient, so the lamp brightness should be suppressed; if the measured ambient illuminance is less than the set value and the illuminance change rate is zero, the lamp brightness should be increased. Based on these considerations and combined with analysis of the actual situation, the fuzzy reasoning rules can be written in the form "if E is BB and EC is NB, then U is M; if E is SM and EC is ZO, then U is SS." The resulting fuzzy rule table is shown in Table 37.1.

37.4 MATLAB Simulation and Result Analysis

The fuzzy controller is established in MATLAB. The controller takes E and EC as inputs and U as the control output. The membership function diagrams shown in Fig. 37.4 are obtained by writing the agreed fuzzy subset assignment tables for the input and output variables into the fuzzy controller.


Table 37.1 Fuzzy control rules table (output U)

EC \ E   SS    SM    SB    M     BS    BM    BB
NB       ZO    ZO    SS    SS    SM    SM    M
NM       ZO    ZO    SS    SS    SM    M     M
NS       ZO    SS    SS    SM    SM    M     BS
ZO       SS    SS    SS    SM    M     BS    BS
PS       SS    SS    SM    SM    M     BS    BB
PM       SS    SS    SM    M     BS    BB    BB
PB       SS    SM    M     BS    BS    BB    BB

Fig. 37.4 Membership function diagrams of E, EC, and U ((a) input variable E; (b) input variable EC; (c) output variable U)

They represent the membership function diagrams of the input variable E, the input change rate EC, and the output variable U, respectively. At the same time, the 49 fuzzy control rules of Table 37.1 are entered into the fuzzy controller structure established in MATLAB, giving the fuzzy controller rule viewer shown in Fig. 37.5; the spatial distribution of the fuzzy control output is shown in Fig. 37.6. The illuminance fuzzy controller can be verified in the rule viewer window of Fig. 37.5: when the quantized illuminance deviation E = 3 and the quantized change rate EC = 0 are input, the control quantity of the system is 2. The mapping from the quantized universe back to the basic universe is completed with the scale factor, and the PWM duty ratio is obtained by formula (37.7):

Ru = Ku × Xu  (37.7)

That means that when an illuminance deviation E = 250 lx and an illuminance change rate EC = 0 lx/s are detected, the system outputs a PWM duty ratio U = 40%. The control outputs are summarized in Table 37.2. The output table of fuzzy control is stored in the upper computer software, and the corresponding control quantity is obtained for different input quantities; the corresponding PWM duty cycle is read from the fuzzy query table. The output value is sent to the intelligent lamp node through the coordinator for data analysis and processing, so as to achieve remote control of the light brightness.

Fig. 37.5 Fuzzy controller rule viewer

Fig. 37.6 Spatial distribution of fuzzy control

37.5 Comparison of Simulink Modeling

The simulation model is established using the Simulink software package, and the fuzzy control algorithm is compared with the traditional PID control algorithm.


Table 37.2 Output table of fuzzy control (PWM duty cycle U)

EC \ E   0        1        2        3        4        5        6
−3       3.62%    3.62%    20%      20%      40%      40%      60%
−2       3.62%    3.62%    20%      20%      40%      60%      60%
−1       3.62%    20%      20%      40%      40%      60%      80%
0        20%      20%      20%      40%      60%      80%      80%
1        20%      20%      40%      40%      60%      80%      94.73%
2        20%      20%      40%      60%      80%      94.73%   94.73%
3        20%      40%      60%      80%      80%      94.73%   94.73%
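Querying Table 37.2 amounts to indexing a 7 × 7 array with the quantized deviation E ∈ {0, …, 6} and change rate EC ∈ {−3, …, 3}. A minimal Python sketch is shown below; the duty-cycle values are transcribed from Table 37.2 and the quantization factors are those of Sect. 37.3.2, so for E = 250 lx and EC = 0 lx/s the query reproduces the 40% duty cycle of the worked example.

```python
# Rows: EC = -3 ... 3, columns: E = 0 ... 6 (duty cycles from Table 37.2, in %).
DUTY_TABLE = [
    [3.62, 3.62, 20, 20, 40, 40, 60],
    [3.62, 3.62, 20, 20, 40, 60, 60],
    [3.62, 20, 20, 40, 40, 60, 80],
    [20, 20, 20, 40, 60, 80, 80],
    [20, 20, 40, 40, 60, 80, 94.73],
    [20, 20, 40, 60, 80, 94.73, 94.73],
    [20, 40, 60, 80, 80, 94.73, 94.73],
]


def pwm_duty(e_lx, ec_lx_per_s):
    """Quantize the inputs and look up the PWM duty cycle (%) in the stored table."""
    E = max(0, min(6, round(e_lx * 6 / 500)))
    EC = max(-3, min(3, round(ec_lx_per_s * 6 / 100)))
    return DUTY_TABLE[EC + 3][E]


print(pwm_duty(250, 0))   # -> 40, matching the example in the text
```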

The simulation model is built as shown in Fig. 37.7, which gives the fuzzy control block diagram and the PID control block diagram. The PID parameters are tuned by trial and error [9]; the effect is satisfactory when KP = 0.04, Ki = 0.03, and Kd = 1.2. The simulation results are shown in Fig. 37.8, where green is the input signal, purple is the result of the PID control algorithm, and yellow is the result of the fuzzy control algorithm. The waveform of the PID control algorithm is relatively stable, but its peak value is relatively large, its settling time is long, and its overshoot is large. Although the waveform of the fuzzy control algorithm is less smooth, its peak value is smaller, its overshoot is smaller, and its settling time is shorter, so it has a better control effect.

Fig. 37.7 Simulink simulation diagram


Fig. 37.8 Simulation results
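For comparison, the PID controller used in the Simulink benchmark can be written in discrete positional form as sketched below. The gains KP = 0.04, Ki = 0.03, Kd = 1.2 are the ones quoted above, while the sampling period and the first-order plant used to exercise the controller are assumptions made purely for illustration and are not the Simulink model of the paper.

```python
KP, KI, KD = 0.04, 0.03, 1.2     # gains obtained by trial and error in the text


def make_pid(dt):
    """Return a discrete positional PID controller with sampling period dt (assumed)."""
    state = {"integral": 0.0, "prev_error": 0.0}

    def step(setpoint, measurement):
        error = setpoint - measurement
        state["integral"] += error * dt
        derivative = (error - state["prev_error"]) / dt
        state["prev_error"] = error
        return KP * error + KI * state["integral"] + KD * derivative

    return step


# Hypothetical first-order plant, used only to exercise the controller.
pid = make_pid(dt=0.1)
y = 0.0
for _ in range(50):
    u = pid(setpoint=1.0, measurement=y)
    y += 0.1 * (u - y)           # assumed plant dynamics, not from the paper
print(round(y, 3))
```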

37.6 Conclusion

This paper presents an intelligent lighting control strategy based on fuzzy control, aimed at problems of traditional lighting such as lights being left on for long periods, low control efficiency, and a single fixed lighting effect. By querying the fuzzy control output table, the system obtains the optimal PWM duty ratio for the current illumination environment, so that the lighting system can automatically turn lights on and off and flexibly adjust the brightness of the lamps according to the external illumination, achieving a more intelligent and energy-saving effect. The control algorithm in this paper still has some shortcomings; the advantages of the fuzzy control algorithm are not always obvious, but after repeated tuning a good control effect can still be obtained.

References 1. Wang, H.F.: Changes in world energy supply and demand from world energy statistical yearbook. Energy Res. Util. 4, 8–9 (2013) 2. Tahan, M., Hu, T.: Multiple string LED driver with flexible and high performance PWM dimming control. IEEE Trans. Power Electron. 12, 1–1 (2017) 3. Li, S., Guo, Y., Tan, S.C., et al.: An off-line single-inductor multiple-output LED driver with high dimming precision and full dimming range. IEEE Trans. Power Electron. 32(6), 4716–4727 (2017) 4. Zheng, Q.L., Lin, W.M.: A dimming strategy and implementation of LED drive power supply combination. J. Power Supply 4, 71–75 (2013) 5. Lin, H.J., Lai, X.Q., Lan, H.: Adaptive control method of LED lighting intensity. J. Electron. Meas. Instrum. 30(6), 887 (2016) 6. Yao, S., Jiang, N. P.: Simulation study of PID controller based on fuzzy control theory. Comput. Syst. Appl. 20(10), 125–128 (2011) (in Chinese)


7. Zhao, Z.M., Li, Q.Q., Wang, R.L.: Application of wireless sensor network and fuzzy control in lighting system. J. Light. Eng. 25(2), 102–106 (2014) 8. Wu, Y., Jiang, J.G.: Energy-saving lighting control system based on fuzzy control. Ind. Mining Autom. 6, 84–87 (2005) 9. Jin, Q., Deng, Z. J.: PID control principle and parameter setting method. J. Chongqing Inst. Technol. (Natural Science edition) 5, 91–94 (2008)

Chapter 38

Remote Sensing Monitoring of Cyanobacteria Blooms in Dianchi Lake Based on FAI Method and Meteorological Impact Analysis

Wenrui Han

Abstract Cyanobacteria bloom is a natural phenomenon in which algae multiply and congregate in a water body, and it is to some extent the result of various meteorological factors. Based on Landsat remote sensing data from 1999 to 2018, the floating algae index (FAI) method was used to identify and extract cyanobacteria bloom information in Dianchi Lake; the FAI values were then classified according to the bloom intensity classification used in the relevant literature, giving the temporal and spatial distribution of the blooms. Combined with data from the Kunming meteorological monitoring station and analyzed with SPSS software, the relationship between the temporal and spatial distribution of cyanobacteria blooms in Dianchi Lake and meteorological factors was obtained. The results show that during the 20 years from 1999 to 2018, blooms generally occurred in the first half of the year in northern Caohai Lake and in the northeastern, central-northern, and coastal parts of the southern outer sea; in the second half of the year, the bloom spread from the northern part of the outer sea toward the lake center, and the degree of bloom in Caohai Lake also increased, especially from June to August. The outbreak of cyanobacteria in Dianchi Lake was found to be positively correlated with temperature and precipitation, and negatively correlated with sunshine hours.

W. Han (B)
College of Big Data and Intelligent Engineering, Southwest Forestry University, Kunming 650224, Yunnan, China
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022
K. Nakamatsu et al. (eds.), Advanced Intelligent Technologies for Industry, Smart Innovation, Systems and Technologies 285, https://doi.org/10.1007/978-981-16-9735-7_38

38.1 Introduction

Water ecology and the water environment are problems that have received continuous attention and research, among which cyanobacteria bloom is the most serious. Considering the water areas where blooms occur, the probability of a bloom and the difficulty of treatment increase successively from oceans to rivers to lakes. Lakes are frequently prone to cyanobacteria blooms because their water exchange is not as good as that of oceans and rivers. In recent years, under the

research and exploration of a large number of scholars [1–12], the lake water environment in Taihu Lake, Chaohu Lake, and other plain areas has been much improved, and outbreaks of cyanobacteria blooms have been effectively curbed. While studying lakes in plain areas, researchers have also turned their attention to plateau lakes. Compared with plain lakes, plateau lakes suffer more severe blooms that are more difficult to control because of their high altitude and slow water exchange. Zhang Jiao et al. [13] compared the data used in bloom studies of Erhai Lake, a plateau lake, and Taihu Lake, a typical lake of the Yangtze River Basin, and pointed out that MODIS data, with its high temporal resolution and low spatial resolution, is more suitable for monitoring a large lake such as Taihu Lake over short time ranges, whereas for small and medium-sized lakes such as Erhai Lake, Landsat data, with its higher spatial resolution, is generally used for monitoring over long periods. As a plateau lake, Dianchi Lake is also the largest freshwater lake on the Yunnan-Guizhou Plateau, and many scholars have continued to study it. Over the years, through the efforts of scholars such as Yu Dong, Lu Weikun, and He Yunling, the monitoring of cyanobacteria blooms in Dianchi Lake was first established on the basis of direct observation [14]. Later, widely used cyanobacteria extraction methods were established by analyzing the observed data with a variety of approaches [15–19], and finally a cyanobacteria disaster early warning platform was established for Dianchi Lake [20]. However, the internal relationship between meteorological factors, the most important factors causing cyanobacteria blooms, and bloom outbreaks in plateau lake areas has not been studied thoroughly. Based on previous work on cyanobacteria blooms in Dianchi Lake and combined with a large amount of meteorological data, this paper analyzes the occurrence of cyanobacteria blooms in Dianchi Lake over the past 20 years, year by year and month by month. In addition to identifying the scale and time period of the blooms, the relationship between bloom occurrence and meteorological factors is also explored in depth.

38.2 Data Sources and Research Methods

38.2.1 Overview of the Study Area

Dianchi Lake (24°23′ ~ 26°22′ N, 102°10′ ~ 103°40′ E), called Yunnan Nanze in ancient times and also known as Kunming Lake, is located in the middle of the Yunnan-Guizhou Plateau and is the largest freshwater lake in Yunnan Province, known as the "Pearl of the Plateau." The lake lies in the southwest of Kunming City, Yunnan Province, at an altitude of about 1886 m, with an area of about 330 km² and an average depth of about 5 m. The lake body lies entirely within the jurisdiction of Kunming City. The northern part is the Kunming urban area, the northwestern part is Kunming Xishan District, the northeastern part is Kunming Guandu District, the eastern part is


Kunming Chenggong District, and the southern part is Kunming Jinning County. The Lake is composed of the Caohai Lake in the north and the outer sea in the south, separated by the Haigeng Dam in the middle. The area of the Caohai Lake is about 10 km2 , while the outer sea is the main body of Dianchi Lake, with an area of about 289 km2 .

38.2.2 The Data Source

➀ Remote sensing data: In this paper, Landsat image products covering the Dianchi Lake area (path 129, row 43), downloaded from the Geospatial Data Cloud (www.gscloud.cn), were selected as the data source. Images with heavy cloud cover were eliminated by visual inspection, and a total of 76 usable images were obtained for 1999 to 2018 (see Table 38.1). ➁ Meteorological data: These were downloaded from the China Meteorological Science Data Sharing Service Network and include monthly mean temperature (°C), monthly sunshine hours (h), monthly precipitation (mm), monthly maximum wind speed (m/s), and the wind direction of the maximum wind speed.

38.3 Research Method

In this paper, the phenomenon of cyanobacteria blooms in Dianchi Lake is understood in the sense of Cao Qiaoli et al.: "the reproductive activity of some algae causes obvious changes in water color, forming thin or thick green or other colored algal floats on the water surface" [21]. This view has been accepted by most literature and media related to remote sensing monitoring of water blooms. For remote sensing images, this definition helps to distinguish the spectral features of bloom areas from bloom-free water, making the remote sensing monitoring results more accurate and reliable. Thus, the method of Zhang Jiao et al. [22] was used to study the spatial and temporal distribution of water blooms in Dianchi Lake from remote sensing. The ENVI remote sensing image processing software is used for preprocessing such as radiometric calibration, and the water of Dianchi Lake is then extracted by the NDWI method. Atmospheric correction is carried out for the extracted water body, and the FAI of the water body is calculated with the band arithmetic tool. Finally, the calculated FAI values are classified into three levels according to the classification method of Zhang Jiao et al. [22]: the bloom risk increases successively from level I (−0.006 < FAI ≤ 0) to level II (0 < FAI ≤ 0.006) to level III (FAI > 0.006), which is used to determine the spatial distribution of the bloom. The collected meteorological data of the Dianchi Lake region from 1999 to 2018 were then used to analyze the relationship between the temporal and spatial distribution of the blooms in this plateau lake and meteorological factors, in combination with previous analyses of the meteorology-cyanobacteria relationship in lakes of the Yangtze River Basin by a large number of scholars [8, 12]. The specific process is shown in Fig. 38.1.
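A minimal sketch of the FAI computation and classification step is given below. The band combination follows the commonly used floating algae index definition (near-infrared reflectance minus a red-to-shortwave-infrared baseline); the band wavelengths are assumed values for Landsat-like data, the class thresholds (−0.006, 0, 0.006) are those stated above, and treating pixels with FAI ≤ −0.006 as normal water is an assumption.

```python
# Assumed central wavelengths (nm) for Landsat-like red, NIR, and SWIR bands.
L_RED, L_NIR, L_SWIR = 655.0, 865.0, 1610.0


def fai(r_red, r_nir, r_swir):
    """Floating algae index: NIR reflectance minus the red-SWIR baseline at the NIR wavelength."""
    baseline = r_red + (r_swir - r_red) * (L_NIR - L_RED) / (L_SWIR - L_RED)
    return r_nir - baseline


def bloom_risk_level(fai_value):
    """Risk levels from the text: I (-0.006, 0], II (0, 0.006], III (> 0.006)."""
    if fai_value > 0.006:
        return 3
    if fai_value > 0:
        return 2
    if fai_value > -0.006:
        return 1
    return 0      # treated as normal water (assumption)


# Toy reflectance values (assumed) for a single water pixel.
value = fai(r_red=0.03, r_nir=0.05, r_swir=0.02)
print(value, bloom_risk_level(value))
```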

Table 38.1 Time distribution of remote sensing image acquisition (number of usable Landsat images in each month of each year from 1999 to 2018; 76 images in total)



Fig. 38.1 Research method flow chart


38.4 Results and Analysis

38.4.1 Time and Space Distribution of Cyanobacteria Blooms in Dianchi Lake

38.4.1.1 Time Distribution

According to the monitoring results, cyanobacteria blooms in Dianchi Lake are concentrated from late spring to mid-autumn, that is, from May to October every year. The accumulation of cyanobacteria generally begins in May, when the 20-year monthly average bloom area reaches 130 km². The most serious blooms occur from June to September, with 20-year monthly averages above 250 km². The blooms then generally begin to subside in November (see Fig. 38.2).


Fig. 38.2 The average area of cyanobacteria blooms in Dianchi Lake from 1999 to 2018 (y-axis: average monthly area)

Fig. 38.3 Annual average area of cyanobacteria blooms in Dianchi Lake from 1999 to 2018 (y-axis: average annual area)

The interannual variation observation of 20 years shows that the annual average value of cyanobacteria bloom in Dianchi Lake is basically stable before 2013, and the average annual area is above 50 km2 . However, after 2013, due to the government and society’s attention and protection of plateau lakes, cyanobacteria blooms in Dianchi Lake decreased significantly, with an average annual area of less than 50 km2 (see Fig. 38.3).

38.4.1.2 Spatial Distribution

According to the statistical results (see Fig. 38.4), water blooms can be seen all year round in the northeast of Caohai Lake. Caohai Lake, the areas near it, and the coastal areas of the lake have weaker water mobility than the central area and are closer to human living and activity areas, so pollutants accumulate there more easily and are not readily flushed out. Therefore, every bloom starts from these three parts and accumulates.


Fig. 38.4 FAI method for extracting cyanobacteria blooms from Dianchi Lake (partial display; legend: normal water area, risk level I, risk level II, risk level III)

As the bloom increases, it gradually invades the central area of Dianchi Lake from the outer sea, especially from the north. A survey of the changes of cyanobacteria blooms in Dianchi Lake over the past 20 years shows that the blooms of almost every year conform to this rule.

38.4.2 Mechanism Analysis of Meteorological Factors Influencing the Occurrence and Distribution of Cyanobacteria Blooms in Dianchi Lake

38.4.2.1 Mechanism Analysis of Temperature Factors Affecting the Occurrence and Distribution of Cyanobacteria Bloom

According to Pearson correlation analysis of the 20-year monthly average temperature and bloom area with SPSS software, there is a general positive correlation between temperature and the outbreak of cyanobacteria blooms (r = 0.513). Over the past two decades, the monthly average temperature has peaked in June and July, followed by gradual cooling in the subsequent months, and this pattern is positively correlated with the algae blooms in Dianchi Lake (see Fig. 38.5). As the water temperature rises, the growth rate of cyanobacteria increases. At normal temperatures, beneficial single-celled algae grow no more slowly than cyanobacteria, but when temperatures rise, cyanobacteria grow faster than the other algae. Because their growth is otherwise held back by other algae, cyanobacteria are unlikely to erupt on a large scale at normal temperatures, and only in the hot season (June and July) do they grow rapidly. Therefore, temperature is one of the main factors in cyanobacteria blooms.
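The correlation coefficients quoted in this section (r = 0.513 for temperature and, below, r = −0.516 for sunshine hours) were computed with SPSS; the same statistic can be reproduced in a few lines, as sketched below with placeholder monthly series that merely stand in for the real Dianchi Lake data.

```python
import numpy as np


def pearson_r(x, y):
    """Pearson correlation coefficient between two equally long series."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return float(np.corrcoef(x, y)[0, 1])


# Placeholder monthly series (not the real Dianchi Lake data).
monthly_mean_temp = [9, 11, 14, 17, 19, 20, 20, 19, 17, 15, 11, 9]
monthly_bloom_km2 = [20, 25, 40, 90, 130, 260, 280, 270, 250, 120, 60, 30]
print(round(pearson_r(monthly_mean_temp, monthly_bloom_km2), 3))
```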


Fig. 38.5 Line chart of the relationship between average temperature and water bloom area from 1999 to 2018 (series: average temperature, bloom area)

38.4.2.2 Effect of Sunshine Duration on the Occurrence and Distribution of Cyanobacteria Bloom

According to Pearson correlation analysis of the monthly sunshine hours and bloom area over 20 years with SPSS software, sunshine hours and bloom area show a general negative correlation (r = −0.516). When the outbreak of cyanobacteria blooms in Dianchi Lake was relatively severe, the sunshine duration was generally low (see Fig. 38.6). This seems contrary to the common understanding that sunshine is conducive to bloom outbreaks. The reason is that Yunnan, where Dianchi Lake is located, lies on a high plateau with strong ultraviolet radiation, and excessive ultraviolet radiation inhibits the growth of cyanobacteria.

Fig. 38.6 Line chart of the relationship between sunshine hours and water bloom area from 1999 to 2018 (series: sunshine hours, bloom area)


Fig. 38.7 Line graph of precipitation and water bloom area from 1999 to 2018 (series: precipitation, bloom area)

38.4.2.3 Mechanism Analysis of the Effect of Precipitation on the Occurrence and Distribution of Cyanobacteria Bloom

Theoretically, the impact of precipitation on the outbreak of cyanobacteria blooms is mainly a dilution effect of rainfall: as precipitation increases, the detected cyanobacteria area should decrease, that is, precipitation and cyanobacteria outbreaks should be negatively correlated. However, owing to the rapid industrialization and urban construction around Dianchi Lake in recent years, a large amount of waste and wastewater from production and daily life flows into the lake. As precipitation increases, the inflow of wastewater and waste into Dianchi Lake accelerates, aggravating the eutrophication of the water body and promoting large-scale outbreaks of cyanobacteria. In addition, Dianchi Lake lies in a plateau monsoon climate zone: in June and July, both temperature and precipitation reach their annual maxima, so the temperature is also suitable for the growth of cyanobacteria in the lake area. Therefore, as shown in Fig. 38.7, in June and July of each year precipitation and bloom area are mostly positively correlated, which is the combined result of water eutrophication and temperatures suitable for cyanobacteria growth.

38.4.2.4 Mechanism Analysis of the Effect of Wind on the Occurrence and Distribution of Cyanobacteria Bloom

Generally speaking, in the short term, if the wind blows the bloom away from the lakeshore, it will facilitate the downwind drift of cyanobacteria bloom and cause the expansion of the bloom area. On the contrary, if the wind blows the bloom toward


Normal water area Risk level I Risk level II Risk level III

Fig. 38.8 Water bloom image on January 11, 2000 and April 6, 2008

the lakeshore, it will cause the bloom to gather, the bloom area will be reduced, but the bloom risk level will be increased. For example, on January 11, 2000, the maximum wind speed was 9 m/s and the wind direction was southwest, which made the water bloom on the west coast of Dianchi Lake erode toward the Lake center. On April 6, 2008, the maximum wind speed was 8.6 m/s, and the wind direction was northwest, which made the bloom on the east coast of Dianchi Lake tend to gather, and the bloom risk level increased (see Fig. 38.8).

38.5 Discussion

The key to accurately identifying and extracting cyanobacteria bloom information is to distinguish bloom areas from normal water areas. The FAI method exploits the high reflectance of cyanobacteria in the near-infrared band to extract them. Compared with other methods, it is less affected by atmospheric scattering, solar altitude angle, and aerosols, and is less sensitive to environmental conditions, so it can be used to establish long time-series remote sensing monitoring of blooms. In this paper, the FAI method and Landsat remote sensing data were used together to extract cyanobacteria bloom information, combining the advantages of both to make the monitoring results more comprehensive, long-term, and reliable, and providing data support for analyzing the temporal and spatial patterns of cyanobacteria blooms in Dianchi Lake. Usually, a cyanobacteria bloom occurs first in the bays, the coastal waters of Caohai Lake, and the nearshore areas of the outer sea, and then diffuses to the central waters of


Dianchi Lake. Caohai Lake is close to human production and living areas, so pollutants produced by human activities have damaged its water environment; the special geography of the bay areas, with their curved lakeshores, slows the water flow, so pollutants flowing into the bays do not diffuse easily; and the coastal areas and river mouths are the main routes by which pollutants enter the lake. The resulting increase in pollutant concentration pushes the water body into a eutrophic state, and all three of these factors contribute to cyanobacteria blooms. Limited by the temporal resolution of the remote sensing satellites and by cloud cover in the images of the study area, blooms cannot be monitored accurately in periods when images are missing or obscured by cloud, which may cause bloom information to be missed and the patterns and trends of bloom outbreaks to be represented incompletely. In addition, cyanobacteria blooms in Dianchi Lake occur mainly in the summer and autumn of the second half of the year, when Yunnan has many cloudy and rainy days, which greatly affects the monitoring. It can also be seen from the time distribution of the acquired images (Table 38.1) that over the past 20 years few usable images of the study area were available from June to September, so the monitoring results may not accurately reflect the occurrence of blooms in summer and autumn. Nevertheless, the FAI results of the eight Landsat images acquired from June to September over the past 20 years also show that summer and autumn are indeed the bloom seasons in Dianchi Lake, and blooms in the central waters of the lake occur mainly in late summer and autumn. Dianchi Lake has a subtropical plateau monsoon climate: spring is warm and dry with little rain and large diurnal temperature variation, and the monthly average temperature is mostly below 20 °C; summer is not hot, with an average temperature of about 22 °C, and rainfall is concentrated, accounting for more than 60% of the annual total; in autumn the air is cool, rainfall decreases, and the frost period begins. Rainfall increases surface runoff, carrying land-based pollutants into the lake area, increasing the accumulation of nitrogen, phosphorus, and other nutrients, and inducing the rapid reproduction of cyanobacteria and the formation of blooms, especially in the lakeshore areas. Through the analysis of remote sensing monitoring and meteorological data, the relationship between the distribution of cyanobacteria blooms in the Dianchi Lake area and temperature, sunshine duration, precipitation, and wind speed is obtained, which is consistent with the conclusions drawn from the remote sensing images, making the conclusions more convincing.

38.6 Conclusion

According to the monitoring results of cyanobacteria blooms obtained from Landsat images from 1999 to 2018, in the first half of each year cyanobacteria bloom appears in Caohai Lake and offshore coastal areas on a small scale. In the second half of


each year, in summer and autumn, the environment was suitable for the growth of cyanobacteria, and cyanobacteria broke out on a large scale, invading from the coast to the center of the lake, and the risk of bloom increased in most of Caohai Lake and the coastal areas of the outer sea. This conclusion is supported by meteorological data. The relevant departments can take advantage of the qualitative relationship between the outbreak of cyanobacteria bloom and meteorological factors to effectively prevent and control the outbreak of cyanobacteria blooms in Dianchi Lake. Based on the influence of natural factors on cyanobacteria bloom, some effective bloom prevention and control measures can be developed artificially, and the reference information can be provided for further exploring the relationship between Lake water quality and meteorological environment. Acknowledgements This paper was supported by the National Natural Science Foundation of China (Project No.: 31860181). At the same time, I would like to express my gratitude to the reviewers and experts who provide opinions and suggestions for this article.

References 1. Ma, R.H., Kong, F.X., Duan, H.T., Zhang, S.X., Hong, W.J., Hao, J.Y.: Understanding the temporal and spatial distribution of cyanobacteria bloom in Taihu Lake based on satellite remote sensing. Lake Sci. 20(6), 687–694 (2008) 2. Liu, J.P., Zhang, Y.C., Qian, X., Qian, Y.: Research on remote sensing monitoring of cyanobacteria bloom in Taihu lake. Environ. Pollut. Cont. 31(08), 79–83 (2009) 3. Jin, Y., Zhang, Y., Jiang, S.: Research on the application of EOS/MODIS data in the extraction of temporal and spatial distribution of cyanobacteria bloom in Taihu Lake. Environ. Sci. Technol. 22(S2), 9–11+14 (2009) 4. Li, X.W., Niu, Z.C., Jiang, S.: Landsat5 TMremote sensing image Study on the characteristics of reflectance spectra of cyanobacteria bloom in Upper Taihu Lake. Environ. Monit. Manag. Technol. 22(06), 25–31 (2010) 5. Liu, J.T., Yang, Y.S., Gao, J.F., Jiang, J.H.: Classification of cyanobacteria bloom in Lake Taihu and its temporal and spatial changes. Resour. Environ. Yangtze Basin 20(2), 156–160 (2011) 6. Xu, Y.F., Li, Y.M., Wang, Q., Lv, H., Liu, Z.H., Xu, X., Tan, J., Guo, Y.L., Wu, C.Q.: Evaluation of the eutrophication status of the Three Lakes and One Reservoir based on the multi-spectral image data of the Environment 1 satellite. Acta Sci. Circum. 31(1), 81–93 (2011) 7. Liu, X.Y., Ni, F., Zhou, Y.H.: Analysis of the temporal and spatial law of cyanobacteria bloom in Taihu Lake based on MODIS. J. Nanjing Normal Univ. (Natural Science Edition) 35(1), 89–94 (2012) 8. Wang, J.H., He Lv, Q.S., Yang, C., Dao, G.H., Du, J.S., Han, Y.P., Wu, G.X., Wu, Q.Y., Hu, H.Y.: Comparison of water blooms in Taihu, Chaohu, and Dianchi Lake with related meteorological and water quality factors and their responses (1981–2015). Lake Sci. 30(4), 897–906 (2018) 9. Sha, L.W., Liu, G., Wen, Z.D., Song, K.S.: Research on the temporal and spatial changes of cyanobacteria bloom in Taihu Lake based on MODIS data. Wetland Sci. 16(03), 432–437 (2018) 10. Li, X. W., Shi, H., Zhang, Y., Niu, Z. C., Wang, T. T., Ding, M., Cai, K.: Remote sensing monitoring of cyanobacteria in Taihu Lake based on the European Space Agency “Sentinel-2A” satellite. China Environ. Monit. 34(4), 169–176 (2018)


11. Li, X.W., Zhang, Y., Shi, H., Jiang, S., Wang, T.T., Ding, M., Cai, K.: Application of the maximum chlorophyll index based on sentinel-3A satellite OLCI data in the monitoring of cyanobacteria bloom in Taihu Lake. China Environ. Monit. 35(03), 146–155 (2019) 12. Hang, X., Xu, M., Xie, X.P., Li, Y.C.: Assessment of the impact of meteorological conditions on cyanobacteria bloom in Taihu Lake under eutrophication. Sci. Technol. Eng. 19(07), 294–301 (2019) 13. Zhang, J., Chen, L.Q., Chen, X.L., Tian, L.Q.: HJ-1B and Landsat satellite cyanobacteria bloom monitoring capability evaluation-taking Erhai as an example. J. Water Resour. Water Eng. 27(4), 38–43 (2016) 14. Yu, D., Li, F.R., Wang, J.T.: Analysis of the evolution of the water environment of Dianchi Lake and the development of algae monitoring technology. Environ. Sci. Guide 32(5), 53–57 (2013) 15. Lu, W. K., Xie, G. Q., Yu, L. X., Yang, S. P.: MODIS remote sensing monitoring the distribution of cyanobacteria blooms in Dianchi Lake. Meteorol. Sci. Technol. 37(5), 618–620+646 (2009) 16. He, Y.L., Xiong, Q.L., Luo, X., Li, T.Y., Yu, L.: Research on spatiotemporal changes based on the water bloom characteristics of NDVI Dianchi Lake. Chin. J. Ecoenviron. 28(3), 555–563 (2019) 17. Zhao, D.: Remote sensing monitoring of typical inland lake reservoir cyanobacteria bloom. Xi’an University of Science and Technology (2018) 18. Sheng, Hu., Guo, H. C., Liu, H., Yang, Y. H.: Inversion of cyanobacteria bloom in the offshore waters of Dianchi Lake and discussion on its regularity. Acta Ecologica Sinica 32(1), 0056– 0063 (2012) 19. Wang, J.H., Yang, C., He Lv, Q.S., Dao, G.H., Du, J.S., Han, Y.P., Wu, G.X., Wu, Q.Y., Hu, H.Y.: Meteorological factors and water quality changes of Plateau Lake Dianchi in China (1990– 2015) and their joint influences on cyanobacterial blooms. Sci. Total Environ. 665, 406–418 (2019) 20. Fang, S.Z., Yan, X., Peng, B., Wu, Y., Qin, Y.J.: Cyanobacteria blooms in Dianchi Lake monitoring and early warning spatial information system design and implementation. Inf. Technol. Netw. Secur. 38(4), 97–101 (2019) 21. Cao, Q.L., Huang, Y.L., Chen, M.X.: Simulation experiment research and discussion on the generation and disappearance of cyanobacteria bloom under hydrodynamic conditions. Disaster Prevent. Eng. 1, 8–10 (2008) 22. Zhang, J., Chen, L.Q., Chen, X.L.: Erhai cyanobacteria bloom remote sensing monitoring based on FAI method. Lake Sci. 28(4), 718–725 (2016)

Chapter 39

Situation Analysis of Land Combat Multi-platform Direct Aim Confrontation with Non-combat Loss

Guosheng Wang, Bing Liang, and Kehu Xu

Abstract The time-domain mathematical model of a land combat multi-platform direct aim confrontation system with non-combat loss is established by introducing situation variables. Through the Laplace transform, the analytical solution of the situation vector of multi-platform direct aim confrontation with non-combat loss is given, which shows that the situation vector at any time can be obtained by transforming the initial situation vector through the situation transfer matrix. On this basis, the number of remaining platforms and the confrontation time when one side wins the land combat multi-platform direct aim confrontation with non-combat loss are given. Finally, the situation solving method and the confrontation outcome are verified by comparing two cases: one in which both sides suffer non-combat loss and one in which neither side does. The analysis results show the effectiveness of the proposed method.

39.1 Introduction

With the continuous emergence and application of high and new technologies, modern land warfare has developed into three-dimensional combined operations, characterized by great killing and destructive power, high intensity and tension, rapidly changing situations, rapid transformation of combat styles, complex command coordination, and arduous service support [1, 2]. The incapacitation of a land combat platform caused by non-combat reasons is called non-combat loss, which is usually an irreparable loss of or damage to the platform.

G. Wang (B) · K. Xu
Department of Weapon and Control, Army Academy of Armored Force, Beijing 100072, China
e-mail: [email protected]

B. Liang
School of Electronic and Information Engineering, North China Institute of Science and Technology, Langfang 100190, China

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022
K. Nakamatsu et al. (eds.), Advanced Intelligent Technologies for Industry, Smart Innovation, Systems and Technologies 285, https://doi.org/10.1007/978-981-16-9735-7_39


In the process of confrontation between the two sides, non-combat loss also has a negative impact on combat effectiveness, so it should be taken into account [3]. It is therefore necessary to analyze and design land combat multi-platform confrontation both qualitatively and quantitatively; the conclusions can provide a more scientific basis for commanders to grasp the scale of command decision-making and to predict the outcome of combat command decisions [4–7]. Based on an analysis of the combat mechanism of the land combat multi-platform direct aim confrontation system with non-combat loss, the time-domain situation space model of the confrontation system is established by introducing situation variables. By introducing the situation transfer matrix through the Laplace transform, the time-domain analytical solution of the situation vector of land combat multi-platform direct aim confrontation with non-combat loss is given, and the transfer relationship between the situation vector at any time and the initial situation vector is revealed. On this basis, taking one side's situation variable reaching zero as the criterion for the end of the confrontation, the number of remaining platforms and the confrontation time of the winner at the end of the confrontation are given. Finally, examples of land combat multi-platform direct aim confrontation with and without non-combat loss are compared and analyzed to verify the effectiveness of the proposed solution method and conclusions.

39.2 Time-Domain Situation Model of Land Combat Multi-platform Direct Aim Confrontation with Non-combat Loss

Figure 39.1 shows the schematic diagram of land combat multi-platform direct aim confrontation with non-combat loss between the two sides X and Y. In order to destroy the other side's land combat platforms and achieve combat victory, X and Y implement their tactical layouts according to the intelligence about the other side obtained from battlefield reconnaissance, and the confrontation is won when the number of platforms of one side reaches zero. However, in the process of confrontation, non-combat losses are inevitable, that is, platform failures caused by non-combat reasons, including irreparable loss and damage on the battlefield. Assume that the direct aim confrontation coefficient of side Y against side X is β1, the non-combat loss coefficient of side X is α1, the direct aim confrontation coefficient of side X against side Y is α2, and the non-combat loss coefficient of side Y is β2, where β1, α1, α2, and β2 are normalized constants between 0 and 1. Assume that the confrontation system contains two situation variables x(t) and y(t), which represent the number of instantaneous remaining platforms of side X and side Y at time t, respectively; x(0) and y(0) are the initial numbers of platforms of side X and side Y, respectively; f[x(t), y(t), α1, β1, t] and g[y(t), x(t), α2, β2, t] are functions


Fig. 39.1 Schematic diagram of land combat multi-platform direct aim confrontation with non-combat loss

associated with the variables x(t), y(t), α1, β1, α2, β2, and t. Thus the time-domain mathematical model of land combat multi-platform direct aim confrontation with non-combat loss is

$$\begin{cases} \dot{x}(t) = f[x(t), y(t), \alpha_1, \beta_1, t] \\ \dot{y}(t) = g[y(t), x(t), \alpha_2, \beta_2, t] \end{cases} \tag{39.1}$$

Considering only the linear relationship between the confrontation situation variables, we have

$$\begin{cases} f[x(t), y(t), \alpha_1, \beta_1, t] = -\beta_1 y(t) - \alpha_1 x(t) \\ g[y(t), x(t), \alpha_2, \beta_2, t] = -\alpha_2 x(t) - \beta_2 y(t) \end{cases} \tag{39.2}$$

From (39.1), we obtain

$$\begin{cases} \dot{x}(t) = -\beta_1 y(t) - \alpha_1 x(t) \\ \dot{y}(t) = -\alpha_2 x(t) - \beta_2 y(t) \end{cases} \tag{39.3}$$

From (39.3), we can see that the terms −β1 y(t) and −α2 x(t) reduce the number of the other side's platforms, while −α1 x(t) and −β2 y(t) reduce one's own platforms. Set the situation vector of land combat multi-platform direct aim confrontation as

$$X(t) = \begin{bmatrix} x(t) \\ y(t) \end{bmatrix} \tag{39.4}$$

Then Eq. (39.3) can be written in the following matrix form


$$\dot{X}(t) = AX(t) \tag{39.5}$$

where

$$A = \begin{bmatrix} -\alpha_1 & -\beta_1 \\ -\alpha_2 & -\beta_2 \end{bmatrix} \tag{39.6}$$

The characteristic equation of matrix A is

$$|sI - A| = s^2 + (\alpha_1 + \beta_2)s + \alpha_1\beta_2 - \alpha_2\beta_1 = 0 \tag{39.7}$$

The eigenvalues of matrix A are

$$s_{1,2} = \frac{-(\alpha_1 + \beta_2) \pm \sqrt{\Delta}}{2} \tag{39.8}$$

where

$$\Delta = (\alpha_1 + \beta_2)^2 - 4(\alpha_1\beta_2 - \alpha_2\beta_1) \tag{39.9}$$

Because the number of remaining platforms on both sides decreases monotonically with time in the process of land combat multi-platform direct aim confrontation with non-combat loss, and the final outcome of the confrontation is the defeat of one side, the confrontation process is a system that is unstable for the defeated side and stable for the winner; therefore, only the case with one positive and one negative real eigenvalue is considered. From (39.8), it is not difficult to find that the two eigenvalues of characteristic equation (39.7) consist of one negative and one positive eigenvalue when

$$\sqrt{\Delta} > \alpha_1 + \beta_2 \tag{39.10}$$

Thus there holds

$$\alpha_1\beta_2 < \alpha_2\beta_1 \tag{39.11}$$

39.3 Situation of Land Combat Multi-platform Direct Aim Confrontation with Non-combat Loss

Theorem 1. The situation vector X(t) at any time in the land combat multi-platform direct aim confrontation system with non-combat loss represented by Eq. (39.5) and satisfying condition (39.11) can be obtained from the initial confrontation situation vector X(0) at t = 0 through the situation transfer matrix Φ(t), that is,

$$X(t) = \Phi(t)X(0) \tag{39.12}$$

where

$$\Phi(t) = \begin{bmatrix} \dfrac{\alpha_1 + \sqrt{\Delta}}{2\sqrt{\Delta}}e^{s_1 t} - \dfrac{\alpha_1 - \sqrt{\Delta}}{2\sqrt{\Delta}}e^{s_2 t} & \dfrac{\beta_1}{2\sqrt{\Delta}}e^{s_1 t} - \dfrac{\beta_1}{2\sqrt{\Delta}}e^{s_2 t} \\[8pt] \dfrac{\alpha_2}{2\sqrt{\Delta}}e^{s_1 t} - \dfrac{\alpha_2}{2\sqrt{\Delta}}e^{s_2 t} & \dfrac{\beta_2 + \sqrt{\Delta}}{2\sqrt{\Delta}}e^{s_1 t} - \dfrac{\beta_2 - \sqrt{\Delta}}{2\sqrt{\Delta}}e^{s_2 t} \end{bmatrix} \tag{39.13}$$

with Δ = (α1 + β2)² − 4(α1β2 − α2β1), and Φ(t) is called the situation transfer matrix.

Proof. Applying the Laplace transform to both sides of (39.5) gives

$$sX(s) - X(0) = AX(s),$$

so that

$$X(t) = L^{-1}\{(sI - A)^{-1}\}X(0),$$

where $L^{-1}\{(sI - A)^{-1}\}$ is exactly the matrix on the right-hand side of (39.13). Setting Φ(t) equal to this matrix yields X(t) = Φ(t)X(0). This completes the proof.


39.4 Situation Simulation Analysis of Land Combat Multi-platform Direct Aim Confrontation with Non-combat Loss

In a land combat multi-platform direct aim confrontation with non-combat loss, the initial number of platforms of side X is x(0) = 60 and that of side Y is y(0) = 100; the confrontation and non-combat loss coefficients are β1 = 0.5, α2 = 0.6, α1 = 0.2, β2 = 0.1. Because α1β2 = 0.02 < α2β1 = 0.3, condition (39.11) is satisfied. The matrix A in the mathematical model (39.5) is

$$A = \begin{bmatrix} -0.2 & -0.5 \\ -0.6 & -0.1 \end{bmatrix}$$

Its eigenvalues are s1 = −0.7 and s2 = 0.4. Thus

$$\Delta = (\alpha_1 + \beta_2)^2 - 4(\alpha_1\beta_2 - \alpha_2\beta_1) = 1.21$$

From (39.13), the situation transfer matrix is obtained as

$$\Phi(t) = \begin{bmatrix} 0.4091e^{-0.7t} + 0.5909e^{0.4t} & 0.2273e^{-0.7t} - 0.2273e^{0.4t} \\ 0.2727e^{-0.7t} - 0.2727e^{0.4t} & 0.5455e^{-0.7t} - 0.5455e^{0.4t} \end{bmatrix}$$

From (39.12), the situation vector is

$$X(t) = \begin{bmatrix} 0.4091e^{-0.7t} + 0.5909e^{0.4t} & 0.2273e^{-0.7t} - 0.2273e^{0.4t} \\ 0.2727e^{-0.7t} - 0.2727e^{0.4t} & 0.5455e^{-0.7t} - 0.5455e^{0.4t} \end{bmatrix}\begin{bmatrix} 60 \\ 100 \end{bmatrix}$$

The simulation results are shown in Fig. 39.2, which is consistent with the above theoretical calculation results.


Fig. 39.2 Change of instantaneous number of remaining platforms on both sides of land combat multi-platform direct aim confrontation with non-combat loss
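To make the example above easy to reproduce, the following is a minimal Python sketch (not the authors' code) that simulates the situation model (39.5) with the example's parameters by evaluating X(t) = e^{At}X(0) with SciPy's matrix exponential; the end-of-confrontation criterion (one side's situation variable reaching zero) is checked on a time grid matching the range of Fig. 39.2.

```python
import numpy as np
from scipy.linalg import expm

# Parameters of the example in Sect. 39.4
alpha1, beta1 = 0.2, 0.5   # side X: non-combat loss coefficient, loss inflicted by Y
alpha2, beta2 = 0.6, 0.1   # side Y: loss inflicted by X, non-combat loss coefficient
A = np.array([[-alpha1, -beta1],
              [-alpha2, -beta2]])
X0 = np.array([60.0, 100.0])          # initial platforms of X and Y

for t in np.linspace(0.0, 1.6, 161):  # time in days, as in Fig. 39.2
    x, y = expm(A * t) @ X0           # situation vector X(t) = e^{At} X(0)
    if x <= 0.0 or y <= 0.0:          # confrontation ends when one side reaches zero
        print(f"confrontation ends near t = {t:.2f} d: x = {x:.1f}, y = {y:.1f}")
        break
```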


39.5 Conclusion

In this paper, the time-domain situation model of the land combat multi-platform direct aim confrontation system with non-combat loss is established, and its explicit analytical solution is given. Based on this solution, the final number of remaining platforms of the winner at the end of the confrontation and the end time of the confrontation are given. Finally, simulation analysis is carried out through examples of land combat multi-platform direct aim confrontation with non-combat loss.

References
1. Sha, J.C.: Mathematical Tactics. Science Press, Beijing (2003)
2. Sha, J.C., Ma, C.L., Chen, C.: War Design Engineering. Science Press, Beijing (2009)
3. Zhu, F., Hu, X.F.: Overview and research prospect of battlefield situation assessment based on deep learning. Military Oper. Syst. Eng. 30(3), 22–27 (2016)
4. Xu, K.H., Zhang, M.S.: Key technologies of firepower coordination of synthetic equipment system. Fire Command Contr. 45(6), 1–7 (2020)
5. Xu, K.H., Wang, G.S.: Solution of fire coordination scheme of equipment system based on fuzzy clustering-auction mechanism. CCC 2020, Shenyang, pp. 2135–2139 (2020)
6. Wang, G.S., Qi, Z.F.: AHP effectiveness evaluation of electronic warfare command and control system under complex electromagnetic environment. Adv. Mater. Res. 989–994, 3212–3215 (2014)
7. Chen, X.Y.: Research on Some Command Decisions and Countermeasures Based on Lanchester Equation. Northeast University Press, Shenyang (2011)

Chapter 40

Automotive Steering Wheel Angle Sensor Based on S32K144 and KMZ60

Zhiqiang Zhou, Yu Gao, Long Qian, Tianyu Li, and Wenhao Peng

Abstract A rotary magnetoresistive sensor [1] is used as the steering wheel angle measurement tool. The sensor adopts the high-precision, high-stability KMZ60 chip combined with the automotive-grade S32K144 processor chip to ensure accuracy and reliability. It is also equipped with a CAN transceiver and CAN bus to ensure interaction between the sensor and the other units of the car. The sensor can perform high-precision steering wheel angle measurement. It adopts a three-gear mechanical structure that takes both measurement accuracy and measurement range into account, and the gear structure is matched in the software design to ensure zero-point calibration and angle calculation. Through a reasonable hardware arrangement and angle-algorithm programming, an automotive angle sensor with high precision, strong adaptability, and a suitable price is obtained.

40.1 Introduction

Commonly used angle sensors include resistive, Hall, photoelectric, and magnetoresistive types. At present, there is no absolutely perfect angle sensor on the market. As a more reliable angle-measurement method, measurement based on the principle of anisotropic magnetoresistance (AMR) has many advantages and is well suited to automotive angle measurement. The steering angle sensor is part of the car's electrical circuit [2], and the torque sensor is similar to it; both are automotive parts with extremely high reliability requirements, so NXP's professional automotive-grade S32K144 microprocessor chip is used.

Z. Zhou · Y. Gao (B) · L. Qian · T. Li · W. Peng
Shanghai Dianji University, No. 300 Shuihua Road, Pudong New Area, China
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022
K. Nakamatsu et al. (eds.), Advanced Intelligent Technologies for Industry, Smart Innovation, Systems and Technologies 285, https://doi.org/10.1007/978-981-16-9735-7_40


In addition, a power supply, a filter amplifier, and other peripheral circuits are built to support the basic operation of the system. For the mechanical structure, a three-gear arrangement that extends the measurement range is adopted, and a suitable transmission ratio is designed through calculation to ensure that the measurement range exceeds 1080° while still maintaining high measurement accuracy [3]. The software design mainly matches the gear ratio: the difference between the rotation angles of the two pinions is calculated, and the arctangent is evaluated to obtain the true rotation angle of the steering wheel. An interrupt routine guarantees that the zero mark is set when the sensor is installed.

40.2 Analysis of the Principle of Magnetoresistance and Mechanical Structure

The relationship between the resistance of an anisotropic magnetoresistive material and the external magnetic field is given by the following formula:

$$R = R_v \sin^2\alpha + R_p \cos^2\alpha \tag{40.1}$$

In the formula, R_v represents the resistance when the external magnetic field is perpendicular to the direction of the current, R_p represents the resistance when the external magnetic field is parallel to the direction of the current, and R is the total resistance of the magnetoresistive material. It can be seen from the above formula that, by measuring the resistance of the alloy resistor, the direction of the magnetic field in the material can be obtained, thereby giving the direction of the external magnetic field and hence the rotation angle of the external permanent magnet (the steering wheel rotation angle). The principle of a magnetoresistive sensor is simple, with few intermediate steps, which guarantees the reliability of the system to a certain extent. The reliability is manifested in two aspects: (1) there is no direct contact between the permanent magnet and the sensor chip, so there are no wear or noise problems; (2) the response speed is fast, and the rotation-angle signal is transmitted through the magnetic field. This article uses NXP's KMZ60 sensor chip, which contains two bridges as shown in Fig. 40.1. Using two adjacent bridge structures solves the problem of temperature compensation and improves the anti-interference ability of the sensor. When the magnetic field is at an angle α to R1, according to formula (40.1) and a simple derivation, the output voltage of the bridge on the right is V_cos:

$$V_{\cos} = (+V_{o2}) - (-V_{o2}) = V_{cc2}\,\frac{R_v - R_p}{R_v + R_p}\cos 2\alpha \tag{40.2}$$


Fig. 40.1 Schematic diagram of electric bridge

Similarly, the output voltage of the left bridge is V_sin:

$$V_{\sin} = (+V_{o1}) - (-V_{o1}) = V_{cc1}\,\frac{R_v - R_p}{R_v + R_p}\sin 2\alpha \tag{40.3}$$

R_v represents the resistance when the external magnetic field is perpendicular to the direction of the current, and R_p represents the resistance when the external magnetic field is parallel to the direction of the current. Then

$$\alpha = \arctan\frac{V_{\sin}}{V_{\cos}} \tag{40.4}$$

After the derivation of the expressions of Vcos and Vsin , the relationship between α and the measured voltage is obtained. By measuring the output voltage of the two bridges, and then calculating by a certain algorithm, the angle can be perfectly measured.
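As a concrete illustration of Eq. (40.4), the following is a minimal Python sketch (not the authors' firmware) of the angle-recovery step. It assumes the two filtered and digitized bridge voltages are already available, and it uses atan2 so the whole electrical period is handled, halving the result because the bridge outputs vary with 2α (which is why the unambiguous range of a single sensor is 180°).

```python
import math

def bridge_angle(v_sin: float, v_cos: float) -> float:
    """Recover the magnet angle (degrees) from the two KMZ60 bridge voltages.

    The bridges deliver signals proportional to sin(2*alpha) and cos(2*alpha),
    so the arctangent gives the electrical angle 2*alpha; dividing by two maps
    it into the 180-degree unambiguous range of a single AMR sensor.
    """
    electrical = math.degrees(math.atan2(v_sin, v_cos))  # in (-180, 180]
    return 0.5 * electrical                              # in (-90, 90]

# Example: a magnet at 30 degrees produces v_sin ~ sin(60 deg), v_cos ~ cos(60 deg)
print(round(bridge_angle(math.sin(math.radians(60)), math.cos(math.radians(60))), 1))  # 30.0
```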

40.3 Mechanical Structure Design

According to the characteristics of the magnetoresistance principle, the magnetoresistive material can accurately reflect the direction of the external magnetic field only within 180°. For a multi-turn measurement [4], where the required range is far beyond the measurement range of the magnetoresistive bridge, a single sensor cannot uniquely indicate the measured rotation angle. The actual steering wheel rotation range is more than ±540° (1080°), so it is appropriate to use a gear transmission ratio to expand the measurement range. The following issues need attention when setting up the gear structure. If the gear rigidly connected to the steering wheel (here called the steering wheel gear) is given few teeth and the gear meshing with it (here called the sensor gear) is given many teeth, then when the steering wheel gear rotates multiple times the sensor gear rotates only within 180°, which ensures that the sensor can directly measure the steering wheel angle. However, the error of this solution is relatively large, because the accuracy is directly related to the transmission ratio, and manufacturing a gear with that many teeth makes the sensor too large, so this solution is not feasible. This article therefore adopts a transmission structure with one large and two small gears: the steering wheel shaft is rigidly connected to the large gear, the difference between the rotation angles of the two small gears is obtained by measurement, and the rotation angle of the large gear (the real steering wheel angle) is then calculated. This both expands the measurement range and maintains high measurement accuracy. The gear design is shown in Fig. 40.2, with Gear 1: 56 teeth, Gear 2: 30 teeth, and Gear 3: 28 teeth.

Fig. 40.2 Gear transmission diagram

Suppose the rotation angle of gear 1, that is, the real rotation angle of the steering wheel, is α. Then the rotation angles of gear 2 and gear 3 are:

$$\begin{cases} \beta_1 = \alpha \times \dfrac{56}{30} \\[6pt] \beta_2 = \alpha \times \dfrac{56}{28} \end{cases} \tag{40.5}$$

Among them, 56/30 is the transmission ratio, which can be regarded as the magnification of gear 2; similarly, 56/28 can be regarded as the magnification of gear 3. When the steering wheel angle is α, the angle difference between gear 2 and gear 3 is:

$$\Delta\beta = \beta_2 - \beta_1 = \alpha \times \frac{56}{28} - \alpha \times \frac{56}{30} \tag{40.6}$$


Putting the steering wheel angle into real terms: when α = ±540°, Δβ = ±72°; that is, whenever the steering wheel rotates 1°, gear 2 and gear 3 rotate 56/30° and 56/28°, respectively, and the difference between them is 2/15°. It can be seen that when the steering wheel rotation angle changes greatly, the rotation angle difference of the two pinions changes very little; in other words, the rotation angle of each pinion repeats with a small period, but their angle difference has a large period. It can also be concluded that the structure expands the original 180° range by a factor of 15/2 to 1350°, so for any steering wheel angle there is always a unique angle difference corresponding to it.
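The mapping from the two measured pinion angles back to the steering wheel angle can be sketched as follows. This is a minimal Python illustration of the vernier-style principle described above (not the production algorithm), assuming each KMZ60 channel reports its pinion angle only within a 180° window.

```python
def steering_wheel_angle(beta1_meas: float, beta2_meas: float) -> float:
    """Recover the steering wheel angle (degrees) from the two pinion readings.

    beta1_meas, beta2_meas: angles of gear 2 and gear 3 as measured by the two
    AMR channels, each known only modulo 180 degrees.
    """
    diff = (beta2_meas - beta1_meas) % 180.0   # angle difference, folded into [0, 180)
    if diff >= 90.0:                           # re-centre the difference into [-90, 90)
        diff -= 180.0
    # Each degree of steering wheel rotation changes the difference by 2/15 degree,
    # so scaling by 15/2 = 7.5 recovers the wheel angle (unique within about +/-675 deg).
    return diff * 7.5

# Example: a wheel angle of 100 deg gives pinion readings of about 6.67 and 20 deg.
print(round(steering_wheel_angle(6.666667, 20.0), 1))   # ~100.0
```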

40.4 Hardware Design

The hardware schematic diagram of the sensor is shown in Fig. 40.3. The hardware of the angle sensor mainly includes the following parts: a signal acquisition module, a microprocessor module, a power supply module, and a CAN module. The hardware workflow is as follows: the rotation of gear 1 drives the rotation of the ring magnets on the two sensors, and the rotation of each ring magnet makes the bridges of the KMZ60 chip produce sine and cosine induced voltages corresponding to the angle [5]. The induced signal is relatively small and needs to be filtered; it passes through the amplification and filtering module and then enters the automotive-grade S32K144 microprocessor. The microprocessor has an on-chip 12-bit A/D conversion module, which converts the filtered sine and cosine signals into digital signals, and the program then converts these signals into angle information. The angle information usually goes to two places: to obtain the current steering wheel angle immediately every time the car starts [6], the angle signal is saved in the EEPROM; meanwhile, the real-time signal enters the CAN transceiver module and is placed on the CAN bus for use by the car's ECU [7].

Fig. 40.3 Hardware flow chart


The electrical energy of the sensor comes from the car battery, but the battery's output is not stable enough: its voltage is easily affected by conditions such as temperature and state of charge. For example, when the car starts, the battery output voltage drops significantly, so the supply must be regulated before it can be used. Therefore, the c5005 linear regulated power supply module is used to solve the power quality problem, and the regulated output supplies the various components of the sensor.

40.5 Software Algorithm

The software design includes two parts. The first is zero mark calibration, and the second is angle calculation.

40.5.1 Zero Mark

The zero point of the steering wheel means that, other factors aside, at this angle the car goes straight ahead. If the steering wheel is deflected to the left or right from this angle, the car no longer moves straight, so accurate calibration of the zero point directly affects safety. At the same time, it should be considered that the car will almost never be at the zero position when the vehicle is switched off, so the angle relative to the zero point must be recorded when parking. To achieve zero calibration, the steering wheel is first turned accurately to the initial state [8]. After accurate installation, the interrupt button is pressed to interrupt the system, and the angle corresponding to the zero point is recorded in the EEPROM [9]. This angle is set to zero in the program, and every subsequent angle signal generated by the sensor is compared with it to determine the real-time true angle of the steering wheel.

40.5.2 Angle Calculation

Since most magnetoresistive sensors, including the KMZ60, have a limited measuring range, the measured angle is determined by the two bridges of the sensor through the angle expression α = arctan(V_sin/V_cos); it follows that the unambiguous measurement range is 0° to 180°. The program therefore needs to combine the two pinions to recover the real steering wheel angle. The filtered signal still consists of four channels of sine and cosine analog quantities; a conversion routine processes the four channels into digital signals with the on-chip A/D module.


After the digital signals have been obtained, the angle of each pinion is calculated by an arctangent operation, the angle difference is computed, and the real angle of the big gear is then calculated according to the transmission ratio. An external interrupt routine is set up to handle zero-point input. The program flow chart is shown in Fig. 40.4.

Fig. 40.4 Software flow chart


40.6 Conclusion

The sensor expands the original 180° measurement range by a factor of 7.5, achieving a rotation-angle measurement range of ±675°. The accuracy and reliability of the sensor are guaranteed by the non-contact measurement method. An interrupt routine ensures the zero mark, and the CAN transceiver can share the angle signal with the ECU systems of most cars. The sensor therefore has high reliability and strong practicability.

References
1. Chen, M.: Research on high-precision electronic compass design based on anisotropic magnetoresistance. University of Electronic Science and Technology of China (2019)
2. Wang, Y.H.: Research and development of electric power steering system controller based on V mode. Guangxi University (2017)
3. Li, C.Q., Tian, Y., Li, J., Zhang, B.: Design of multi-turn intelligent angle sensor based on magnetic sensor MLX90363. Microcontr. Embedded Syst. Appl. 18(5), 76–79 (2018)
4. Gong, X.: Performance Optimization of AMR Linear Magnetoresistive Sensor. University of Electronic Science and Technology of China (2019)
5. Chen, G.: Design and Research of Cross-Axis Universal Joint Nutation Reducer. Harbin Institute of Technology (2020)
6. Liang, D.D.: Development of EPS Torque Angle Sensor and EPS Controller. Harbin Institute of Technology (2017)
7. Fang, Y.Y.: Design of Multi-ECU Communication System Based on CAN Bus. Nanchang University (2020)
8. Zhou, C.: Design and Implementation of CAN-Based Vehicle-Mounted Auxiliary Driving Unit. University of Chinese Academy of Sciences (School of Artificial Intelligence, University of Chinese Academy of Sciences) (2020)
9. Niu, B.: Research on Non-Contact Steering Wheel Angle Detection Algorithm. Harbin Institute of Technology (2016)

Chapter 41

Research on Vehicle Routing Problem of Urban Emergency Material Delivery with Time Window

Yabin Wang, Jinguo Wang, and Shuai Wang

Abstract In the face of natural disasters and emergencies, routing vehicles for emergency material distribution is a major and difficult problem. Based on the actual situation of emergencies, this paper concentrates on the multi-objective vehicle routing problem with time window constraints and establishes an optimization model of the distribution vehicle routes under time window constraints, considering the importance of timely delivery of emergency supplies. On the premise of satisfying the time window and other constraints, an improved ant colony algorithm is adopted to obtain an optimized path with the shortest total distance as the goal, and the feasibility and effectiveness of the algorithm are verified through case analysis in the paper.

41.1 Introduction

The city is a symbol of human civilization. Once natural disasters and emergencies occur, they have a serious impact on social stability and economic development [1]. In such situations, time is life. A reasonable emergency material distribution system can provide emergency relief materials to disaster-stricken points in a timely manner [2]. How to deliver emergency supplies to disaster-stricken points safely, promptly, and accurately has therefore become an issue of increasing concern, and it is of great significance to study the vehicle routing problem in the distribution of emergency supplies.
The vehicle routing problem (VRP) was first proposed by the famous scholars Dantzig and Ramser [3] in 1959. In the past 20 years, researchers have designed a large number of intelligent algorithms for VRP. Wang et al. [4] used an ant colony algorithm to optimize the transportation routes for vehicle equipment distribution.

Y. Wang · J. Wang (B) · S. Wang
Department of Management Engineering, Shijiazhuang Campus, Army Engineering University of PLA, Shijiazhuang, China
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022
K. Nakamatsu et al. (eds.), Advanced Intelligent Technologies for Industry, Smart Innovation, Systems and Technologies 285, https://doi.org/10.1007/978-981-16-9735-7_41


Lv et al. [5] presented a new pheromone update method to improve the traditional ant colony algorithm. Li et al. [6] used the mutation operator of the genetic algorithm to optimize the parameter design of the ant colony algorithm and thereby optimize the distribution of emergency supplies. This paper studies the distribution of emergency materials under hard time window constraints at the disaster sites in the context of urban emergencies. According to the characteristics of the problem, a model that satisfies the time window constraints while minimizing the total delivery route length is constructed.

41.2 Model Building

41.2.1 Problem Description

Emergency material distribution has very strict time requirements, and the time at which disaster-stricken points are served must be subject to certain restrictions, that is, time window restrictions. It is of great research significance to deliver the emergency supplies needed by a disaster-stricken area to each disaster-stricken point within a reasonable time limit and to ensure that people receive emergency supplies in time [7].

41.2.2 Mathematical Model

Denote the emergency distribution center by v0 and assume that it has m vehicles, which are to distribute emergency supplies to n disaster-stricken points. The set of disaster points is V = {v_i | i = 1, 2, …, n} and the set of vehicles is K = {k = 1, 2, …, m}; the maximum loading capacity of a vehicle is Q. The demand of point i is d_i (i = 1, 2, …, n), and the delivery distance from point i to point j is c_{ij} (i, j = 0, 1, 2, …, n). The delivery time from point i to point j is t_{ij}; x_{ijk} (i, j = 1, 2, …, n, k = 1, 2, …, m) is a logical variable that equals 1 when vehicle k travels from point i to point j and 0 otherwise. In emergencies there is a very strict requirement on the distribution of emergency supplies, namely the time constraint: [a_i, b_i] is the time window for serving disaster-stricken point i, s_i denotes the earliest time at which point i can receive service, and e_i denotes the latest time at which point i can receive service. It is required to determine the number of vehicles needed so that every disaster-stricken point is served while the total delivery path is the shortest. From this, the following mathematical model is obtained:

$$\min L = \sum_{i=0}^{n}\sum_{j=0}^{n}\sum_{k=1}^{m} c_{ij}x_{ijk} \tag{41.1}$$

$$\sum_{k=1}^{m}\sum_{j=1}^{n} x_{0jk} \le m \tag{41.2}$$

$$\sum_{i=1}^{n} d_i y_{ik} \le Q, \quad k = 1, 2, \ldots, m \tag{41.3}$$

$$\sum_{i=1}^{n} \lambda_i y_{ik} \le \Lambda, \quad k = 1, 2, \ldots, m \tag{41.4}$$

$$\sum_{k=1}^{m}\sum_{i=1}^{n} x_{ijk} = 1 \tag{41.5}$$

$$\sum_{k=1}^{m}\sum_{j=1}^{n} x_{ijk} = 1 \tag{41.6}$$

$$\sum_{j=1}^{n} x_{0jk} \le 1, \quad k = 1, 2, \ldots, m \tag{41.7}$$

$$\sum_{i=1}^{n} x_{i0k} \le 1, \quad k = 1, 2, \ldots, m \tag{41.8}$$

$$t_i = [a_i, b_i] \tag{41.9}$$

$$a_i \le s_i \tag{41.10}$$

$$b_i \ge e_i \tag{41.11}$$

$$x_{ijk} = \begin{cases} 1, & \text{if vehicle } k \text{ travels from vertex } i \text{ to } j,\\ 0, & \text{otherwise,} \end{cases} \quad x_{ijk} \in \{0, 1\},\ i, j = 1, 2, \ldots, n,\ k = 1, 2, \ldots, m \tag{41.12}$$

$$y_{ik} = \begin{cases} 1, & \text{if vertex } i \text{ is served by vehicle } k,\\ 0, & \text{otherwise,} \end{cases} \quad y_{ik} \in \{0, 1\},\ i = 1, 2, \ldots, n,\ k = 1, 2, \ldots, m \tag{41.13}$$

Here λ_i denotes the volume of the materials required by point i and Λ the volume capacity of a vehicle. Among these relations, Eq. (41.1) is the objective function, which states that the total distribution path is the shortest. Equation (41.2) means that the number of vehicles used does not exceed the total number of vehicles in the distribution center. Equations (41.3) and (41.4) represent that the sum of demand of any vehicle completing


the delivery to disaster points is not greater than the load of the vehicle. Equations (41.5) and (41.6) indicate that there is one and only one vehicle for delivery at any disaster-stricken point. Equations (41.7) and (41.8) indicate that one delivery is completed and the vehicle starts once. Equation (41.9) presents the time window for receiving delivery at each disaster-affected point. Equations (41.10) and (41.11) represent the relationship between the earliest and latest time when the vehicle is delivered to the disaster-affected point and the time window for receiving services at the disaster-affected point. Equations (41.12) and (41.13) indicate the value range of the corresponding variable.
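To make the model concrete, the following is a small Python sketch (illustrative only, not the authors' implementation) that evaluates a candidate set of routes against the objective (41.1) and checks the capacity constraint (41.3) and the time windows (41.9)–(41.11); it assumes point coordinates, demands, a vehicle speed, and time windows are given, and that point 0 is the distribution center.

```python
import math

def evaluate(routes, coords, demand, windows, Q, speed):
    """routes: list of vehicle routes, each a list of customer indices (0 = depot).

    Returns (total_length, feasible). A plan is feasible if every route's total
    demand does not exceed Q and every point is reached inside its time window.
    """
    dist = lambda i, j: math.dist(coords[i], coords[j])   # Euclidean distance c_ij
    total, feasible = 0.0, True
    for route in routes:
        load, t, prev = 0.0, 0.0, 0                       # start at the depot at time 0
        for i in route:
            total += dist(prev, i)
            t += dist(prev, i) / speed                    # arrival time at point i
            load += demand[i]
            a_i, b_i = windows[i]
            if load > Q or not (a_i <= t <= b_i):         # constraints (41.3), (41.9)-(41.11)
                feasible = False
            prev = i
        total += dist(prev, 0)                            # return to the depot
    return total, feasible
```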

41.3 An Improved Ant Colony Algorithm for the MOVRPTW Problem

41.3.1 Ant Colony Algorithm

The classic ant colony algorithm is a simulation optimization algorithm proposed by Dorigo et al. The basic idea is as follows. At the beginning, m ants uniformly start from the distribution center and select disaster-stricken points to visit one by one according to their respective conditions. When a single ant has visited all the disaster-stricken points, it returns to the starting point. After all the ants return to the starting point, the shortest route is selected and the pheromone is updated according to the rules. Based on the traditional ant colony algorithm, the improved ant colony algorithm considers multiple factors in the selection of each distribution route and prioritizes feasible solutions to solve the MOVRPTW [8] problem. In the improved ant colony algorithm, vehicles load goods at the distribution center, comprehensively consider the time window constraints of each distribution point and the pheromone level of each route, and select the next distribution point according to a combination of determinism and randomness. When the next node is reached, the goods required by that distribution point are subtracted, reducing the weight and volume of the goods carried by the vehicle. Then, according to the same selection strategy, the next node is selected for delivery. After completing the distribution, each set of feasible solutions is recorded to strengthen the pheromone of the path taken. After many iterations, the solution with the shortest path is selected.

41.3.2 Selection Strategy

In each route selection, the ant at point i selects the next delivery point j according to the following equation:

$$j = \begin{cases} \arg\max\{(\tau_{ij})^{\alpha}(\eta_{ij})^{\beta}(\psi_{ij})^{\gamma}\}, & \text{if } q \le q_0,\\ S, & \text{otherwise,} \end{cases}$$

in which S means selecting the next node by roulette wheel according to the selection probability p_{ij}, where the selection probability of point j is as follows:

$$p_{ij} = \begin{cases} \dfrac{(\tau_{ij})^{\alpha}(\eta_{ij})^{\beta}(\psi_{ij})^{\gamma}}{\sum_{j \notin visited}(\tau_{ij})^{\alpha}(\eta_{ij})^{\beta}(\psi_{ij})^{\gamma}}, & \text{if } j \notin visited,\\[8pt] 0, & \text{otherwise,} \end{cases}$$

where τ_ij represents the amount of pheromone from point i to point j; η_ij represents the heuristic factor, defined as the reciprocal of the distance from point i to point j, i.e., η_ij = 1/d_ij; and ψ_ij is the urgency of going from point i to point j under the time window constraint, ψ_ij = 1/[max(1, |t_ij − e_j|)], where t_ij ∈ [s_j, e_j] is the period from point i to j and [s_j, e_j] represents the time window at point j. The parameter α represents the importance of the pheromone, β the importance of the heuristic factor, and γ the importance of the time constraint. q is a random variable between 0 and 1, and q_0 controls the balance between deterministic and random choice of the route. The set visited represents the set of points that the ant has already visited.
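The pseudo-random proportional choice above can be sketched in Python as follows; this is an illustrative reading of the rule (not the authors' MATLAB code), with tau, eta, and psi given as dictionaries keyed by candidate node.

```python
import random

def select_next(candidates, tau, eta, psi, alpha, beta, gamma, q0):
    """Choose the next delivery point from `candidates` (nodes not yet visited)."""
    weight = {j: (tau[j] ** alpha) * (eta[j] ** beta) * (psi[j] ** gamma)
              for j in candidates}
    if random.random() <= q0:                      # deterministic (exploitation) branch
        return max(weight, key=weight.get)
    total = sum(weight.values())                   # roulette-wheel (exploration) branch
    r, acc = random.uniform(0.0, total), 0.0
    for j, w in weight.items():
        acc += w
        if acc >= r:
            return j
    return max(weight, key=weight.get)             # numerical fallback
```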

41.3.3 Pheromone Updating

Pheromone is an important reference for ants to select access points. The pheromone update method used in this article is as follows:

$$\tau_{ij}(k+1) = (1-\rho)\times\tau_{ij}(k) + \sum_{l=1}^{m}\tau_{ij}^{l},$$

where τ_ij represents the pheromone from point i to point j, k represents the number of iterations, ρ is the pheromone volatilization factor, m is the total number of ants, and τ_ij^l is expressed as follows:

$$\tau_{ij}^{l} = \begin{cases} \dfrac{E}{L_l}, & \text{if edge } (i, j) \text{ belongs to } T_l,\\[6pt] 0, & \text{otherwise,} \end{cases}$$

where E is a constant used to control the amount of pheromone an ant deposits; L_l represents the length of the route taken by ant l; and T_l is the set of points on the route of ant l. The pheromone update method means that an


edge that appears in the routes of more ants receives more pheromone, which can speed up the convergence of other ants to a shorter path. However, it should be noted that too much or too little pheromone may lead to premature convergence, which is not conducive to finding the best route. Therefore, every pheromone value τ_ij is bounded by a maximum τ_max and a minimum τ_min, so that τ_min ≤ τ_ij ≤ τ_max.
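A minimal Python sketch of this update rule (again illustrative, not the authors' code) could look like the following, where `ant_routes` holds, for each reinforcing ant, its edge set T_l and route length L_l.

```python
def update_pheromone(tau, ant_routes, rho, E, tau_min, tau_max):
    """tau: dict mapping an edge (i, j) to its pheromone value.

    ant_routes: list of (edges, length) pairs, one per ant, where `edges`
    is the set T_l of edges travelled and `length` is the route length L_l.
    """
    for edge in tau:                                   # evaporation: (1 - rho) * tau
        tau[edge] *= (1.0 - rho)
    for edges, length in ant_routes:                   # deposit E / L_l on travelled edges
        for edge in edges:
            tau[edge] = tau.get(edge, 0.0) + E / length
    for edge in tau:                                   # clamp into [tau_min, tau_max]
        tau[edge] = min(max(tau[edge], tau_min), tau_max)
    return tau
```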

41.3.4 Algorithm Steps

Based on the above discussion, the improved ant colony algorithm is described as follows (a compact sketch of the main loop is given after the steps):

Step 1: Initialize the parameters: ant number l = 1, 2, …, m, iteration number k = 1, 2, …, MAXGEN, and use i and j to denote the delivery points. Set the initial pheromone between all points to the same constant, τ_ij(0) = C. With the distance between points d_ij, set the heuristic factor between points to η_ij = 1/d_ij. Define the load and volume capacity of each ant as W and T, respectively, and the required weight and volume of each distribution point as w_i and t_i, respectively. The time window of each disaster point is [a_i, b_i], and the speed of each ant is a constant v.
Step 2: Input the coordinates of the distribution center and the distribution points, and calculate the distance between each pair of points (the Euclidean norm), creating the distance matrix D. Using the constant ant speed v, calculate the transportation time between points and construct the time matrix E.
Step 3: At the beginning of an iteration, place the m ants at the distribution center, set the iteration counter k, and select the next distribution point according to the selection strategy given above.
Step 4: Considering the constraints, first determine whether the conditions are met after selecting the next delivery point, that is, whether the remaining load and volume of the ant meet the requirements of the next delivery point and whether the arrival time falls within its time window. If the conditions are met, the delivery proceeds to the next point and the selection of Step 3 is repeated; otherwise, the ant returns to the distribution center and its delivery is complete.
Step 5: Before ant l goes to the next delivery point, mark the delivered point as visited, store it in the set T_l, and select the next delivery point from the remaining points, updating the route length L_l at the same time.
Step 6: When all m ants have completed their deliveries, compare all the routes, select the best delivery plan, and update the pheromone.
Step 7: At the end of each cycle set k = k + 1; if k ≤ MAXGEN, continue to the next iteration, otherwise output the result and end the algorithm.
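The following compact Python skeleton ties Steps 1–7 together. It is a schematic reading of the procedure (not the published MATLAB program) and reuses the hypothetical helpers select_next and update_pheromone sketched earlier, together with a feasibility check in the spirit of Step 4.

```python
import math

def improved_aco(coords, demand, volume, windows, Q, V, speed,
                 m_ants=60, maxgen=50, alpha=1, beta=1, gamma=2,
                 rho=0.15, E=15, q0=0.5, tau0=1.0):
    n = len(coords)                                   # point 0 is the depot
    d = [[math.dist(coords[i], coords[j]) for j in range(n)] for i in range(n)]
    tau = {(i, j): tau0 for i in range(n) for j in range(n) if i != j}
    best_len, best_routes = float("inf"), None

    for _ in range(maxgen):                           # Step 7: iterate MAXGEN times
        solutions = []
        for _ in range(m_ants):                       # Step 3: launch the ants
            unvisited, routes, total = set(range(1, n)), [], 0.0
            while unvisited:
                route, load, vol, t, i = [], 0.0, 0.0, 0.0, 0
                while True:                           # Step 4: keep only feasible moves
                    ok = [j for j in unvisited
                          if load + demand[j] <= Q and vol + volume[j] <= V
                          and windows[j][0] <= t + d[i][j] / speed <= windows[j][1]]
                    if not ok:
                        break
                    eta = {j: 1.0 / d[i][j] for j in ok}
                    psi = {j: 1.0 / max(1.0, abs(d[i][j] / speed - windows[j][1])) for j in ok}
                    j = select_next(ok, {u: tau[(i, u)] for u in ok}, eta, psi,
                                    alpha, beta, gamma, q0)
                    t += d[i][j] / speed; load += demand[j]; vol += volume[j]
                    total += d[i][j]; route.append(j); unvisited.discard(j); i = j
                total += d[i][0]                      # return to the depot
                routes.append(route)
                if not route:                         # no remaining point is reachable
                    break
            edges = {(r[k], r[k + 1]) for r in routes for k in range(len(r) - 1)}
            solutions.append((edges, total))
            if total < best_len:                      # Step 6: keep the best plan
                best_len, best_routes = total, routes
        tau = update_pheromone(tau, solutions, rho, E, 1e-3, 10.0)
    return best_routes, best_len
```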


41.4 Case Analysis

According to the model construction ideas in the literature [9], it is assumed that in an emergency, an emergency material distribution center needs to deliver emergency materials to 19 disaster-stricken points. The delivery uses vehicles with a constant speed of 60 km/h, a load of 9 tons, and a volume of 8 cubic meters per vehicle. Assuming that there are roads between the distribution points, find the distribution plan with the shortest total delivery mileage. Table 41.1 gives the coordinates of each point, the weight and volume of the required materials, and the demand time window of each distribution point. Table 41.2 gives the distance matrix D between points, calculated from the coordinates. MATLAB is used for the algorithm programming, with the parameter values α = 1, β = 1, γ = 2, ρ = 0.15, E = 15, total number of ants m = 60, and number of iterations MAXGEN = 50. The example was simulated and solved 6 times, and the results are shown in Table 41.3. Among them, the third run gives the shortest path, and the driving route of each vehicle is: Vehicle 1:

Table 41.1 Basic data of emergency distribution cases
Number

X-axis /km

Y-axis/km

Equipment requirements d/t

Window opening ai /h

Window closing bi /h

Size λ/m3

1

348.00

153.00

0.0

0.00

0.00

0.0

2

345.00

190.00

1.2

3.00

3.50

2.5

3

372.00

129.00

1.8

0.70

0.85

0.5

4

323.00

122.00

2.0

0.60

0.70

3.0

5

419.00

104.00

0.8

1.40

1.50

2.6

6

431.00

179.00

1.5

2.65

2.75

3.6

7

430.00

157.00

1.0

2.30

2.40

1.4

8

300.00

197.00

1.0

1.90

2.00

1.0

9

350.00

125.00

1.0

1.10

1.20

2.0

10

378.00

139.00

1.7

0.55

0.65

1.4

11

323.00

110.00

0.6

1.70

1.80

2.0

12

341.00

220.00

0.2

2.70

2.80

1.2

13

371.00

150.00

2.4

0.30

0.40

0.5

14

391.00

230.00

1.9

1.45

1.55

0.8

15

378.00

221.00

2.0

1.20

1.30

1.3

16

367.00

257.00

0.7

4.00

4.10

1.6

17

402.00

283.00

0.5

3.30

3.40

1.7

18

426.00

283.00

0.2

2.90

3.00

1.5

19

417.00

224.00

3.1

1.90

2.00

0.8

20

332.00

105.00

0.1

1.60

1.70

1.4

418

Y. Wang et al.

Table 41.2 The distance between the distribution center and each disaster site Distance 1

2

3

4

5

6

7

8

9

10

1

0.00

37.12

33.94

39.82

86.26

86.97

82.09

65.11

28.07

33.10

2

37.12

0.00

66.70

71.47

113.45 86.70

91.18

45.54

65.19

60.74

3

33.94

66.70

0.00

49.49

53.23

77.33

64.40

99.03

22.36

11.66

4

39.82

71.47

49.49

0.00

97.67

122.11 112.57 78.44

27.16

57.56

5

86.26

113.45 53.23

97.67

0.00

75.95

54.12

151.02 72.12

53.90

6

86.97

86.70

77.33

122.11 75.95

0.00

22.02

132.23 97.34

66.40

7

82.09

71.18

64.40

112.57 54.12

22.02

0.00

136.01 86.16

55.02

8

65.11

45.54

99.03

78.44

151.02 132.23 136.01 0.00

87.65

9

28.07

65.19

22.36

27.16

72.12

97.34

86.16

87.65

0.00

31.30

10

33.10

60.74

11.66

57.56

53.90

66.40

55.02

97.20

31.30

0.00

11

49.73

82.96

52.55

12.00

96.18

128.16 116.86 89.98

30.88

62.17

12

67.36

30.26

96.13

99.63

139.78 98.89

109.04 47.01

95.42

89.05

13

23.19

47.70

21.02

55.56

66.48

66.64

59.41

85.14

32.64

13.03

14

88.19

60.95

102.77 127.62 129.07 64.81

82.76

96.79

112.72 91.92

15

74.32

45.27

92.19

82.46

81.60

16

105.72 70.51

17

140.76 109.07 156.89 179.33 179.80 107.96 129.07 113.41 166.33 145.98

18

151.60 123.32 163.19 191.12 179.13 104.12 126.06 152.55 175.32 151.78

19

99.00

20

50.59

113.25 123.97 67.62

128.09 141.98 161.59 100.89 118.19 89.93

79.62

105.11 138.70 120.01 47.12

68.24

97.20

100.00 82.00 133.09 118.51

120.07 119.54 93.52

85.98

46.64

19.23

87.00

123.60 110.94 97.40

23.90

Distance 11

12

13

14

15

16

19

1

49.73

67.36

23.19

88.19

74.32

105.72 140.76 151.60 99.00

50.59

2

82.96

30.26

47.70

60.95

45.27

70.51

85.98

3

52.55

96.13

21.02

102.77 92.19

4

12.00

99.63

55.56

127.62 113.25 141.98 179.33 191.12 138.70 19.23

5

96.18

139.78 66.48

129.07 123.97 161.59 179.80 179.13 120.01 87.00

6

128.16 98.89

66.64

64.81

67.62

100.89 107.96 104.12 47.12

123.60

7

116.86 109.04 59.41

82.76

82.46

118.19 129.07 126.06 68.24

110.94

8

89.98

47.01

85.14

96.79

81.60

89.93

9

30.88

95.42

32.64

112.72 100.00 133.09 166.33 175.32 119.54 26.90

10

62.17

89.05

13.03

91.92

11

0.00

111.46 62.48

12

111.46 0.00

76.15

50.99

37.01

45.22

13

62.48

0.00

82.46

71.34

107.07 136.56 143.92 87.13

59.54

14

137.92 50.99

82.46

0.00

15.81

36.12

54.12

63.51

26.68

138.22

15

123.87 37.01

71.34

15.81

0.00

37.64

66.48

78.40

39.11

124.78

76.15

82.00

17

18

57.20 20

109.07 123.32 79.62

128.09 156.89 163.19 105.11 46.64

133.41 152.55 120.07 97.40

118.51 145.98 151.78 93.52

57.20

137.92 123.87 153.44 190.18 201.34 147.75 10.29 87.69

105.80 76.10

115.35

(continued)

41 Research on Vehicle Routing Problem of Urban Emergency …

419

Table 41.2 (continued) Distance 1

2

3

4

5

6

7

8

9

10

37.64

0.00

43.60

64.47

59.90

155.97

136.56 54.12

66.48

43.60

0.00

24.00

60.84

191.26

201.34 105.80 143.92 63.51

78.40

64.47

24.00

0.00

59.68

201.29

39.11

59.90

50.87

59.68

0.00

146.23

16

153.44 45.22

107.07 36.12

17

190.18 87.69

18 19

147.75 76.10

20

10.29

87.13

115.35 59.54

26.68

138.22 124.78 155.97 191.26 201.29 146.23 0.00

Table 41.3 Six simulation results Simulation

1

2

Delivery mileage/km

1335.1

1345.3

Calculating time/s

2.99

Difference from minimum

18

2.94 28.2

3

4

5

1317.1

1325.2

3.05

3.01

0

8.1

6

1332.3 2.86 15.2

1327.5 2.91 10.4

1-13-10-3-9-20-11-1; Vehicle 2: 1-15-14-19-18-17-16 -1; Vehicle 3: 1-4-8-12-2-1; Vehicle 4: 1-5-7-6-1. The total shortest path is 1317.1 km, and the program execution time is 3.05 s. The driving route of each car is shown in Fig. 41.1, and the iterative diagram is shown in Fig. 41.2.

Fig. 41.1 Distribution roadmap

420

Y. Wang et al.

Fig. 41.2 Diagrammatic sketch of the optimal path iteration

41.5 Conclusion Aiming at the reality of “time tight and heavy tasks” in emergency material distribution in case of emergencies, this paper considers the time window constraint conditions in the vehicle routing problem, and proposes an improved ant colony algorithm to optimize the distribution path. In view of the limitations of the traditional ant colony algorithm, the improved ant colony algorithm prioritizes constraints such as time window, load, and volume constraints in path selection, and fully considers time in the way of pheromone update. Through the verification of specific examples, the improved ant colony algorithm is feasible and effective, and can better solve this kind of MOVRPTW problem, and has strong practical application value. At the same time, it should be noted that when the scale of the problem is larger, the solution time is longer, and we will conduct further research in the future.

References 1. Ren, J., Xi, H.W., Shi, X.F.: Genetic algorithm for vehicle scheduling problem of city emergency logistics distribution (in Chinese). J. Military Transport. Univ. 13(9), 70–73 (2011) 2. Li, Z., Jiao, Q.Q., Zhou, Y.F.: Multi-objective dynamic location-allocation model for postearthquake emergence facilities (in Chinese). Comput. Eng. 43(6), 281–288 (2017) 3. Dantzig, G.B., Ramser, J.H.: The truck dispatching problem. Manag. Sci. 6(1), 80–91 (1959). https://doi.org/10.1287/mnsc.6.1.80 4. Wang, F. Z., Yang, B. F., Deng, W., Zhang, Y.: Optimization of vehicle equipment distrubution route based on ant colony algorithm (in Chinese). Logist. Technol. 34(22), 204–206+219 (2015)

41 Research on Vehicle Routing Problem of Urban Emergency …

421

5. Lv, Y., Yuan, J.H., Sun, Y., Liu, J., Gong, C.Y.: Optimization of vehicle routing problem in military logistics on wartime (in Chinese). Contr. Decis. 34(1), 121–128 (2019). https://doi.org/ 10.13195/j.kzyjc.2017.0983 6. Li, Y.Q., Zhang, L.Y., Guo, C.S., Yu, R.H.: Application of animproved ACO in emergency logistics VRP (in Chinese). Math. Pract. Theory 42(9), 91–95 (2012) 7. Wei, X.: Research on vehivleroutin problem of emergemcy logistics based on improved ant colony algorithm. University of Jinan (2015) 8. Zhang, H.Z., Zhang, Q.W., Ma, L., Zhang, Z.Y., Liu, Y.: A hybrid ant colony optimization algorithm for a multi-objective vehicle routing problem with flexible time windows. Inf. Sci. 490, 166–190 (2019). https://doi.org/10.1016/j.ins.2019.03.070 9. Bai, M., Zhang, J.: A feasible priority solution of vehicle routing problem withant colony algorithm (in Chinese). Comput. Syst. Appl. 18(1), 110–113 (2009)

Chapter 42

UAV Trajectory Planning Based on PH Curve Improved by Particle Swarm Optimization and Quasi-Newton Method Aoyu Zheng, Bingjie Li, Mingfa Zheng, and Lisheng Zhang

Abstract Trajectory planning technology is an important foundation for the development of modern military intelligence games. This paper, based on the particle swarm optimization (PSO) and quasi-Newton method, aims to study the optimal trajectory planning of UAVs under threats such as obstacles and radar. Firstly, the five-order PH path curve in the complex number domain is established to determine the decision parameters. Secondly, in view of the shortcomings of traditional PH trajectory planning algorithms, the particle swarm optimization with directionality in the optimization process is introduced, which can iterate to a better solution at a faster rate. However, the PSO has a slower convergence speed in the later iteration, and then the quasi-Newton method is introduced to overcome its shortcomings and iterate to the optimal solution quickly. By organically combining the two algorithms, a new trajectory planning algorithm that can solve the defects of the basic PH algorithm is designed, which overcomes the shortcomings of the traditional path planning algorithm and the particle swarm algorithm. Meanwhile, the performance of the algorithm can be effectively improved. Finally, the effectiveness of our algorithm is verified by some simulation experiments.

42.1 Introduction UAV trajectory planning plays a vital role in the military intelligence game and has become a hot research topic in the field of operations research optimization. At present, trajectory planning algorithms mainly include A* algorithm [1], Dijkstra algorithm [2], artificial potential field method [3], genetic algorithm, particle swarm algorithm, ant colony algorithm, and other intelligent algorithms [4]. The following defects, however, are existed in the algorithms proposed above: the calculation amount of the A* algorithm increases exponentially, which planning effect is strongly dependent on the heuristic function; Dynamic programming algorithms are A. Zheng · B. Li · M. Zheng (B) · L. Zhang Air Force Engineering University, Xi’an, Shaanxi 710051, China e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 K. Nakamatsu et al. (eds.), Advanced Intelligent Technologies for Industry, Smart Innovation, Systems and Technologies 285, https://doi.org/10.1007/978-981-16-9735-7_42

423

424

A. Zheng et al.

limited by the spontaneous state space, which result in the combinatorial explosions; Genetic algorithms are prone to fall into premature maturity. What’s more, the initial and final velocity directions are not considered in the algorithms above [5]. The PH (Pythagorean hodograph) curve proposed by Farouki in 1990 was applied to the trajectory planning, which achieved certain research results [6]. Some shortcomings, however, are still existed in PH curve trajectory planning, which results in low efficiency in actual trajectory planning and difficulty in implementation: firstly, for the complicated interpolation of the PH curve in the real number field, it is not easy for engineering practice; secondly, traditional trajectory planning based on the PH curve does not take the factor of the maximum curvature of the UAV into account; thirdly, the continuous change of the curvature of the path curve is ensured in the original PH curve, but there is no effective algorithm to ensure the safety of the UAVs in the environment with obstacles; It is inefficient to adjust the track by changing the relevant parameters of the PH curve, as well as the poor performance of the algorithm; What’s more, the convergence speed of the particle swarm algorithm, which is fast at the initial stage of the iteration, will slow down when it near the optimal solution. Because of the problems above, there are still numerous difficulties in trajectory planning. Only by simplifying the calculation method of the PH curve and mentioning the operation efficiency of the high algorithm can the efficient of the trajectory planning be improved. This paper aims to construct an efficient and accurate algorithm to realize UAV trajectory planning by designing corresponding improvement algorithm. To simplify the calculation, PH curve is interpolated in the complex domain, then by combining the respective advantages of the particle swarm and the quasi-Newton method, a new path planning algorithm has been designed. According to the simulation results, the performance of the trajectory planning algorithm that based on the PH curve improved by the particle swarm optimization and quasi-Newton method has been improved, and the relatively ideal results have been achieved.

42.2 Problem Description The trajectory planning, that objective function is the shortest flight path, can be described as an optimization problem. At the same time, the obstacle including radar, surface-to-air missile, and mountains should be avoided. What’s more, there are also various constraints, such as the limitation of the maximum curvature, should be satisfied. Then, the mathematical model of trajectory planning is expressed as follows:  min J (r ) (42.1) s.t. ..Pi (r ) = 0.

42 UAV Trajectory Planning Based on PH Curve Improved …

425

where J (r ) is the cost function of the trajectory r , and Pi (r ) is the trajectory constraint.

42.3 UAV Trajectory Planning Model A trajectory planning model based on the PH curve is able to be established in this paper. The construction of the cost function and the modeling of constraint conditions are the two main aspects in the trajectory evaluation.

42.3.1 Trajectory Evaluation Model Assume that the parametric equation of the track r(t) = [x(t), y(t)], t ∈ [0, 1][0, 1], then the length s of the trajectory can be expressed as follows: 1 |˙r (t)|dt =

s= 0

1 

x˙ 2 (t) + y˙ 2 (t)dt

(42.2)

0

The trajectory should be as short as possible to reduce fuel consumption on the one hand, and shorten the flight time on the other hand to improve the efficiency of mission execution. Let ρ(t) be the bending curvature of the curve at a certain moment and K be the maximum bending curvature, then the curvature calculation formula is as follows: 

y  (t)x  (t) − y  (t)x  (t) k = max  3/2 x 2 (t) + y 2 (t)

 (42.3)

The maximum bending curvature should be as small as possible to avoid the enormous risk caused by the sharp turn. Three types of threat sources are mainly existed during the flight: terrain, radar, and surface-to-air missile. The threat of radar and surface-to-air missiles and an obstacle avoidance safety model is analyzed and constructed as follows. Assume that the maximum range of the radar, which is the most threatening detection threat in the process of UAV penetration at low altitude, is Rmax , then its maximum range can be calculated as follows: Rmax =

Pt G t G r λ2 σ 3 (4π ) L t L r Fr kT Br Dd

1/4 (42.4)

426

A. Zheng et al.

where Pt is the transmitting power, G t and G r are the gains of the transmitting and receiving antennas, λ is the wavelength of the transmitted wave, and σ is the average cross-sectional area of the detection target. L t and L r are the feeder loss, Fr is the receiver noise figure, k is the Boltzmann constant, T is the absolute temperature, Br is the receiver bandwidth, and Dd is the detection terminal identification constant. The UAV, which is detected by the radar, will be attacked by the hostile missile. In this paper, the threat of the surface-to-air missiles, as the same as the radar, will be regarded as a hemispherical area. Assume the probability that the UAV entering the threat area hit by the missile is PM , then PM = PV [1 − (1 − A M Y M ω M ) N ]

(42.5)

where PV is the probability of the UAV found by the missile, A M is the reliable launch probability of the missile, Y M is the reliable flight probability of the missile,ω M is the kill probability when a single-shot missile is launched, and N is the number of missiles launched together. This paper defines a function G to transform the obstacle avoidance problem into a mathematical model: G=

n i=1

1  2 min( (x(t) − xob,i ) + (y(t) − yob,i )2 ) − rob,i

(42.6)

where x_{ob,i}, y_{ob,i} are the center coordinates of threat source i, and r_{ob,i} is the threat radius of threat source i. Thus, the following trajectory planning model can be established:

\min J = s + G, \quad \text{s.t.} \quad k \le k_{\max}    (42.7)

It can be seen from the above formula that the trajectory planning problem is transformed into finding the minimum value of s + G under the constraint condition k ≤ k_max (Table 42.1).

Table 42.1 Curvature and path length of the trajectory under different parameter settings

ε1/ε0 |       100        |       200        |       400        |       800
      |  s        k      |  s        k      |  s        k      |  s        k
100   | 1020.343  0.303  | 1041.560  0.108  | 1084.844  0.126  | 1172.878  0.152
200   | 1020.038  0.290  | 1040.704  0.103  | 1083.186  0.044  | 1169.996  0.053
400   | 1020.223  0.272  | 1040.108  0.097  | 1081.477  0.037  | 1166.626  0.019
800   | 1021.834  0.243  | 1040.570  0.089  | 1080.352  0.035  | 1163.229  0.016


42.3.2 The PH Curve

The PH (Pythagorean hodograph) curve, whose offset can be represented exactly by a rational curve, is a special polynomial parametric curve. The arc length of the PH curve is a polynomial function of the curve parameter, and the quintic PH curve is the lowest-order PH curve that possesses an inflection point and can provide sufficient flexibility for the flight path.

Definition: Assume a polynomial parametric curve r(t) = [x(t), y(t)]; if there is a polynomial σ(t) satisfying σ²(t) = ẋ²(t) + ẏ²(t), then r(t) is called a PH curve. The necessary and sufficient condition for the plane polynomial curve r(t) to be a PH curve is

\dot{x}(t) = w(t)\left[u^2(t) - v^2(t)\right], \quad \dot{y}(t) = 2\,w(t)\,u(t)\,v(t)    (42.8)

where w(t), u(t), v(t) are non-zero real polynomials with gcd(u(t), v(t)) = 1; this paper takes w(t) = 1 to ensure that the curve has no singularity. The Bézier form of a polynomial of degree n is used to achieve numerical stability of the PH curve:

r(t) = \sum_{k=0}^{n} P_k \, b_k^n(t), \quad t \in [0, 1]    (42.9)

where b_k^n(t) = C_n^k (1-t)^{n-k} t^k and the P_k are the control points, which define the vertices of the “control polygon.” In the Bézier form of the quintic PH curve, u(t) and v(t) are quadratic polynomials expressed as follows:

u(t) = \sum_{k=0}^{2} u_k C_2^k (1-t)^{2-k} t^k, \quad v(t) = \sum_{k=0}^{2} v_k C_2^k (1-t)^{2-k} t^k, \quad t \in [0, 1]    (42.10)

where u_k, v_k (k = 0, 1, 2) are real numbers. Substituting them into Eq. (42.8), the relationship between the control points of the quintic PH curve can be obtained:


P_1 = P_0 + \tfrac{1}{5}\left(u_0^2 - v_0^2,\; 2u_0 v_0\right)
P_2 = P_1 + \tfrac{1}{5}\left(u_0 u_1 - v_0 v_1,\; u_0 v_1 + u_1 v_0\right)
P_3 = P_2 + \tfrac{2}{15}\left(u_1^2 - v_1^2,\; 2u_1 v_1\right) + \tfrac{1}{15}\left(u_0 u_2 - v_0 v_2,\; u_0 v_2 + u_2 v_0\right)
P_4 = P_3 + \tfrac{1}{5}\left(u_1 u_2 - v_1 v_2,\; u_1 v_2 + u_2 v_1\right)
P_5 = P_4 + \tfrac{1}{5}\left(u_2^2 - v_2^2,\; 2u_2 v_2\right)    (42.11)

The complex representation, which avoids the complex computation and the difficult engineering realization of working in the real-number domain, is introduced to calculate the PH curve:

\beta_k = u_k + v_k i, \quad k = 0, 1, 2    (42.12)

At the same time, let α = β_0, β = β_1/2, γ = β_2; then Eq. (42.11) can be converted into the following form:

P_1 - P_0 = \alpha^2/5
P_2 - P_1 = \alpha\beta/10
P_3 - P_2 = \beta^2/30 + \alpha\gamma/15
P_4 - P_3 = \beta\gamma/10
P_5 - P_4 = \gamma^2/5    (42.13)

Adding the expressions of the above formula together and setting P_5 − P_0 = d, the following equation is obtained:

\beta^2 + 3(\alpha + \gamma)\beta + 6(T_0 + T_5) + 2\alpha\gamma - 30\,d = 0    (42.14)

where T_0 and T_5 are the endpoint tangent vectors, satisfying T_0 ∥ ṙ(0) and T_5 ∥ ṙ(1). From the endpoint conditions of the interpolation it can be obtained that

\alpha^2 = T_0, \quad \gamma^2 = T_5    (42.15)

Solving Eqs. (42.14) and (42.15) gives α, β, γ, from which u_k, v_k (k = 0, 1, 2) and P_i (i = 0, 1, …, 5) can be obtained. Four sets of different solutions, which correspond to four PH curves, are generated after setting the initial conditions, and the curve with the smallest bending energy is selected as the flight path of the aircraft. The bending energy E of the curve can be calculated as follows:

E = \int_0^1 \rho^2(t)\,\sigma(t)\,dt    (42.16)

where ρ(t) is the curvature of the curve, which can be calculated as follows:

\rho(t) = \frac{2\left[u(t)\dot{v}(t) - v(t)\dot{u}(t)\right]}{w(t)\left[u^2(t) + v^2(t)\right]^{2}}    (42.17)

It can be concluded from the analysis above that the PH curve is determined by three elements: the coordinate positions of the initial and end points P_0 and P_5, the direction of the velocity, and its magnitude. The runway of the airport is fixed, so the coordinate positions of the initial and end points and the direction of the velocity are determined. Therefore, consideration should be given to increasing the magnitudes of the direction vectors T_0 and T_5 instead of changing the direction angles; the control points P_k are then moved, which brings about a change in the trajectory of the UAV. We can thus write

T_0 = \varepsilon_0(\cos\theta_s + i\sin\theta_s), \quad T_5 = \varepsilon_1(\cos\theta_f + i\sin\theta_f)    (42.18)

By substituting different ε_0 and ε_1 into Eq. (42.18), we obtain the corresponding PH track and its curvature; the related data are shown in Table 42.1. Once the parameters are given, the PH curve is uniquely determined; that is, the track length s, the trajectory curvature k, and the obstacle function G are functions of ε_0 and ε_1. The crux of the trajectory planning is to find suitable parameters that make the trajectory satisfy obstacle-avoidance safety and reach the optimum. Generally, an iterative method is used to match the flyable track with the Dubins path to obtain a flyable track that satisfies the curvature constraint; however, this method is cumbersome and has low computational efficiency [7]. The optimization process of particle swarm optimization, compared with traditional optimization algorithms, is directional: it drives the particles to advance continuously toward the optimal direction according to the current global optimal and local optimal solution positions. The vicinity of the optimal solution is reached at a faster rate, but the convergence speed as well as the efficiency decreases in the later stage. The quasi-Newton method, by contrast, can converge to the optimal solution superlinearly from an initial point that is near the optimal solution. Therefore, the convergence of the algorithm can be sped up by introducing the quasi-Newton method after the particle swarm algorithm has found a good solution.
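To make the mapping from (ε_0, ε_1) to a candidate track concrete, the following is a minimal numerical sketch of the construction described by Eqs. (42.13)–(42.18) and of the evaluation of s, k, and G. The endpoint data, obstacle list, and sampling density are illustrative assumptions rather than values from the paper, and NumPy is used for the numerical integration.

```python
import numpy as np
from math import comb
from itertools import product

def ph_quintic_track(P0, P5, theta_s, theta_f, eps0, eps1, m=400):
    """Candidate quintic PH curves for the given endpoint data (Eqs. 42.13-42.18);
    returns the sampled curve with the smallest bending energy, its length s and
    its maximum curvature k."""
    P0c, P5c = complex(*P0), complex(*P5)
    d = P5c - P0c
    T0 = eps0 * np.exp(1j * theta_s)                       # Eq. (42.18)
    T5 = eps1 * np.exp(1j * theta_f)
    t = np.linspace(0.0, 1.0, m)
    best = None
    for sa, sg in product((1, -1), repeat=2):              # four sign choices for alpha, gamma
        alpha, gamma = sa * np.sqrt(T0), sg * np.sqrt(T5)  # Eq. (42.15)
        # Eq. (42.14): beta^2 + 3(alpha+gamma)beta + 6(T0+T5) + 2*alpha*gamma - 30d = 0
        for beta in np.roots([1.0, 3 * (alpha + gamma),
                              6 * (T0 + T5) + 2 * alpha * gamma - 30 * d]):
            steps = (alpha**2 / 5, alpha * beta / 10,      # Eq. (42.13)
                     beta**2 / 30 + alpha * gamma / 15,
                     beta * gamma / 10, gamma**2 / 5)
            P = np.cumsum([P0c, *steps])                   # control points P0..P5
            B = np.array([comb(5, k) * (1 - t)**(5 - k) * t**k for k in range(6)])
            r = (P[:, None] * B).sum(axis=0)               # Eq. (42.9)
            dr = np.gradient(r, t)
            ddr = np.gradient(dr, t)
            speed = np.abs(dr)
            cross = dr.real * ddr.imag - dr.imag * ddr.real
            s = np.trapz(speed, t)                         # Eq. (42.2)
            k = np.max(np.abs(cross) / speed**3)           # Eq. (42.3)
            energy = np.trapz((cross / speed**3)**2 * speed, t)   # Eq. (42.16)
            if best is None or energy < best[0]:
                best = (energy, r, s, k)
    return best[1], best[2], best[3]

def obstacle_penalty(r, obstacles):
    """Obstacle function G of Eq. (42.6) for a sampled track r (array of complex points)."""
    return sum(1.0 / (np.min(np.abs(r - complex(x, y))) - rad) for x, y, rad in obstacles)

# Illustrative use: cost J = s + G of Eq. (42.7) for one parameter pair (eps0, eps1).
r, s, k = ph_quintic_track((0, 0), (900, 400), np.pi / 6, np.pi / 3, 500.0, 800.0)
J = s + obstacle_penalty(r, [(300.0, 150.0, 60.0), (600.0, 320.0, 80.0)])
```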

42.4 Trajectory Planning Algorithm

Based on particle swarm optimization and the quasi-Newton method, a new trajectory planning algorithm is designed in this paper in order to solve the model both efficiently and accurately. The initial swarm is composed of N particles in a two-dimensional region. Let ε_i and V_i be the current coordinates and velocity of particle i:


εi = (εi0 , εi1 ), V i = (vi0 , vi1 ).

(42.19)

The individual extremum p_best is the optimal position found by particle i after several iterations, and the global extremum g_best is the optimal position found by the entire swarm after several iterations. The update formulas for the particle velocity and position are as follows:

v_{id} = w \cdot v_{id} + c_1 r_1 (p_{id} - \varepsilon_{id}) + c_2 r_2 (p_{gd} - \varepsilon_{id}), \quad \varepsilon_{id} = \varepsilon_{id} + v_{id}    (42.20)

where w is the inertia weight used to balance the global and local search; c_1 and c_2, called the learning factors, are acceleration coefficients that adjust the relative speed toward the global and local optimal directions; and r_1 and r_2 are uniform random numbers on [0, 1]. An adaptive particle swarm algorithm, which updates the inertia weight under different conditions to improve the efficiency of the algorithm, is used in this paper. The inertia weight is reduced to enhance the local search ability, so that the global optimal solution is updated quickly, while the inertia weight is increased to enhance the global search ability for particles with low fitness:

w = \begin{cases} w_{\min} + \dfrac{(w_{\max}-w_{\min})(f-f_{\min})}{f_{av}-f_{\min}}, & f \le f_{av} \\ w_{\max}, & f > f_{av} \end{cases}    (42.21)

where f represents the objective function value of a particle at the current moment, and f_{av} and f_{min} represent the average and minimum values over all particles. The quasi-Newton method is introduced to make the algorithm converge to the optimal solution superlinearly after the particle swarm algorithm has iterated to the vicinity of the optimal solution. The trajectory planning model is first transformed into an unconstrained optimization by the external penalty function method, to which the quasi-Newton method is suited. Let K = −(k − 0.05); then

\min F(\varepsilon) = s(\varepsilon) + G(\varepsilon) + \sigma \left[ K^-(\varepsilon) \right]^2    (42.22)

where K^-(ε) = min{0, K(ε)} and σ is a very large number. In the quasi-Newton method, ∇²F(ε_k) is replaced by a positive definite matrix B_k, which is composed of the function values of F(ε) and its first derivatives; that is, use

\tilde{q}_k(\delta) = F(\varepsilon_k) + \nabla F(\varepsilon_k)^T \delta + \frac{1}{2}\,\delta^T B_k \delta    (42.23)


to approximate F(ε), and the minimum point of q̃_k(δ) is used as the search direction d_k:

d_k = -B_k^{-1} \nabla F(\varepsilon_k)    (42.24)

Then the new iteration point is calculated as follows:

\varepsilon_{k+1} = \varepsilon_k - \tau_k B_k^{-1} \nabla F(\varepsilon_k)    (42.25)

where τ_k is the optimal step size, determined by an exact line search. Let H_k = B_k^{-1} to avoid computing the inverse matrix; then

\varepsilon_{k+1} = \varepsilon_k - \tau_k H_k \nabla F(\varepsilon_k)    (42.26)

It is generally believed that the H_k obtained by the BFGS method has the best performance. Let the initial matrix H_0 = I; then the correction formula for H_k is

H_{k+1} = \left( I - \frac{\delta_k \gamma_k^T}{\delta_k^T \gamma_k} \right) H_k \left( I - \frac{\gamma_k \delta_k^T}{\delta_k^T \gamma_k} \right) + \frac{\delta_k \delta_k^T}{\delta_k^T \gamma_k}    (42.27)

where δ_k = ε_{k+1} − ε_k and γ_k = ∇F(ε_{k+1}) − ∇F(ε_k).

Then ε_{k+1} can be calculated from ε_k according to the iterative formula (42.26). It can be proved that the point sequence {ε_k} generated by this method converges to the minimum point of F(ε) superlinearly. The trajectory planning optimization algorithm can then be designed as follows (a sketch of this procedure is given below):

Step 1: Initialize ε_i = (ε_{i0}, ε_{i1}) and V_i = (v_{i0}, v_{i1});
Step 2: Calculate the fitness value of each particle;
Step 3: Obtain the individual and global optimal values of the particles, that is, the current optimal track;
Step 4: Adaptively update the weights according to the state of the particle swarm;
Step 5: Update the speed and position of the particles according to formula (42.20);
Step 6: If the algorithm meets the accuracy requirements, output the result, marked as ε^0, and go to Step 7; otherwise, go to Step 2;
Step 7: Take ε^0 as the initial point, set the parameter ξ, and set k = 0;
Step 8: Find the search direction d_k according to formula (42.24);
Step 9: Calculate the optimal step size τ_k and set ε_{k+1} = ε_k + τ_k d_k;
Step 10: If ‖g_k‖ < ξ, the algorithm terminates and the output is marked as ε; otherwise, go to Step 11;
Step 11: Calculate H_{k+1} according to formula (42.27), set k = k + 1, and return to Step 8.
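The following is a minimal sketch of Steps 1–11, assuming a generic objective function F(ε) (for example the penalized cost of Eq. (42.22) built from the track functions sketched earlier). The population size, learning factors, and inertia-weight bounds follow the values quoted in Sect. 42.5, while the quasi-Newton stage simply reuses SciPy's BFGS implementation instead of hand-coding the update of Eq. (42.27).

```python
import numpy as np
from scipy.optimize import minimize

def pso_quasi_newton(F, bounds, n_particles=30, iters=100,
                     c1=2.8, c2=1.3, w_min=0.9, w_max=1.2, seed=0):
    """Adaptive-weight PSO (Eqs. 42.20-42.21) followed by a quasi-Newton refinement."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))     # Step 1: positions
    v = rng.uniform(-(hi - lo), hi - lo, size=x.shape)       #         and velocities
    f = np.apply_along_axis(F, 1, x)                         # Step 2: fitness
    pbest, f_p = x.copy(), f.copy()                          # Step 3: individual bests
    gbest, f_g = x[f.argmin()].copy(), f.min()               #         global best
    for _ in range(iters):
        f_av, f_min = f.mean(), f.min()
        # Step 4: adaptive inertia weight, Eq. (42.21)
        w = np.where(f <= f_av,
                     w_min + (w_max - w_min) * (f - f_min) / (f_av - f_min + 1e-12),
                     w_max)
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        # Step 5: velocity and position update, Eq. (42.20)
        v = w[:, None] * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.apply_along_axis(F, 1, x)
        better = f < f_p
        pbest[better], f_p[better] = x[better], f[better]
        if f_p.min() < f_g:
            gbest, f_g = pbest[f_p.argmin()].copy(), f_p.min()
    # Steps 7-11: quasi-Newton (BFGS) refinement started from the PSO solution
    res = minimize(F, gbest, method="BFGS", options={"gtol": 1e-6})
    return res.x, res.fun

# Hypothetical use, with track_cost(eps) returning the penalized cost F(eps0, eps1):
# eps_opt, J_opt = pso_quasi_newton(track_cost, [(1.0, 2000.0), (1.0, 2000.0)])
```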


42.5 Experimental Simulation

MATLAB is used to simulate the model in this paper (Fig. 42.1). It is proved in Ref. [8] that when c_1 + c_2 = 4.1 and c_1/c_2 = 2.8/1.3, the particle swarm algorithm achieves better convergence. The population size is selected as 30 and the number of iterations as 100. The population is initialized randomly, and the maximum speed is set to v_max = x_max. The maximum inertia weight is w_max = 1.2 and the minimum inertia weight is w_min = 0.9. Assume that the maximum allowable curvature is k_max = 0.05. A number of terrain obstacles, as well as the radar and surface-to-air missile threats, are set up in the planning area; the trajectory planning results obtained are shown in the figure. It can be seen from the results that when ε_0 = 432.253 and ε_1 = 1326.512, the optimal trajectory is obtained in this area: its length is 1284.850 km, and its bending curvature of 0.0197 satisfies the maximum-curvature constraint. This trajectory planning algorithm therefore achieves the desired effect.

Fig. 42.1 Simulation results of the UAV trajectory planning based on the PH curve improved by particle swarm optimization and the quasi-Newton method: (a) initial trajectory group; (b) final trajectory; (c) iterative process of trajectory planning based on the PSO; (d) iterative process of trajectory planning based on the PH curve improved by particle swarm optimization and the quasi-Newton method


Comparing Fig. 42.1c and Fig. 42.1d, it can be concluded that the particle swarm optimization algorithm has a faster convergence speed in the initial stage of the iteration, but when it is close to the optimal solution the convergence speed becomes slower. The iteration speed is significantly improved after introducing the quasi-Newton method, which overcomes the defect of the particle swarm algorithm in the later stage.

42.6 Conclusion

Aiming at the defects of traditional PH-curve trajectory planning, this paper proposes a trajectory planning algorithm based on the PH curve improved by particle swarm optimization and the quasi-Newton method. The PSO, with faster calculation speed and better global search ability than traditional optimization algorithms, is able to converge to the global optimal solution with a higher probability. The quasi-Newton method is able to converge to the optimal solution superlinearly when the initial point is near the optimal solution. By organically combining the two algorithms, a new trajectory planning algorithm that resolves the defects of the basic PH approach is designed, overcoming the shortcomings of the traditional path planning algorithm and of the plain particle swarm algorithm. Finally, the effectiveness of our algorithm is verified by simulation experiments.

Acknowledgements This work was supported in part by the Natural Science Foundation of Shaanxi Province of China under Grant 2019JM-271.

References

1. Stentz, A.: Optimal and efficient path planning for partially-known environments. In: Proceedings of the 1994 IEEE International Conference on Robotics and Automation. IEEE (1994)
2. Dijkstra, E.: A note on two problems in connexion with graphs. Numer. Math. 1(1), 269–271 (1959)
3. Khatib, O.: Real-time obstacle avoidance for manipulators and mobile robots. In: Proceedings of the 1985 IEEE International Conference on Robotics and Automation. IEEE (2003)
4. Tu, J., Yang, S.X.: Genetic algorithm based path planning for a mobile robot. In: 2003 IEEE International Conference on Robotics and Automation (Cat. No. 03CH37422). IEEE (2003)
5. Labakhua, L., Nunes, U., Rodrigues, R., et al.: Smooth Trajectory Planning for Fully Automated Passengers Vehicles: Spline and Clothoid Based Methods and Its Simulation (2008)
6. Farouki, R.T., Sakkalis, T.: Pythagorean hodographs. IBM J. Res. Dev. 34(5), 736–752 (1990)
7. Shanmugavel, M., Tsourdos, A., White, B., et al.: Co-operative path planning of multiple UAVs using Dubins paths with clothoid arcs. Control Eng. Pract. 18(9), 1084–1092 (2010)
8. Zhang, L.-P., Yu, H.-J., Hu, S.-X.: Optimal choice of parameters for particle swarm optimization. J. Zhejiang Univ. Sci. 6(6), 528–534 (2005)

Chapter 43

Maintenance Data Collection and Analysis of Road Transport Vehicles’ Safety Components Fujia Liu, Xiaojuan Yang, Shuquan Xv, and Guofang Wu

Abstract Vehicle technology management covers the whole process from purchase to scrapping. Only by improving the level of vehicle technology management can we ensure the safe operation of vehicles and enhance the use efficiency of vehicles. In order to improve the management technology of road transport vehicles, ensure the safety of vehicles on the road, promote the utilization rate of transport vehicles, and reduce the transportation cost, this paper carries out large data collection and analysis of vehicle maintenance for the safety components of road transport vehicles, studies, and determines the structural principle, fault characteristics, and fault causes of each safety component, so as to provide technical support for ensuring vehicle operation safety.

43.1 Introduction

In recent years, with the rapid growth of social logistics demand in China, the road transportation market has an increasingly broad prospect. Although the rapid development of the road transportation industry has effectively promoted the process of China's modernization, it also brings social problems such as traffic safety, energy consumption, and environmental pollution. The decline of automobile technology makes these problems more serious. During the use of the vehicle, with the increase of mileage and service life, the comprehensive technical indicators and performance indicators of the vehicle will deteriorate. Scientific management of vehicle technology is inseparable from research on the key system components of vehicles; especially for road transport vehicles, the failure law of safety components should be analyzed and mastered, which is related to the driving safety of drivers and the economic benefits of automobile production enterprises. Therefore, this paper focuses on the fault characteristics of key safety components of vehicles, so as to improve the overall reliability level of road transport vehicles and reduce the occurrence of dangerous road accidents.

F. Liu · X. Yang (B) · S. Xv · G. Wu, Research Institute of Highway Ministry of Transport, Beijing 100088, China; e-mail: [email protected]

43.2 Key Safety Components of Road Transport Vehicles

Based on standards and specifications such as “Technical specifications for safety of power-driven vehicles operating on roads” (GB 7258-2017) [1], “Items and methods for safety technology inspection of motor vehicles” (GB 38900-2020) [2], “Safety specifications for commercial bus” (JT/T 1094-2016) [3], “Safety specification for commercial vehicle for cargos transportation–Part 1: Goods vehicle” (JT/T 1178.1-2018) [4], and so on [5], this paper sorts out the key onboard safety systems and components of road transport vehicles, excluding conventional mechanical components. The key safety components of operating passenger cars and trucks are shown in Tables 43.1 and 43.2.

43.3 Maintenance Data Collection of Road Transport Vehicles' Safety Components

In this paper, the vehicle maintenance record data of operating passenger cars and trucks are collected from the national automobile maintenance electronic health archives system. The collected data elements include maintenance enterprise name, VIN code, repair date, settlement date, repair mileage, fault description, maintenance items, maintenance accessories, etc. Among them, there are 11,804 operating passenger cars with about 20,000 vehicle-times of maintenance records, and 19,875 trucks with about 38,000 vehicle-times. Specific maintenance records are shown in Tables 43.3 and 43.4.

43.4 Fault Analysis of Safety Components of Road Transport Vehicles

For the key safety components of road transport vehicles sorted out above, through text preprocessing and fault data screening, this project extracts the fault mileage, fault description, maintenance items, fault causes, and other information of the various system components of passenger cars and trucks from the maintenance records, as shown in Tables 43.5 and 43.6.
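The kind of keyword-based screening described here could be sketched as follows, assuming the records have been exported to a CSV file whose columns follow the headings of Tables 43.3 and 43.4; the file name and the keyword list are illustrative assumptions, not part of the original system.

```python
import pandas as pd

# Illustrative keyword list: one entry per safety component of Tables 43.1 and 43.2.
SAFETY_KEYWORDS = {
    "Anti-lock braking device": ["ABS", "anti-lock"],
    "Tire pressure monitoring and alarm device": ["tire pressure"],
    "Safety airbag": ["airbag"],
    "Safety belt": ["safety belt", "seat belt"],
}

def screen_faults(csv_path):
    """Extract fault mileage, description and maintenance item per safety component."""
    df = pd.read_csv(csv_path)
    # basic text preprocessing: fill missing cells and strip whitespace
    for col in ("Fault description", "Maintenance items", "Maintenance accessories"):
        df[col] = df[col].fillna("").str.strip()
    rows = []
    for component, keys in SAFETY_KEYWORDS.items():
        pattern = "|".join(keys)
        hits = df[df["Fault description"].str.contains(pattern, case=False) |
                  df["Maintenance items"].str.contains(pattern, case=False)]
        for _, r in hits.iterrows():
            rows.append({"Component": component,
                         "Fault mileage (km)": r["Repair mileage (km)"],
                         "Fault description": r["Fault description"],
                         "Maintenance items": r["Maintenance items"]})
    return pd.DataFrame(rows)

# faults = screen_faults("maintenance_records.csv")
```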


Table 43.1 Safety components of operating passenger cars

Serial number | Classification | System component name
1  | Dynamic system | Automatic fire extinguishing device in engine compartment
2  | Dynamic system | Automatic fire extinguishing device for power battery box
3  | Steering system | Hydraulic power steering system
4  | Steering system | Electric power steering system
5  | Driving system | Electronic stability control system
6  | Driving system | Tire pressure monitoring and alarm device
7  | Driving system | Tire burst emergency safety device
8  | Driving system | Advanced driving assistance system (LDWS, AEBS, etc.)
9  | Braking system | Disc brake
10 | Braking system | Anti-lock braking device
11 | Braking system | Auxiliary braking device
12 | Braking system | Eddy current retarder alarm system
13 | Braking system | Speed limit/alarm function or device
14 | Braking system | Automatic brake clearance adjustment device
15 | Braking system | Brake lining replacement alarm
16 | Braking system | Brake air pressure display and pressure limiting device
17 | Vehicle safety protection device | Safety belt
18 | Vehicle safety protection device | Safety belt wearing reminder device
19 | Vehicle safety protection device | Safety airbag
20 | Vehicle safety protection device | Satellite positioning system vehicle terminal (video monitoring system)
21 | Vehicle safety protection device | Manual mechanical power-off switch

43.5 Conclusion

Based on the national automobile maintenance electronic health archives system, this paper collects nearly 60,000 vehicle-times of maintenance records of operating passenger cars and trucks. Based on the analysis and reconstruction of the maintenance record data, this paper studies and determines the fault characteristics, fault mileage, maintenance items, and fault causes of each safety component, so as to provide technical support for the product updating and maintenance of new onboard intelligent devices and safety devices that ensure vehicle operation safety.


Table 43.2 Safety components of operating trucks

Serial number | Classification | System component name
1  | Driving system | Electronic stability control system
2  | Driving system | Tire pressure monitoring and alarm device
3  | Driving system | Tire burst emergency safety device
4  | Driving system | Advanced driving assistance system (LDWS, AEBS, FCW, etc.)
5  | Braking system | Disc brake
6  | Braking system | Anti-lock braking device
7  | Braking system | Electronic braking system
8  | Braking system | Anti-lock braking device failure alarm device
9  | Braking system | Auxiliary braking device
10 | Braking system | Eddy current retarder alarm system
11 | Braking system | Speed limit/alarm function or device
12 | Braking system | Automatic brake clearance adjustment device
13 | Braking system | Brake lining replacement alarm
14 | Braking system | Brake air pressure display and pressure limiting device
15 | Vehicle safety protection device | Safety belt
16 | Vehicle safety protection device | Safety airbag
17 | Vehicle safety protection device | Satellite positioning system vehicle terminal (video monitoring system)

VIN

LZ*TAGBW*E1050*04

LZ*TAGBW*K1033*27

LZ*TAGBW*H1034*20

LZ*TAGBW*H1034*12

LZ*TAGBW*E1054*24

LZ*TAGBW*E1020*39

Maintenance enterprise

Dalian Xingguang automobile assembly repair factory

Zhangjiakou bus repair General Factory

Zhengzhou Yutong Bus Co., Ltd. Shanghai Sales Service Branch

Zhengzhou Yutong Bus Co., Ltd. Shanghai Sales Service Branch

Zhengzhou Yutong Bus Co., Ltd. Shanghai Sales Service Branch

Zhengzhou Yutong Bus Co., Ltd. Shanghai Sales Service Branch

2015-8-3

2016-2-22

2019-4-11

2019-4-16

2020-9-1

2021-3-12

Repair date

Table 43.3 Maintenance record of operating passenger cars

50,995

48,637

61,185

20,556

23,652

111,677

Repair mileage (km)

Disassembly and assembly of combination switch assembly

Disassembly and assembly of air reservoir

Disassembly and assembly of heating radiator

Monitoring host SIM card replacement

Maintenance items

The car doesn’t go in gear

Fuse box assembly replacement

Cracked rubber pad of Disassembly and drive motor assembly of drive motor support rubber pad

Combination switch internal short circuit

Air leakage in air reservoir

The vehicle reported insulation failure

It cannot monitor the vehicle data in the background

Fault description

(continued)

Rear closed electrical box

Auxiliary support rubber pad

Combination switch

Combination switch

Wall mounted electric heater

Monitoring host SIM card

Maintenance accessories


VIN

LZ*TAGBW*E1054*08

LZ*TAGBW*H1034*22

LZ*TAGBW*E1054*19

LZ*TAGBW*G1071*39

Maintenance enterprise

Zhengzhou Yutong Bus Co., Ltd. Shanghai Sales Service Branch

Zhengzhou Yutong Bus Co., Ltd. Shanghai Sales Service Branch

Zhengzhou Yutong Bus Co., Ltd. Shanghai Sales Service Branch

Zhengzhou Yutong Bus Lanzhou sales service branch

Table 43.3 (continued)

2020-8-6

2015-8-7

2018-8-13

2016-3-17

Repair date

26,211

108

17,297

45,387

Repair mileage (km)

Motor fault

Motor cooling water pump does not work

Air filter clip broken

Motor rotary transformer line open circuit alarm

Fault description

Bearing disassembly

Disassembly and assembly of motor cooling water pump

Disassembly and assembly of air filter element

Disassembly and assembly of electric control harness

Maintenance items

Bearing puller

Webasto pump

Air cleaner element

Rotary transformer harness of pure electric motor

Maintenance accessories


LN*1BAA*4LV600*24

LN*1BAA*5JV101*02

LN*1BAA*5JV101*02

LN*1BAA*5JV101*02

Shenyang Meilian Automobile Sales Service Co., Ltd

Tianjin Jinning Yuejin Automobile Sales Service Co., Ltd

Tianjin Jinning Yuejin Automobile Sales Service Co., Ltd

Tianjin Jinning Yuejin Automobile Sales Service Co., Ltd

2019-11-11

2019-12-8

2019-7-16

2021-1-19

2020-7-1

LN*1BAA*3JV101*90

Tianjin Jinning Yuejin Automobile Sales Service Co., Ltd

Repair date 2019-3-4

VIN

Jilin Jinyi LN*1BAA*1JV102*54 Automobile Distribution Co., Ltd

Maintenance enterprise

Table 43.4 Maintenance record of operating trucks

14,325

16,547

4771

19,339

26,993

14,288

Repair mileage (km)

Function failure of air conditioning electrical control box

Clutch hydraulic control assembly fails successfully

Multimedia host function failure

Clutch hydraulic control assembly fails successfully

Headlamp function failure

Combination switch card issuing

Fault description Integrated switch

Maintenance accessories

Replace the air conditioning electrical control box

Replace the clutch hydraulic control assembly

Replace multimedia host

Replace the clutch hydraulic control assembly

(continued)

Front air conditioning controller

Clutch hydraulic control

Head unit

Clutch hydraulic control

Remove the headlamp Headlights assembly

Disassembly and assembly of integrated switch assembly

Maintenance items


VIN

LN*1BAA*5LV100*98

LN*1BAA*6JV101*02

LN*1BAA*6JV101*02

LN*1BAA*6JV101*02

Maintenance enterprise

Henan Renhe Automobile Sales Co., Ltd

Tianjin Jinning Yuejin Automobile Sales Service Co., Ltd

Tianjin Jinning Yuejin Automobile Sales Service Co., Ltd

Tangshan Lunan senior car repair factory

Table 43.4 (continued)

2019-1-3

2019-11-13

2019-1-14

2020-12-3

Repair date

8075

24,727

8076

19,851

Repair mileage (km)

Air conditioning refrigerant

Cylinder head gasket, timing toothed belt, rocker arm assembly

Disassembly and Washer water storage assembly of radiator tank assembly and its air guide cover

Air conditioning Disassembly and system function failure assembly of automatic air conditioning system Engine performance degradation

Maintenance accessories

Remove the headlamp Headlights assembly

Maintenance items

Engine function failure Remove engine assembly

Poor sealing of headlamp

Fault description


Driving system

Tire pressure alarm

Tire pressure monitoring and alarm device

4500, 27,919

AEBS fault light on

Advanced driving 8978 assistance system (Lane departure warning system LDWS, Automatic emergency braking system AEBS, etc.)

Heavy and laborious steering; The vehicle is difficult to shift ESC fault light on

17,317, 56,985

Electric power steering system (EPS)

Oil leakage of steering oil tank

Function failure of fire extinguishing device

Self-explosion of vehicle firebomb

Fault description

Electronic stability control 5860 system ESC

36,760, 7748, 26,575

Hydraulic power steering system

15,984

Automatic fire extinguishing device for power battery box

Steering system

136,763, 56,992

Dynamic system

Fault mileage (km)

Component

Automatic fire extinguishing device in engine compartment

Classification

Table 43.5 Fault information table of operating passenger cars’ safety components Fault cause

Sensor aging and damage

ESC line fault

EPS controller failure

Tire sensor replacement

(continued)

Tire pressure monitoring sensor failure

Disassembly and assembly Due to internal program of tachograph interference of tachograph

Maintenance line

EPS controller assembly replacement

Disassembly and assembly The joint of oil tank of power steering reservoir body is not tightly sealed, resulting in oil leakage

Sensor disassembly

Disassembly and assembly Rocker switch damaged of firebomb

Maintenance items


Braking system

Classification

25,594, 290,749, 141,000

Brake air pressure display and pressure limiting device

Instrument fault light on

Serious corrosion and falling off of friction plate

31,230, 32,253, 39,164

Brake lining replacement alarm

Warning lamp does not work

Abnormal braking

24,519

Eddy current retarder alarm system

Retarder cooling water pipe burst and vehicle broke down

Automatic brake clearance 47,262 adjustment device

93,353

Auxiliary braking device

Fault cause

Buzzer device failure

The warning lamp is not powered on

Insufficient antifreeze

Parking brake air pressure sensor replacement

(continued)

Parking brake air pressure sensor failure

Disassembly and assembly Friction plate wear of friction plate wear alarm sensor failure alarm sensor

Disassembly and assembly Brake pad clearance of adjusting arm adjustment mechanism failure

Replace the buzzer

Replace the warning lamp

Fill with antifreeze

ABS hydraulic modulator assembly failure

Disassembly and assembly Disc brake friction plate of disc friction plate damaged

ABS fault light is on; ABS ABS hydraulic pump system adjustment maintenance performance degradation

The braking force is poor, and the brake makes abnormal noise

The buzzer doesn’t sound

190,519, 1181

Anti-lock braking device ABS

Maintenance items

Damage of Disassembly and assembly Wheel explosion-proof explosion-proof tire device of tire explosion-proof tire safety device of automobile device damaged

Fault description

Speed limit/alarm function 42,560 or device

16,606, 51,506, 19,482

1934

Tire burst emergency safety device

Disc brake

Fault mileage (km)

Component

Table 43.5 (continued)


352,116

Internal water ingress and rust

Airbag fault light on

Manual mechanical power-off switch

1281, 18,494

Safety airbag

The driver’s seat belt unfastened indicator does not light up

Communication error of traveling data recorder, unable to receive signal

20,110

Safety belt wearing reminder device

The safety belt cannot be used normally

Fault description

Satellite positioning 357,982, 328,887, system vehicle terminal 344,226 (video monitoring system)

59,751, 37,356

Safety belt

Vehicle safety protection device

Fault mileage (km)

Component

Classification

Table 43.5 (continued)

Instrument fuse failure

Safety belt anchor points fall off

Fault cause

Disassembly and assembly The sealing effect of the of manual mechanical device is not good power-off switch

Disassembly and assembly The internal of dashcam and communication module replacement of SIM card of the host is damaged

Disassembly and assembly Function failure of of airbag electronic control airbag electronic control unit unit

Replace the safety belt wearing reminder

Seat belt replacement

Maintenance items


31,000

Electronic brake system (EBS)

171,588

Vehicle forward collision warning system

2008, 11,379, 3498

16,541

Lane departure alarm

Anti-lock braking device ABS

8978

Automatic emergency braking system (AEBS)

57,514, 204,270, 34,129

72,776

Tire burst emergency safety device

Disc brake

4721, 7681, 7679

Tire pressure monitoring and alarm device

Braking system

11,254, 19,072, 2222

Electronic stability control system ESC

Driving system

Fault mileage (km)

Component name

Classification

Table 43.6 Fault information table of operating trucks’ safety components

EBS function failure

ABS function failure

Reduced braking performance

FCW function failure

LDWS function failure

AEBS fault light on

Tire burst

ABS electronic control unit function failure

ESC fault light on

Fault description

Fault cause

Disassembly and assembly of EBS module

Disassembly and assembly of ABS electronic control unit and sensor

Replace the front wheel brake pads

FCW impact sensor replacement

Camera disassembly

Disassembly and assembly of tachograph

Disassembly and assembly of tire explosion-proof device

Disassembly and replacement of tire pressure sensor

(continued)

Battery negative ground wire open circuit

ABS electronic control unit/sensor wire or electrical appliance short circuit

Brake pad wear

FCW collision sensor failure

Camera failure

Integrated machine host failure

Wheel explosion-proof tire safety device failure

Tire pressure sensor failure

Removing and Wheel speed sensor reinstalling wheel speed assembly failure sensor

Maintenance items


Vehicle safety protection device

Classification 13,006, 72,590

1000, 1,700,000

10,195

9450 22,417, 25,360, 26,851

61,651

Anti-lock braking device failure alarm device

Auxiliary braking device

Eddy current retarder alarm system

Speed limit/alarm function or device

Automatic brake clearance adjustment device

Brake lining replacement alarm

13,540, 34,191, 158,563 4650, 15,668, 26,310

Safety belt

Safety airbag

Brake air pressure display 242,563, 141,000, 5000 and pressure limiting device

Fault mileage (km)

Component name

Table 43.6 (continued) Disassembly and assembly of ABS warning lamp

Maintenance items

ABS alarm bulb failure

Fault cause

Function failure and performance degradation of actuator of airbag electronic control unit

Safety belt curling and stretching failure

Instrument fault light on

The front brake pad alarm line is damaged

Brake pad alarm

Warning lamp does not work

Warning lamp does not work

Brake pad clearance sensor failure

The warning lamp is damaged

The contact piece of alarm lamp switch is burnt out

Disassembly and assembly of airbag electric control unit actuator

Seat belt assembly replacement

Parking brake air pressure sensor replacement

(continued)

Airbag electronic control unit logic error

Seat belt assembly failure

Parking brake air pressure sensor failure

Replace the brake alarm Alarm line assembly line

Disassembly and assembly of brake pad clearance sensor

Replace the warning lamp

Replace the warning lamp

The retarder fault light is Retarder electronic Retarder electronic on; The retarder function control unit replacement control unit fault is sometimes unavailable

Warning lamp does not work

Fault description


Classification

Fault mileage (km) 66,953

Component name

Satellite positioning system vehicle terminal (video monitoring system)

Table 43.6 (continued) The tachograph sometimes reports an overspeed warning in place

Fault description Brush the data of tachograph and debug the online information of vehicle

Maintenance items

The internal system data of traveling data recorder is disordered

Fault cause



References

1. GB 7258-2017: Technical specifications for safety of power-driven vehicles operating on roads. Standards Press of China, Beijing (2018)
2. GB 38900-2020: Items and methods for safety technology inspection of motor vehicles. Standards Press of China, Beijing (2020)
3. JT/T 1094-2016: Safety specifications for commercial bus. China Communications Press, Beijing (2017)
4. JT/T 1178.1-2018: Safety specification for commercial vehicle for cargos transportation–Part 1: Goods vehicle. China Communications Press, Beijing (2018)
5. Gao, S., Zhang, S., Yao, C.: Study on foreign bus and coach passive safety. Bus Technol. Res. 3, 7–10 (2006)

Chapter 44

Effectiveness Evaluation Method of Marine Environmental Weapons and Equipment Based on Ensemble Learning

Honghao Zheng, Qingzuo Chen, Bingling Tang, Ming Zhao, Gang Lu, Jianjun Wang, and Guangzhao Song

Abstract Marine environmental factors have complex but mutually independent effects on the effectiveness of weapons and equipment; theoretical models are rarely applicable and the confidence of their evaluation results is low. This paper builds the evaluation model with the analytic hierarchy process and, following the idea of ensemble learning, fuses the theoretical model with an artificial intelligence model. This breaks through the limitation of traditional models that consider only fixed elements, mines new rules from the data, and optimizes the model parameters, so that evaluation results with a high degree of confidence are obtained and a balance of objectivity, accuracy, and robustness that is difficult to achieve with a single method is reached.

44.1 Introduction

Effectiveness evaluation methods for weapons and equipment are mainly based on theoretical models and expert experience; the ADC method, the index method, and the AHP method are used to evaluate the effectiveness of weapon equipment on the basis of experience and models. The results of weapon combat effectiveness assessment are significantly affected by Marine environmental factors, and different environmental factors have different effects on the assessment. Therefore, the core of the effectiveness evaluation method for weapons and equipment is to determine the relationship between the effects of the various Marine elements and the effectiveness of the weapons and equipment; that is, the weight distribution of the different environmental factors is extremely important. Whether the weights are objective or not has a direct impact on the accuracy of the evaluation model [1].

H. Zheng (B) · Q. Chen · B. Tang · M. Zhao · G. Lu · J. Wang · G. Song, CSSC Ocean Exploration Technology Research Institute Co., Ltd, Wuxi 214000, China; e-mail: [email protected]

Fig. 44.1 Evaluation model of shipborne take-off and landing aircraft (AHP hierarchy): target layer — carrier aircraft safety performance (A); criterion layer — human factor and environmental factor (B1, B2); index layer — experience knowledge (C1), conscientiousness (C2), upwind trimming (C3), crosswind trimming (C4), low cloud (C5), visibility (C6), wave height (C7)

44.2 Evaluation Method of Ensemble Learning Efficiency Based on AHP Modeling

44.2.1 Efficiency Evaluation Modeling Idea Based on AHP Modeling

AHP is an analytical method for multi-objective decision making that can not only qualitatively analyze the relationships among different levels but also quantitatively analyze the weight ratios among them. The main idea is to simplify a complex system into an ordered hierarchical structure by analyzing the related elements and their relationships and merging these elements into different levels [2]. Figure 44.1 shows the AHP evaluation model.
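For completeness, the following sketch shows one standard way in which the layer weights of such an AHP hierarchy can be computed from a pairwise comparison matrix, using the principal-eigenvector method together with Saaty's consistency ratio; the comparison values in the example are invented for illustration and are not taken from the paper.

```python
import numpy as np

def ahp_weights(A):
    """Weights from a pairwise comparison matrix A (principal eigenvector method)
    plus the consistency ratio CR = CI / RI."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    w /= w.sum()
    lam_max = vals[k].real
    ci = (lam_max - n) / (n - 1)
    ri = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}[n]  # Saaty's random index
    return w, (ci / ri if ri else 0.0)

# Hypothetical 3x3 comparison of criterion-layer factors
w, cr = ahp_weights([[1, 3, 5],
                     [1 / 3, 1, 2],
                     [1 / 5, 1 / 2, 1]])
```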

44.2.2 Build a Performance Evaluation System

Due to the complex but relatively independent influence of the various Marine elements on the effectiveness of weapons and equipment, this method divides the effectiveness evaluation into three levels: combat performance effectiveness, single effectiveness, and comprehensive effectiveness evaluation. The evaluation system of the impact of the Marine environment on equipment efficiency is established accordingly [3] (Table 44.1).

Table 44.1 Weapon effectiveness system table in Marine environment

Influence index of comprehensive effectiveness of weapons and equipment (first-level index) | Influence index of single effectiveness of weapons and equipment (second-level index) | Influence index of weapon equipment performance and effectiveness (third-level indicators)
Effect index of weapon equipment effectiveness | Survival effectiveness impact index | Overturning rate loss; camouflage gains and losses
  | Maneuvering effectiveness impact index | Movement speed gains and losses; avoid mobility profit loss
  | Strike and operation effectiveness impact index | Missile profit loss; torpedo profit loss
  | Shielding effectiveness impact index | Impact resistance profit and loss; anti-interference ability profit loss
  | Command and communication impact index | Radar influence profit loss; mast antenna loss

44.2.3 Evaluation Model Intelligent Learning Optimization Method Based on Ensemble Learning

The ensemble learning method is used to evaluate the effects of Marine environmental factors on the performance indexes of weapons and equipment; that is, both theoretical models and artificial intelligence models are considered and fused through the idea of ensemble learning. At the same time, a multi-index fusion method is used to evaluate and optimize the single efficiency indexes of weapon equipment. The specific implementation steps are shown in Fig. 44.2.

(1) Selection of influencing factors of the Marine environment

The following principles are followed when selecting the Marine environmental elements that may have an impact on the effectiveness of weapons and equipment and combat operations and when initially determining their weights:

(1) Principle of representativeness: Every element of the Marine environment has some impact on the effectiveness of weapons and operations, but it is impossible to monitor and calculate all factors in an actual assessment; attention should be paid to the key factors while secondary factors are ignored.
(2) Principle of independence: In the process of factor screening, all factors should be intrinsically relevant yet mutually independent; equivalence or inclusion relations must not exist, so that information redundancy is kept to a minimum.
(3) Feasibility principle: When selecting factors, full consideration must be given to the source of the data, the actual way of obtaining the information, and the requirements on its quantity and accuracy, so as to ensure the availability of the data corresponding to the selected Marine environmental factors.

Fig. 44.2 Evaluation of ensemble learning efficiency based on AHP modeling

(2) Normalization of Marine environmental factors and efficiency impact index

First, the data involved, including information related to the combat weapons and the Marine elements, are made dimensionless and normalized to ensure a uniform data format and to facilitate subsequent calculation. Suppose the theoretical value range of a factor is [x_1, x_2] and the actual value is x_α; the theoretical range is converted to [0, 200]. A score in [0, 100] indicates that the Marine environment adversely affects the effectiveness of the weapon, and 0 stands for devastating damage. Take the effect of wind speed on a boat as an example (the effect of lightning on helicopters can be treated in a similar way): because the effect of wind speed on the boat is obvious, when the wind speed prevents the boat from sailing, the impact assessment index of the wind speed is set to 0 (maximum damage); when there is no wind, it is set to 100 (neutral); and when the wind speed increases the boat's speed, the value is set within [100, 200]. The impact assessment indexes of the remaining meteorological and hydrological factors are more complex and need to be described and modeled on the basis of the corresponding support knowledge. The final output effectiveness evaluation index of weapons and equipment adopts the same convention, with a scoring range of [0, 200]: [0, 100] indicates that the environment has an adverse effect on the weapons and equipment, [100, 200] indicates a favorable effect, and 100 ± 20 represents the boundary region of environmental effects on the weapons.

Artificial intelligence algorithms represented by deep neural networks have obvious advantages over theoretical analysis methods in feature extraction and evaluation accuracy for complex data, but they require massive training data and sufficient model training. When the training data are limited, the method of ensemble learning can be used to integrate the theoretical analysis model and obtain credible evaluation results. In addition, the assessment accuracy can be further improved by continuously collecting data on the true fluctuations of weapon effectiveness during training and periodically retraining the AI model [4].
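A minimal sketch of the [0, 200] scoring convention described above is given below; the piecewise-linear mapping and the wind-speed breakpoints are illustrative assumptions, not values prescribed by the paper.

```python
import numpy as np

def impact_index(x, x_worst, x_neutral, x_best):
    """Map a raw factor value x onto the [0, 200] impact scale described above
    (0 = devastating damage, 100 = no effect, 200 = most favourable effect).
    Piecewise linear between three breakpoints given in increasing order."""
    return float(np.interp(x, [x_worst, x_neutral, x_best], [0.0, 100.0, 200.0]))

# Wind example (breakpoints are assumptions): x is the signed along-course wind
# component in m/s, where -25 m/s means the boat cannot sail and +10 m/s gives
# the largest speed gain.
score = impact_index(-8.0, x_worst=-25.0, x_neutral=0.0, x_best=10.0)   # ~68: adverse
```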

(3) Evaluation of three-level index of weapon and equipment effectiveness

The theoretical analysis method has been widely used in operational effectiveness evaluation, but it is difficult to apply to complex scenes because of its strong subjectivity. Therefore, artificial intelligence is introduced to evaluate combat effectiveness when simulating actual battlefield conditions, and the theoretical model and the artificial intelligence model are fused through the idea of ensemble learning to obtain higher decision-making accuracy and to improve the objectivity, robustness, evolvability, and generalization ability of the model. The specific design idea is as follows:

A_1 X_1 + A_2 X_2 + A_3 X_3 + \cdots + A_n X_n = S_1

where X_1, …, X_n are the Marine elements that have an impact on the effectiveness of a third-level index of the weapon equipment, A_1, …, A_n are the weights of the Marine elements, and S_1 is the impact index of the Marine environment on the effectiveness of the third-level index of the weapons and equipment (Fig. 44.3).

Fig. 44.3 Evaluation algorithm of three-level index of weapon and equipment effectiveness impact

(4) Evaluation of secondary index of weapon and equipment effectiveness

The secondary indexes of weapon and equipment effectiveness include survival effectiveness, maneuvering effectiveness, strike and operation effectiveness, protection effectiveness, and command and communication effectiveness. A multi-index fusion method is used to calculate the weight of each three-level index and to select an appropriate integration mode to fuse them into a unified second-level index of performance impact. The weight calculation methods include the G1 weight determination method, the grey relational analysis method, the comprehensive weight method, and so on; the integration methods are mainly the experience-reasoning index fusion method and the multiple attribute comprehensive evaluation method. In this step, the weight of each three-level indicator can be manually adjusted and corrected through the human–computer interaction system.

F(B_1 S_1, B_2 S_2, B_3 S_3, \ldots, B_n S_n) = Q_1

where S_1, …, S_n are the three-level effectiveness impact indexes feeding the secondary index, B_1, …, B_n are their weights, F stands for the multi-index integration method, and Q_1 is the impact index of the Marine environment on the effectiveness of the secondary index of the weapons and equipment (Table 44.2).
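As an illustration of this two-stage aggregation, the following sketch computes a third-level index as the weighted sum A_1X_1 + … + A_nX_n and then fuses several third-level indices into a secondary index. The additive and (weighted geometric) multiplicative modes are simple stand-ins for the integration relations listed in Table 44.2, and all weights and scores are invented for the example.

```python
import numpy as np

def third_level_index(X, A):
    """S = A1*X1 + ... + An*Xn: weighted sum of normalised marine-factor scores X
    with weights A (weights assumed to sum to 1)."""
    return float(np.dot(A, X))

def secondary_index(S, B, mode="additive"):
    """Fuse third-level indices S into one secondary index Q with weights B.
    'additive' and 'multiplicative' loosely mirror the first two integration
    relations of Table 44.2; the remaining modes could be added analogously."""
    S, B = np.asarray(S, float), np.asarray(B, float)
    if mode == "additive":
        return float(np.dot(B, S))
    if mode == "multiplicative":
        return float(np.prod(S ** B))        # weighted geometric fusion
    raise ValueError(mode)

# Illustrative numbers only: three marine factors scored on [0, 200]
S_mobility = third_level_index(X=[130.0, 90.0, 110.0], A=[0.5, 0.3, 0.2])
Q_manoeuvre = secondary_index(S=[S_mobility, 95.0], B=[0.6, 0.4], mode="additive")
```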

(5) Evaluation of the first-level index of weapon and equipment effectiveness

Similar to the evaluation method for the second-level indicators, the first-level indicators are obtained by weighted fusion of the second-level indicators.

Table 44.2 Basic characteristics of different fusion algorithms

Algorithm                 | Additive relation         | Multiplication relation | Substitution relation | Fuzzy large and small
Index relationship        | Independent               | Relevant                | Relevant              | Independent
Compensation relationship | Linear compensation       | Little compensation     | Full compensation     | Non compensation
Synthetic principle       | Prominent main indicators | Index juxtaposition     | Main index decision   | Focus on secondary indicators


44.3 Application Cases and Advantages of Ensemble Learning Assessment Model

44.3.1 Integrated Learning Application Case

(1) Aircraft engine fault diagnosis

To solve the problem of aero-engine fault diagnosis, the relationship between the distribution of the aero-engine attribute values and the fault categories is explored with a back-propagation neural network, which is good at fitting complex mapping relationships, while a CNN, which focuses on feature learning, is used to mine the features hidden among the sequence attributes. Finally, Dempster–Shafer evidence theory is used to fuse the decision information given by the two networks. Evidence theory regards multi-source data or multi-source information as multiple bodies of evidence and expresses the information of each evidence body as a basic probability assignment; the fusion probabilities are constructed by the synthesis formula, and the final judgment is given according to the decision rules. With the more comprehensive information obtained after fusion, the discrimination accuracy is improved markedly [5] (Table 44.3). In addition, the experimental comparison between the ensemble learning method and other common fault diagnosis methods is shown in Fig. 44.4: the recall rate of the ensemble learning method for the five gas-path fault modes is higher than that of the other five methods, which proves the feasibility and efficiency of the method. Furthermore, in order to verify the anti-interference ability of the ensemble learning method, artificial noise was added to the data and the above experiments were repeated; the results are shown in Fig. 44.5. Even when artificial noise is added, the ensemble learning method still produces better diagnosis results, which shows that it can improve the robustness of the model.

Table 44.3 Comparison of accuracy after integrated learning fusion

Data set | SVM    | Adaboost | DT     | RM     | CNN    | BPNN   | DS-NN
ModelA   | 0.8656 | 0.7533   | 0.7873 | 0.8470 | 0.9598 | 0.9489 | 0.9627
ModelB   | 0.8647 | 0.5923   | 0.902  | 0.924  | 0.9818 | 0.9753 | 0.9864

Fig. 44.4 Experimental comparison results

Fig. 44.5 Experimental results of anti-interference capability of ensemble learning method

(2) Photovoltaic power prediction

Photovoltaic power generation is affected by many factors, such as sunshine intensity and temperature, so the optical power fluctuates greatly with time. The factors affecting the optical power values can be divided into two categories. The first category is controllable and does not change for a long time, such as the installation tilt angle of the photovoltaic power generation components, the series–parallel matching mode of the modules, and the inverter capacity ratio; these factors affect the efficiency of the power generation equipment, which in turn determines the output level over a long period. The second category is the factors that change with time and environment, mainly meteorological data such as solar radiation intensity, temperature, and wind speed. The solar radiation intensity and spectral characteristics change with the meteorological conditions, so these factors affect the real-time power and cause the fluctuation of the power generation [6]. For the power-level prediction of photovoltaic generation, an optical power prediction method based on a dual deep neural network has been proposed: based on the idea of ensemble learning, the advantages of LSTM and BPNN are utilized to learn the photovoltaic power generation time-series data, and a genetic algorithm is used to find the weight ratio of the two network outputs in the fusion, so as to maximize the overall accuracy of the model. More accurate and robust prediction results are thus obtained (Table 44.4).

Table 44.4 Comparison of prediction results of accuracy and robustness of multiple models

Data set                       | BPNN correct rate | LSTM correct rate | Optimal ω1 (GA) | Optimal ω2 (GA) | Accuracy after dual depth network fusion
Northeast Power Grid           | 86.43             | 86.57             | 0.656           | 0.344           | 87.04
Liaoning power grid            | 80.05             | 89.01             | 0.33            | 0.67            | 89.01
Jilin power grid               | 75.44             | 74.90             | 0.81            | 0.19            | 75.71
Heilongjiang power grid        | 87.31             | 87.25             | 0.52            | 0.48            | 89.15
Inner Mongolia east power grid | 92.18             | 93.01             | 0.98            | 0.02            | 94.03
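The weight search described above can be illustrated with the following minimal sketch, in which a coarse grid search stands in for the genetic algorithm and "accuracy" is defined, for illustration only, as one minus the mean relative error; the prediction arrays are placeholders for validation-set outputs of the two trained networks.

```python
import numpy as np

def fuse_predictions(p_bpnn, p_lstm, w1):
    """Weighted fusion of the two model outputs with w1 + w2 = 1."""
    return w1 * p_bpnn + (1.0 - w1) * p_lstm

def search_fusion_weight(p_bpnn, p_lstm, y_true, grid=101):
    """Pick the weight w1 that maximises accuracy on a validation set.
    A coarse grid search is used here in place of the genetic algorithm."""
    def accuracy(pred):  # accuracy taken as 1 - mean relative error (an assumption)
        return 1.0 - np.mean(np.abs(pred - y_true) / np.maximum(np.abs(y_true), 1e-9))
    ws = np.linspace(0.0, 1.0, grid)
    scores = [accuracy(fuse_predictions(p_bpnn, p_lstm, w)) for w in ws]
    best = int(np.argmax(scores))
    return ws[best], scores[best]

# p_bpnn, p_lstm, y_true would be validation-set arrays from the two trained networks.
```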

44.3.2 Advantage Analysis of Ensemble Learning Assessment Model

Ensemble learning algorithms have now achieved good results in many application fields. Multiple complementary models are obtained by using different types of algorithms with similar performance to learn the task from different angles. According to the classical ensemble learning idea, a large-scale, diversified set of basic models is constructed, which improves the effect in a variety of tasks [7] (Fig. 44.6).


Fig. 44.6 Open evolvable evaluation model based on ensemble learning

Advantage analysis of the integrated learning assessment model:

(1) Integrate the advantages of multiple models

Because the influence of the ocean factors on weapon equipment effectiveness is complex but mutually independent, theoretical analysis models are rarely applicable and their evaluation results have low confidence. The effects of the Marine environmental factors on the performance indexes of weapons and equipment are therefore evaluated with the ensemble learning method, which can comprehensively consider the effects of multiple environmental factors and break through the rigidity of traditional models that consider only fixed factors. Through self-learning, mining new rules from the sample data, and modifying the model parameters, the model is continuously improved during use: the error between the model output and the actual situation is reduced, the model updates itself during the dynamic operation of the system, and evaluation results with higher confidence are obtained.

(2) Human–computer interaction modified weights

Man–machine interaction interface is reserved. Experts can modify the weight (An , Bn ) through the man–machine interaction system, and obtain the corresponding evaluation results through model calculation. The weights and evaluation results set by experts can be used as calibration of model training, as well as data samples of subsequent model training, which can greatly increase the confidence of the model.

(3) Classification of unbalanced data

In actual training, combat team members usually train under conventional weather conditions. Ensemble learning, however, requires a large amount of balanced data, that is, training data under extreme weather as well, in order to ensure the accuracy of the training results; therefore, how the data are classified is very important in the process of decision-aided algorithm fusion. Eus-bag, a Bagging ensemble algorithm based on evolutionary undersampling, can effectively improve the classification performance for minority-class samples in a class-imbalanced learning environment [8].

44.4 Conclusion

As tasks become more complex, the scale of data and models becomes larger. A single theoretical-model evaluation method or a single artificial intelligence evaluation method is mostly tied to a specific task and environment and has certain limitations, so it is difficult to obtain accurate and generalizable assessment results. By fusing the theoretical model and the artificial intelligence model with the idea of ensemble learning, higher decision accuracy can be obtained, and the objectivity, robustness, evolvability, and generalization ability of the model can be improved. Especially in the case of incomplete information, where statistical methods and data simulation techniques are difficult to apply to evaluation and decision making, ensemble learning can integrate expert knowledge, historical experience, a small number of samples, and other information to synthesize evaluation decisions from different perspectives and achieve a balance of objectivity, accuracy, and robustness that is difficult to achieve with a single method.

References

1. Ma, B.L., Liu, D.S.: Construction method of the assessment index framework in operation systems. Command Contr. Simul. 43(2), 7
2. Zhang, R.: Characteristic Diagnosis of Marine Environment and Risk Assessment of Marine Military Activities, pp. 195–203. Beijing Normal University Press (2012)
3. Cui, P.F., Yan, H.S., Fan, J.S.: Self-learning evaluation model of weapon equipment operational effectiveness under marine environment. Comput. Technol. Dev. 000(002), 32–36 (2013)
4. Li, N., Li, Y.H., Gong, G.H., Huang, X.D.: Intelligent effectiveness evaluation and optimization on weapon system of systems based on deep learning. J. Syst. Simul. 2020(8), 1425–1435 (2020)
5. Liu, Q.C.: Traffic state prediction based on ensemble learning. Southeast University (2015)
6. Li, G.Y., Guo, M.Y., Luo, Y.F.: Traffic congestion identification of air route network segment based on ensemble learning algorithms. J. Transp. Syst. Eng. Inf. Technol. 20(2), 170–177 (2020)
7. Ma, J., Yan, H.S., Yang, Q.H.: Improved RBF network evaluation model of weapon operational effectiveness under marine environment. Comput. Technol. Dev. 000(1), 19–23 (2015)
8. Qin, C., Gao, X.G., Chen, D.Q.: Distributed deep networks based on Bagging-Down SGD algorithm. Syst. Eng. Electron. 41(5), 90–96 (2019)

Chapter 45

A Dynamic Load Distribution Method for Multi-Robot

Yang Li, Lin Geng, Jinxi Han, Jianguang Jia, and Jing Li

Abstract To solve the problem of dynamic load distribution in a multi-robot grasping system, a load distribution method based on a parameterized generalized grasping inverse matrix is proposed. Firstly, the kinematics and force analysis of the multi-robot grasping system is carried out, dividing the grasping force into external force and internal force. Then, the dynamic manipulability of a serial robot is quantified using the acceleration ellipsoid, based on which the load distribution coefficient is determined. The virtual mass and virtual inertia of the object and the multi-robot system are defined, and the parameterized generalized grasping inverse matrix is established by combining them with the dynamic manipulability. It is proved that the proposed method can satisfy the internal-force-free load distribution mode. Simulation and experiments show that the proposed method can adjust the wrench output at the end effector of each robot, effectively avoiding overload of the robot joints, and realizes dynamic load distribution of the multi-robot grasping system.

45.1 Introduction

With the wide application of robots in industry, a single robot has become unable to complete some tasks, such as handling massive objects, cooperative welding and rescue operations. Compared with a single robot, a multi-robot system has the advantages of greater load capacity and higher dexterity [1]. When multiple robots operate the same object together, it is very important to reasonably distribute the load of the operated object to the end effector of each robot. Given the force and torque required by the operated object, infinitely many distribution schemes exist in theory [2, 3]. The load distribution problem of a multi-robot system is how to select the optimal distribution scheme.

Y. Li (B) · L. Geng · J. Han · J. Jia · J. Li Institute of Systems Engineering, Beijing, China e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 K. Nakamatsu et al. (eds.), Advanced Intelligent Technologies for Industry, Smart Innovation, Systems and Technologies 285, https://doi.org/10.1007/978-981-16-9735-7_45


Load distribution is usually treated as a constrained optimization problem [4, 5]; however, the real-time performance of optimization methods is poor, so it is difficult to solve the dynamic load distribution problem of multiple robots effectively. Walker et al. [6–8] proposed the concept of grasping internal force, decomposed the resultant force of the robots acting on the operated object into grasping external force and grasping internal force, and obtained an internal-force-free load distribution method for multiple robots by constructing a special grasping matrix. Chung [9] defined the concepts of multi-robot cooperative operation space and grasping space, and considered that the load should be distributed in the grasping space, with the generalized inverse matrix used to replace the grasping matrix proposed by Walker. However, the distribution method based on the generalized inverse matrix is essentially a symmetrical, uniform load distribution, which cannot adjust the force and torque at the end of each robot. To solve this problem, Erhart [10] defined the grasping internal force based on the virtual work principle, parameterized the grasping matrix, and proposed a more generalized load distribution method. However, this method requires the force, torque and position applied by each robot to satisfy a certain relationship, so it cannot effectively solve the load distribution problem of uneven grasping. Based on Erhart's work, Bais [11] parameterized the grasping matrix using a distribution coefficient to realize the adjustment between force and torque, but the subjectively determined distribution coefficient can lead to overload of the joint torque. Zhao [12] proposed the concept of a virtual concentrated inertial mass bar, which abstracts the operated object into particles and carries out load distribution in combination with the distribution coefficient to ensure internal-force-free grasping of the operated object.

Although internal-force-free grasping avoids the internal energy consumption of the multi-robot system, in dynamic load distribution the continuous change of robot configuration leads to a change in its dynamic manipulability. A small dynamic manipulability means that the ability of the robot to apply force in a specific direction decreases, and the load of the robot in that direction should be reduced to avoid overload of the robot joints; the internal-force-free load distribution method alone cannot solve this problem.

To solve the above problems, a multi-robot load distribution method based on dynamic manipulability is proposed in this paper. This method takes the dynamic manipulability as the index, determines the load distribution coefficient of each robot, parameterizes the inverse of the grasping matrix by using the virtual mass, virtual inertia and virtual centroid, and reasonably distributes the grasping force to the end of each robot. While satisfying internal-force-free load distribution, this method can adjust the force and torque borne by the end of each robot, give a flexible load distribution mode to the multi-robot system, and avoid the joint overload caused by a decline in the dynamic manipulability of the robot.


45.2 Kinematics and Force Analysis of Multi-Robot Grasping System

45.2.1 Kinematics of Multi-Robot Grasping System

Figure 45.1 shows the multi-robot grasping system. $S$ is the inertial coordinate system, the origin of the object coordinate system coincides with the center of mass $O$, $^{o}r_i$ is the robot grasping position described in the object coordinate system, and $O^*$ is the center of the applied force of the robots. $d$ is the distance between $O$ and $O^*$, and the description of the robot grasping position in the inertial coordinate system is

$$p_i = p_o + {}^{S}R_o \cdot {}^{o}r_i \quad (45.1)$$

The velocity and acceleration of the ith robot end effector can be obtained by calculating the first and second derivatives of Eq. (45.1), respectively:

$$\dot{p}_i = \dot{p}_o + \omega_o \times r_i \quad (45.2)$$

$$\ddot{p}_i = \ddot{p}_o + \dot{\omega}_o \times r_i + \omega_o \times (\omega_o \times r_i) \quad (45.3)$$

where $\omega_o$ is the angular velocity of the object in the object coordinate system and $r_i = {}^{S}R_o \cdot {}^{o}r_i$. Assuming that there is no sliding between the end of the robot and the object, $^{o}r_i$ is a constant value. The angular velocity of the end effector of the ith robot is equal to that of the object, so we have:

$$\omega_i = \omega_o, \quad \dot{\omega}_i = \dot{\omega}_o \quad (45.4)$$

Fig. 45.1 Multi-robot grasping system
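As a quick numerical check of Eqs. (45.1)–(45.3), the sketch below evaluates the position, velocity and acceleration of one grasping point from an assumed object motion; all numeric values, the identity rotation matrix and the variable names are illustrative assumptions, not data from the paper.

```python
# Numerical sketch of Eqs. (45.1)-(45.3): position, velocity and acceleration
# of the i-th grasping point from the object motion (all values illustrative).
import numpy as np

# object pose/motion expressed in the inertial frame S
p_o     = np.array([1.0, 0.5, 0.8])     # object centroid position
R_so    = np.eye(3)                     # object orientation (S <- object frame)
o_r_i   = np.array([-0.1, 0.0, 0.0])    # grasp point in the object frame (constant)
v_o     = np.array([0.2, 0.0, 0.1])     # object linear velocity
a_o     = np.array([0.0, 0.1, 0.0])     # object linear acceleration
omega_o = np.array([0.0, 0.0, 0.3])     # object angular velocity
alpha_o = np.array([0.0, 0.0, 0.05])    # object angular acceleration

r_i = R_so @ o_r_i                                      # r_i = R_so * o_r_i
p_i = p_o + r_i                                         # Eq. (45.1)
v_i = v_o + np.cross(omega_o, r_i)                      # Eq. (45.2)
a_i = (a_o + np.cross(alpha_o, r_i)
       + np.cross(omega_o, np.cross(omega_o, r_i)))     # Eq. (45.3)
print(p_i, v_i, a_i)
```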


45.2.2 Force Analysis of Multi-Robot Handling System

The relationship between the wrenches applied by the robot end effectors and the equivalent wrench acting on the object is

$$h_o^d = G h_i^d \quad (45.5)$$

where:

$$G = \begin{bmatrix} I_3 & 0 & \dots & I_3 & 0 \\ S(r_1) & I_3 & \dots & S(r_n) & I_3 \end{bmatrix} \quad (45.6)$$

where $S(r_i) \in \mathbb{R}^{3\times 3}$ is the antisymmetric matrix of $r_i$, $I_3$ is the three-dimensional identity matrix, $h_o^d = (f_o^d, \tau_o^d) \in \mathbb{R}^{6\times 1}$ is the resultant wrench on the object, $h_i^d = (f_i^d, \tau_i^d)$ is the wrench exerted by the ith robot end effector on the object, and $G$ is the grasping matrix. When the trajectory of the object is known, the acceleration of the object can be obtained by differentiating the trajectory, and $h_o^d$ can then be calculated by multiplying by the mass of the object. According to Eq. (45.6), the equilibrium equations can be derived as:

$$\begin{cases} f_o^d = \sum\limits_{i=1}^{n} f_i^d \\ \tau_o^d = \sum\limits_{i=1}^{n} \tau_i^d + \sum\limits_{i=1}^{n} r_i \times f_i^d \end{cases} \quad (45.7)$$

If $O$ coincides with $O^*$, that is, $d = 0$, the grasping system will not generate grasping internal force. If $d \neq 0$, the system will generate a grasping internal force $\tau_d$:

$$\tau_d = \sum_{i=1}^{n} r_i \times f_i^d \quad (45.8)$$

In order to maintain the motion posture of the object, the robots must provide a compensation torque $-\tau_d$ to balance the grasping internal force. Load distribution consists of distributing $h_o^d$ and $-\tau_d$ reasonably to the end of each robot.

45.2.3 Dynamic Manipulability of Multi-Robot

In order to avoid joint torque overload damaging the robot body, the ability of each robot to apply force in the acceleration direction of the object is taken as the index by which the load is reasonably distributed to the end of each robot. To quantify this ability, the acceleration ellipsoid is established. For a serial robot, the dynamic equation is


$$\tau = M(q)\ddot{q} + H(q,\dot{q})\dot{q} \quad (45.9)$$

The relationship between the robot joint velocity and the end effector velocity is

$$V = J(q)\dot{q} \quad (45.10)$$

where $J(q)$ is the Jacobian matrix of the robot. Differentiating Eq. (45.10) and substituting the joint acceleration into Eq. (45.9) gives

$$\dot{V} = JM^{-1}\tau + a_H \quad (45.11)$$

The joint torque of the robot is normalized by

$$\tilde{\tau} = L^{-1}\tau \quad (45.12)$$

where $L = \mathrm{diag}(\tau_1^{\max}, \dots, \tau_n^{\max})$. From Eq. (45.11) it can be obtained that

$$\dot{V} = JM^{-1}L\tilde{\tau} + a_H \quad (45.13)$$

According to the definition of the velocity manipulability ellipsoid, the robot acceleration manipulability ellipsoid can be obtained:

$$(\dot{V} + a_H)^T J^{-T} Q J^{-1} (\dot{V} + a_H) \le 1 \quad (45.14)$$

where $Q = ML^{-1}L^{-1}M$. Equation (45.14) describes a six-dimensional ellipsoid, which characterizes the acceleration capability of the robot to translate along the x, y and z directions and to rotate about the x, y and z directions. For the multi-robot grasping system considered here the attitude of the object does not change, so this paper only discusses the translational acceleration ellipsoid. Taking the dual-robot system shown in Fig. 45.2 as an example, the acceleration ellipsoid at a randomly selected point on the trajectories of robots R1 and R2 is examined.

Fig. 45.2 Dual-robot grasping model

Fig. 45.3 Acceleration ellipsoid of the dual-robot: (a) acceleration ellipsoid of robot R1; (b) acceleration ellipsoid of robot R2

Figure 45.3a shows the acceleration ellipsoid of robot R1. The radius of the ellipsoid along the X and Z axes is large, indicating that robot R1 has strong force output capacity

in the X and Z axes, while the radius of the ellipsoid along the Y axis is small, indicating that robot R1 has weak force output capacity along the Y axis, so the load of the robot in this direction should be reduced. Figure 45.3b shows the acceleration ellipsoid of robot R2; it is not difficult to see that robot R2 has strong force output capacity along the Z axis and weak force output capacity along the X and Y axes. In order to quantify the force output capability of a robot during motion, the dynamic manipulability is defined as the radius of the acceleration ellipsoid in the acceleration direction, and the load distribution coefficients of the robots are then

$$\beta_i = \frac{DM_i}{\sum\limits_{i=1}^{n} DM_i} \quad (45.15)$$

where $\beta_i$ is the load distribution coefficient of the ith robot, $0 < \beta_i < 1$, and $\sum\limits_{i=1}^{n} \beta_i = 1$.
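To make the index concrete, the sketch below computes a dynamic manipulability value as the radius of the translational acceleration ellipsoid of Eq. (45.14) along a commanded acceleration direction, and then forms the load distribution coefficients of Eq. (45.15). The Jacobians, inertia matrices and torque limits are illustrative stand-ins for 3-DOF arms with square translational Jacobians, which is an assumption made only so the ellipsoid can be evaluated directly.

```python
# Sketch of the dynamic manipulability index and of Eq. (45.15).  DM_i is
# taken as the radius of the (translational) acceleration ellipsoid of
# Eq. (45.14) along the commanded acceleration direction; all matrices and
# limits below are illustrative values, not data from the paper.
import numpy as np

def dynamic_manipulability(J, M, tau_max, accel_dir):
    """Radius of the ellipsoid x^T (J^-T Q J^-1) x <= 1 along accel_dir,
    with Q = M L^-1 L^-1 M and L = diag(tau_max)."""
    L_inv = np.diag(1.0 / np.asarray(tau_max, dtype=float))
    Q = M @ L_inv @ L_inv @ M
    J_inv = np.linalg.inv(J)
    A = J_inv.T @ Q @ J_inv
    u = accel_dir / np.linalg.norm(accel_dir)
    return 1.0 / np.sqrt(u @ A @ u)        # ellipsoid radius along u

def load_coefficients(DMs):
    DMs = np.asarray(DMs, dtype=float)
    return DMs / DMs.sum()                 # Eq. (45.15), coefficients sum to 1

# two robots in different configurations at the same instant
J1 = np.array([[0.4, 0.1, 0.0], [0.0, 0.5, 0.1], [0.1, 0.0, 0.3]])
J2 = np.array([[0.2, 0.0, 0.1], [0.1, 0.3, 0.0], [0.0, 0.1, 0.4]])
M1 = M2 = np.diag([2.0, 1.5, 0.8])         # joint-space inertia matrices
tau_max = [50.0, 40.0, 20.0]               # joint torque limits
a_dir = np.array([1.0, 0.0, 0.5])          # commanded acceleration direction

DM = [dynamic_manipulability(J, M, tau_max, a_dir)
      for J, M in zip([J1, J2], [M1, M2])]
beta = load_coefficients(DM)               # beta_1 + beta_2 = 1
print(DM, beta)
```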

45.3 Dynamic Load Distribution of Multi-Robot Grasping System

45.3.1 Virtual Mass and Virtual Inertia

In order to realize the dynamic load distribution of multiple robots, the concepts of virtual mass and virtual inertia are introduced, as shown in Fig. 45.4. Let the virtual mass, virtual inertia and virtual acceleration of the grasped object at a certain time be $m_o^*$, $J_o^*$ and $\ddot{p}_o^*$, respectively; the relationships between them are:


Fig. 45.4 Virtual mass, virtual inertia and virtual acceleration of the multi-robot system

$$m_o^* = \sum_{i}^{N} m_i^* \quad (45.16)$$

$$J_o^* = \sum_{i}^{N} J_i^* + \sum_{i}^{N} S(r_i)\, m_i^*\, S(r_i)^T \quad (45.17)$$

45.3.2 Load Distribution

Based on the virtual mass, virtual inertia and virtual acceleration determined in the previous section, the inverse of the grasping matrix $G$ is parameterized to obtain the generalized grasping inverse matrix:

$$G_p^+ = \begin{bmatrix} m_1^*/m_o^* \cdot I_3 & m_1^* \cdot [J_o^*]^{-1} \cdot S(r_1)^T \\ -J_{1o}^* \cdot [J_o^*]^{-1} \cdot S(r_1)/m_o^* & J_1^* \cdot [J_o^*]^{-1} \\ \vdots & \vdots \\ m_i^*/m_o^* \cdot I_3 & m_i^* \cdot [J_o^*]^{-1} \cdot S(r_i)^T \\ -J_{io}^* \cdot [J_o^*]^{-1} \cdot S(r_i)/m_o^* & J_i^* \cdot [J_o^*]^{-1} \end{bmatrix} \quad (45.18)$$

where $m_i^*/m_o^* \cdot I_3$ is the force distribution term; the force distributed to the end of the ith robot is determined by $m_i^*$, and the proportion of $m_i^*$ is determined by the robot load distribution coefficient $\beta_i$. $-J_{io}^* \cdot [J_o^*]^{-1} \cdot S(r_i)/m_o^*$ is the adjustment term between force and torque, in which $J_{io}^* = J_i^* + S(r_i)\, m_i^*\, S(r_i)^T$; the distribution mode of force and torque can be changed by adjusting the values of $m_i^*$ and $J_i^*$. $-J_i^* \cdot [J_o^*]^{-1} \cdot S(r_i)$ is a torque compensation term, which is used to counteract the disturbance torque generated by asymmetric grasping and is distributed to the end of the ith robot through $-J_{io}^* \cdot [J_o^*]^{-1} \cdot S(r_i)/m_o^*$. $J_i^* \cdot [J_o^*]^{-1}$ is the torque allocation term, and the torque allocated to the ith robot is determined by $J_i^*$.
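The bookkeeping behind Eqs. (45.16)–(45.17) and the two "pure" terms described above (force distribution $m_i^*/m_o^* \cdot I_3$ and torque allocation $J_i^* \cdot [J_o^*]^{-1}$) can be sketched as follows. The coupling and compensation blocks of $G_p^+$ are omitted here, and all numeric values are illustrative, so this is only a partial sketch of the idea, not the authors' implementation.

```python
# Sketch of the virtual-mass / virtual-inertia bookkeeping of
# Eqs. (45.16)-(45.17) and of the force-distribution and torque-allocation
# terms described above.  Only these diagonal terms of G_p^+ are evaluated;
# the coupling/compensation blocks are omitted.  All values are illustrative.
import numpy as np

def skew(v):
    """Antisymmetric matrix S(v), so that skew(v) @ w == np.cross(v, w)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def virtual_object(m_i, J_i, r_i):
    """Eqs. (45.16)-(45.17): aggregate virtual mass and virtual inertia."""
    m_o = sum(m_i)
    J_o = sum(J + m * skew(r) @ skew(r).T for m, J, r in zip(m_i, J_i, r_i))
    return m_o, J_o

# two robots: virtual masses from the load coefficients, unit virtual inertias
beta = [0.6, 0.4]
m_i = beta
J_i = [np.eye(3), np.eye(3)]
r_i = [np.array([-0.1, 0.0, 0.0]), np.array([0.1, 0.0, 0.0])]
m_o, J_o = virtual_object(m_i, J_i, r_i)

# desired object wrench (force, torque) to be distributed
f_o = np.array([0.0, 0.0, 49.0])          # e.g. supporting a 5 kg object
tau_o = np.array([0.0, 0.5, 0.0])

J_o_inv = np.linalg.inv(J_o)
for k in range(2):
    f_k = (m_i[k] / m_o) * f_o            # force distribution term
    tau_k = J_i[k] @ J_o_inv @ tau_o      # torque allocation term
    print(f"robot {k + 1}: f = {f_k}, tau = {tau_k}")
```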


45.4 Simulation

The model of a dual robot is constructed, as shown in Fig. 45.5. The trajectory of the dual robot is planned with cubic B-splines, and the acceleration curve of the object in the workspace is obtained, as shown in Fig. 45.6. Due to the asymmetric configuration of the two robots during movement, their dynamic manipulability differs. The dynamic manipulability of each robot is determined according to the acceleration trajectory, and the dynamic load distribution coefficient is shown in Fig. 45.7.

Fig. 45.5 Dual-robot grasping system simulation model

Fig. 45.6 Acceleration curve of the object in the workspace ($a_x$, $a_y$, $a_z$ versus $t$)

Fig. 45.7 Dynamic load distribution coefficients ($\beta_1$, $\beta_2$ versus $t$)

Fig. 45.8 Load curve at the end effector of the dual-robot: (a) force curve at the end effector of robot R1; (b) torque curve at the end effector of robot R1; (c) force curve at the end effector of robot R2; (d) torque curve at the end effector of robot R2

The virtual masses and inertias of the robot end effectors are defined as $m_1^* = \beta_1$, $m_2^* = \beta_2$, $J_1^* = J_2^* = I_3$, respectively, and the grasping positions as $r_1 = [-0.1, 0, 0]$ and $r_2 = [0.1, 0, 0]$, respectively. The running time of the dual-robot system is 3 s, the communication cycle is set to 10 ms, and the mass of the object is 5 kg. The force curve of each robot's end effector is obtained by $G_p^+$, as shown in Fig. 45.8. The force required by the movement of the object is reasonably distributed to the end effector of each robot by $G_p^+$. As shown in Fig. 45.8a, c, at 2.75 s the dynamic manipulability of robot R1 decreases, so the force of R1 gradually decreases in order to avoid joint overload, and robot R2 bears a larger share of the object load. Because the force exerted by the two robots on the object is asymmetric, the system will produce

grasping internal force, which requires a corresponding torque to balance it. Because the value of $S(r_i)m_i^*$ is relatively small, the distribution of torque is dominated by the virtual inertia $J_i^*$, and the internal force moment is borne equally by the two robots, since $J_1^* = J_2^*$. We can change the distribution of the internal torque by adjusting the values of $J_1^*$ and $J_2^*$; for example, letting $J_1^* = 2I$ and $J_2^* = I$, the internal torque distribution of the dual-robot is shown in Fig. 45.9. It is obvious that $\tau_1$ increases with increasing $J_1^*$, and $\tau_1$ is twice $\tau_2$, since $J_1^* = 2J_2^*$. Letting $m_1^* = \beta_1$, $m_2^* = \beta_2$, $J_1^* = J_2^* = I_3$, the joint torque of the dual-robot is solved by inverse dynamics; Fig. 45.9a, b show the joint torque curves of robots R1 and R2 over time, respectively. The torques of all joints of robots R1 and R2 are kept between 10 and −50 N·m and between 40 and −80 N·m, respectively.

Fig. 45.9 Torque curve at the end effector of the dual-robot: (a) torque curve at the end effector of robot R1; (b) torque curve at the end effector of robot R2

The torque of each joint changes smoothly without sudden changes, and the joint overload caused by the decrease of dynamic manipulability is avoided.

45.5 Conclusion

(1) The mathematical model of the multi-robot grasping system is established, the resultant force on the object is decomposed into the external force that maintains the motion of the object and the internal force, and the source of the internal force is analyzed.

(2) The dynamic manipulability ellipsoid of the robot is constructed, the force output capacity of the robot in the acceleration direction of the end effector is quantified, and the dynamic load distribution coefficient of the robot is determined according to this index.

(3) The virtual mass, virtual inertia and virtual centroid are used to parameterize the robot end grasping force, and a generalized grasping inverse matrix is proposed, which can satisfy the dynamic load distribution of the multi-robot grasping system.

(4) Simulation and experiments show that, by taking the dynamic manipulability of the robot as the basis of the load distribution coefficient and using the parameterized generalized grasping inverse matrix for dynamic load distribution, the load can be reasonably distributed to the end of each robot, so that the joint torque of each robot changes smoothly and joint overload is avoided, which is conducive to improving the stability of the multi-robot system. The distribution of force and torque between robots can be adjusted through the values of the virtual mass and virtual inertia, which improves the flexibility of load distribution in the multi-robot system.


References

1. Peng, Y.C., Carabis, D.S., Wen, J.T.: Collaborative manipulation with multiple dual-arm robots under human guidance. Int. J. Intell. Robot. Appl. 2(2), 252–266 (2018)
2. Davide, O., Rajkumar, M., Alessandro, F., et al.: Dual-arm cooperative manipulation under joint limit constraints. Robot. Auton. Syst. 99, 110–120 (2018)
3. Jia, W.J., Yang, G.L., Zhang, C.: Dynamic modeling with non-squeezing load distribution for omnidirectional mobile robots with powered caster wheels. In: 2018 13th IEEE Conference on Industrial Electronics and Applications, pp. 2327–2332. IEEE, USA (2018)
4. Korayem, M.H., Nekoo, R.S., Abbasi, E.: Dynamic load-carrying capacity of multi-arm cooperating wheeled mobile robots via optimal load distribution method. Arab. J. Sci. Eng. 39(08), 6421–6433 (2014)
5. Tetsuyou, W., Tsuneo, Y.: Grasping optimization using a required external force set. IEEE Trans. Autom. Sci. Eng. 04(01), 52–66 (2007)
6. Walker, I.D., Freeman, R.A., Marcus, S.I.: Analysis of motion and internal loading of objects grasped by multiple cooperating manipulators. Int. J. Robot. Res. 10(04), 396–409 (1991)
7. Koeda, M., Ito, T., Yoshikawa, T.: Shuffle turning in humanoid robots through load distribution control of the soles. Robotica 29(07), 1017–1024 (2011)
8. Bonitz, R., Hsia, T.: Force decomposition in cooperating manipulators using the theory of metric spaces and generalized inverses. In: IEEE International Conference on Robotics and Automation, vol. 2, pp. 1521–1527. IEEE, USA (1994)
9. Chung, J., Yi, B.J., Kim, W.: Analysis of internal loading at multiple robotic systems. J. Mech. Sci. Technol. 19(8), 1554–1567 (2005)
10. Erhart, S., Hirche, S.: Internal force analysis and load distribution for cooperative multi-robot manipulation. IEEE Trans. Rob. 31(5), 1238–1243 (2015)
11. Bais, Z.A., Erhart, S., Zaccarian, L.: Dynamic loading distribution in cooperative manipulation tasks. In: IEEE/RSJ International Conference on Intelligent Robots & Systems, pp. 2380–2385. IEEE, USA (2015)
12. Zhao, Z.G., Lu, T.S.: Coordinated dynamic load distribution for multi-robot collaborative towing system. Robot 34(1), 114–119 (2012)

Chapter 46

Research on CNN-Based Image Denoising Methods

Wei Liu, Chao Zhang, and Yonghang Tai

Abstract As a fundamental task in digital image processing, an effective denoising method can restore an image disturbed by noise to the characteristics of the original clean image. In this paper, we introduce the current mainstream image denoising methods and use a deep learning model, the convolutional neural network, to implement image denoising. In our experiments, our method can effectively denoise noisy images and restore the original features of the images well.

46.1 Introduction

With the increasing popularity of various digital image capture devices, images have become the mainstream medium through which people receive information in their daily lives, but various processes are involved between image capture and the acquisition of information. Due to factors such as the external environment and the transmission process during image capture, images can be affected by noise, which usually leads to loss and distortion. When noise is present in an image, it has a negative impact on the post-processing of the image (e.g. feature information extraction, image fusion, etc.), so it is important to denoise digital images. Denoising a noisy image can improve its quality, thus solving the problem of image quality degradation caused by noise interference. Existing image denoising methods can increase the signal-to-noise ratio of an image, allowing the information in the image to be better represented. Image denoising is an important task in the field of digital image processing and is also a fundamental image processing task. In recent years, the field of image denoising has made very great achievements [1, 2]. We can describe image denoising in the following way:

$$y = x + d \quad (46.1)$$

W. Liu · C. Zhang (B) · Y. Tai (B) School of Physics and Electronic Information, Yunnan Normal University, Kunming, China e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 K. Nakamatsu et al. (eds.), Advanced Intelligent Technologies for Industry, Smart Innovation, Systems and Technologies 285, https://doi.org/10.1007/978-981-16-9735-7_46


In Eq. (46.1), $y$ is regarded as the noisy image, $x$ is the pure image without noise, and $d$ is additive white noise with a given standard deviation. The denoising of digital images mainly aims to reduce the noise in the original image, restore more image features and increase the information utilisation of the image. The field of image denoising currently faces many challenges: (1) flat areas need to be smoothed; (2) the edges of the image should not be blurred; (3) as many of the image features as possible should be preserved; and (4) no new noise should be introduced. The field of image denoising has received major research effort in recent years in order to obtain good modelling methods. There are two broad categories of image denoising methods [3]: spatial domain-based methods [4] and transform domain-based methods [5]. In this article, we use a convolutional neural network as our main model. In our experiments, the method we used was able to achieve image denoising and restore the original features of the image well.
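A minimal sketch of the degradation model of Eq. (46.1) is shown below: a noisy observation y is produced from a stand-in clean image x by adding zero-mean white Gaussian noise d with a chosen standard deviation (the image size and noise level are illustrative).

```python
# Minimal sketch of the degradation model of Eq. (46.1): a noisy observation
# y is produced from a clean image x by adding white Gaussian noise d with a
# chosen standard deviation (sigma = 25 on a 0-255 scale is illustrative).
import numpy as np

rng = np.random.default_rng(0)
x = rng.integers(0, 256, size=(128, 128)).astype(np.float32)  # stand-in clean image
sigma = 25.0
d = rng.normal(0.0, sigma, size=x.shape).astype(np.float32)   # white noise
y = np.clip(x + d, 0.0, 255.0)                                # noisy observation y = x + d
```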

46.2 Related Work

46.2.1 Spatial Domain Approach

The main idea of spatial domain-based image denoising methods is to compute the grey value of each pixel block of the image and use it to reduce the noise. Spatial domain-based methods can be broadly divided into variational denoising methods and spatial domain filtering methods. Filtering is an important technique in image processing, and filters [7, 8] are widely used in image denoising algorithms. Early attempts used linear filters for spatial domain noise removal, but this approach discards the texture features of the image. Gaussian noise reduction has used mean filtering [9], which has the problem of over-smoothing the image at high noise levels. Wiener filtering [10] overcomes this drawback but tends to blur edges. Non-linear filters allow for edge smoothing and noise reduction, such as median filtering [11] and weighted median filtering [12]. The use of low-pass filtering by spatial filters reflects the fact that noise is usually found in the high-frequency part of the image spectrum. Spatial filters can remove some of the noise and improve the quality of the image, but to some extent they cause blurring of the image. Existing variational denoising methods compute the denoised image by using an image prior and minimising an energy function.

46.2.2 Transformation Field Method

Transform domain methods, which developed later than spatial domain methods, include wavelet domain transform methods [13] and 3D filtering [14]. The core idea of the transform domain approach to denoising is that noise and image information are characterised differently in the transform. In the transform domain-based approach, the noisy image is transferred to another domain and the noise at different locations of the image is then removed. Independent Component Analysis (ICA) [15] and PCA [16] have been successfully applied to denoise images with non-Gaussian data. The wavelet transform method is widely used in the field of image denoising; the input data is decomposed into multiple scales after the wavelet transform. Numerous studies have proven that the wavelet transform can effectively remove image noise while retaining more image features [17, 18]. The wavelet transform has many excellent properties, such as sparsity and multi-scale representation, so it remains a widely used denoising method at present. However, a key factor of the wavelet transform is the selection of the wavelet basis: when the wavelet basis is not selected appropriately, the image cannot be represented properly in the wavelet domain, which easily leads to unstable denoising results. Block-matching and 3D filtering (BM3D) stacks similar patches into a 3D group, which is then transformed into a wavelet domain, where Wiener filtering is applied. Eventually, the patches estimated in this way are aggregated into the reconstructed image by inverse transformation of the coefficients.

46.3 Method

46.3.1 Convolutional Neural Networks (CNN)

The Convolutional Neural Network (CNN) [6] was originally used for image modelling and has also been widely applied in the field of natural language processing. A convolutional neural network is a network structure with shared weights, which has fewer adjustable parameters and lower learning complexity than MLPs, DBNs and other deep learning networks; it is highly invariant to translation, tilt, scaling and other forms of deformation when processing two-dimensional images. Figure 46.1 shows a typical structure of a convolutional neural network, which consists of an input layer, convolutional layers, pooling layers, a fully connected layer and an output layer.

Fig. 46.1 Convolutional neural network model diagram, in which C-L represents the convolutional layer, P-L represents the pooling layer and F-L represents the fully connected layer

46.3.2 Convolutional Neural Networks for Image Denoising

The methods for solving the modelling equation of image denoising are based on the image prior and the image degradation process. These methods fall into two main categories: methods based on model optimisation and methods based on

convolutional neural networks. CNNs achieve denoising by learning to remove the degradation from the image. In this paper, we use a CNN to design our denoising network, where the input to the model is a noisy image and the output is the denoised image together with the estimated noise, with the input and output of the model having the same size. The middle of the network contains n processing layers, each consisting of convolutional units and rectified linear units. Each convolution operation is effectively a linear filter, so a CNN readily processes images with linear features. To compensate for this limitation of the convolution operation, a non-linear activation function is added after each convolutional layer and its output forms the input of the next layer, enabling the network to process images with non-linear features. The rectified linear unit (ReLU), a non-linear mapping function, is commonly used as the activation function in CNNs. Compared to sigmoid-like activation functions, ReLU can be trained efficiently and quickly on large and complex neural networks. The hidden layers of the network consist of several convolutional and deconvolutional layers. After the image is input to the network, it is convolved using a 5×5 convolutional kernel; our model structure is shown in Fig. 46.2.
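The sketch below is a minimal PyTorch network consistent with this description: a stack of 5×5 convolutions with ReLU activations between them and an output of the same size as the input, trained with an MSE loss against the clean image. The depth of 10 layers follows Sect. 46.4.2, while the channel width and the direct (non-residual) output are assumptions; this is not necessarily the authors' exact architecture.

```python
# Minimal PyTorch sketch of a plain convolutional denoiser consistent with the
# description above: a stack of 5x5 convolutions with ReLU activations and a
# same-size output.  The depth (10 layers) follows Sect. 46.4.2; the channel
# width (64) and the non-residual output are assumptions, not the authors' model.
import torch
import torch.nn as nn

class ConvDenoiser(nn.Module):
    def __init__(self, channels=1, features=64, num_layers=10):
        super().__init__()
        layers = [nn.Conv2d(channels, features, kernel_size=5, padding=2),
                  nn.ReLU(inplace=True)]
        for _ in range(num_layers - 2):
            layers += [nn.Conv2d(features, features, kernel_size=5, padding=2),
                       nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(features, channels, kernel_size=5, padding=2)]
        self.net = nn.Sequential(*layers)

    def forward(self, noisy):
        return self.net(noisy)          # denoised image, same size as the input

# training step sketch: minimise the MSE between the output and the clean image
model = ConvDenoiser()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

noisy = torch.rand(4, 1, 64, 64)        # stand-in batch of noisy patches
clean = torch.rand(4, 1, 64, 64)        # corresponding clean patches
loss = loss_fn(model(noisy), clean)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```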

Fig. 46.2 CNN model diagram

46.4 Experiment

46.4.1 Experimental Setting

We designed and trained our model with PyTorch, a highly modular deep learning framework. It is a completely open-source framework that allows you to design

your own functions and change the interface. We used the Python language for model development. Our models were trained on an image workstation with two NVIDIA Tesla V100 GPUs, relying on NVIDIA's powerful GPU computing capability to enable fast training. To assess the performance of image denoising methods, PSNR [19] is used as a representative quantitative measure.
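For reference, the PSNR used as the quantitative measure can be computed as below (for 8-bit images the peak value is 255); the arrays are illustrative stand-ins for the clean reference and the denoised output.

```python
# Sketch of the PSNR quality measure used for evaluation (for 8-bit images,
# MAX = 255); the inputs are illustrative numpy arrays.
import numpy as np

def psnr(reference, estimate, max_value=255.0):
    mse = np.mean((reference.astype(np.float64) - estimate.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")             # identical images
    return 10.0 * np.log10(max_value ** 2 / mse)

# e.g. comparing a denoised output with the clean reference image
rng = np.random.default_rng(0)
clean = rng.integers(0, 256, size=(128, 128))
denoised = np.clip(clean + rng.normal(0, 5, size=clean.shape), 0, 255)
print(f"PSNR = {psnr(clean, denoised):.2f} dB")
```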

46.4.2 Experimental Results

To demonstrate the effectiveness of our model, we selected images from the Berkeley segmentation dataset [20], pre-processed them to build the dataset, and selected one of the images for testing after the model was trained. We set the number of layers of the CNN to 10. Figure 46.3 shows the noisy image and the image after processing by the model; we can see that our model effectively denoises the image and retains more features of the image. The PSNR of the processed image is 34.1 dB.

Fig. 46.3 Experimental results. The image on the left is the original image, the image in the middle is the image with noise added, and the image on the right is the processed image


46.5 Conclusion

In this paper, we investigate methods of digital image denoising, comparing the advantages and disadvantages of existing image denoising methods. Building on the analysis of other methods, we use a deep learning model, the convolutional neural network, for image denoising, and the experimental results show that our method can effectively achieve digital image denoising while retaining more image features.

References

1. Motwani, M.C., Gadiya, M.C., Motwani, R.C., Harris, F.C.: Survey of image denoising techniques. In: Proceedings of GSPX 27, 27–30 (2004)
2. Jain, P., Tyagi, V.: A survey of edge-preserving image denoising methods. Inf. Syst. Front. 18(1), 159–170 (2016)
3. Diwakar, M., Kumar, M.: A review on CT image noise and its denoising. Biomed. Signal Process. Control 42, 73–88 (2018)
4. Wiener, N.: Extrapolation, Interpolation, and Smoothing of Stationary Time Series: With Engineering Applications. MIT Press, Cambridge (1950)
5. Jung, A.: An introduction to a new data analysis tool: independent component analysis. In: Proceedings of Workshop GK "Nonlinearity"-Regensburg, vol. 39(1), pp. 127–132 (2001)
6. LeCun, Y., Bottou, L., Bengio, Y., Haffner, P.: Gradient-based learning applied to document recognition. Proc. IEEE 86(11), 2278–2324 (1998)
7. Tomasi, C., Manduchi, R.: Bilateral filtering for gray and color images. In: Sixth International Conference on Computer Vision (IEEE Cat. No. 98CH36271), pp. 839–846. IEEE (1998)
8. Bouboulis, P., Slavakis, K., Theodoridis, S.: Adaptive kernel-based image denoising employing semi-parametric regularization. IEEE Trans. Image Process. 19(6), 1465–1479 (2010)
9. Zohair, A.A., Shamil, A.A., Sulong, G.: Latest methods of image enhancement and restoration for computed tomography: a concise review. Applied Medical Informatics 36(1), 1–12 (2015)
10. Benesty, J., Chen, J., Huang, Y.: Study of the widely linear Wiener filter for noise reduction. In: 2010 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 205–208. IEEE (2010)
11. Pitas, I., Venetsanopoulos, A.N.: Nonlinear Digital Filters: Principles and Applications, vol. 84. Springer Science+Business Media (2013)
12. Yang, R., Yin, L., Gabbouj, M., Astola, J., Neuvo, Y.: Optimal weighted median filtering under structural constraints. IEEE Trans. Signal Process. 43(3), 591–604 (1995)
13. Licheng, J., Biao, H., Shuang, W., Fang, L.: Image Multiscale Geometric Analysis: Theory and Applications. Xian Electronic Science and Technology University Press, Xi'an (2008)
14. Dabov, K., Foi, A., Katkovnik, V., Egiazarian, K.: Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 16(8), 2080–2095 (2007)
15. Jung, A.: An introduction to a new data analysis tool: independent component analysis. In: Proceedings of Workshop GK "Nonlinearity"-Regensburg, vol. 39(1), pp. 127–132 (2001)
16. Zhang, L., Dong, W., Zhang, D., Shi, G.: Two-stage image denoising by principal component analysis with local pixel grouping. Pattern Recogn. 43(4), 1531–1549 (2010)
17. Portilla, J., Strela, V., Wainwright, M.J., Simoncelli, E.P.: Image denoising using scale mixtures of Gaussians in the wavelet domain. IEEE Trans. Image Process. 12(11), 1338–1351 (2003)
18. Combettes, P.L., Pesquet, J.C.: Wavelet-constrained image restoration. Int. J. Wavelets Multiresolut. Inf. Process. 2(04), 371–389 (2004)
19. Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE Trans. Image Process. 13(4), 600–612 (2004)


20. Martin, D., Fowlkes, C.C., Tal, D., Malik, J.: A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In: Proceedings Eighth IEEE International Conference on Computer Vision (ICCV 2001), vol. 2, pp. 416–423 (2001)

Chapter 47

Sheep Posture Recognition Based on SVM

YaJuan Yao, Han Tan, Juan Yao, Cheng Zhang, and Fang Tian

Abstract The evaluation of body size parameters and of morphology based on body size parameters can reflect the growth and development characteristics, production performance and genetic characteristics of sheep. Therefore, studying the measurement of body size parameters and morphological evaluation is an effective way to guide reasonable breeding on sheep farms and improve breeding efficiency. In machine-vision-based measurement of human body size, changes of posture have a great influence on the measured body size parameters. In this paper, a segmentation algorithm that divides the sheep contour into head, body and legs is proposed. Based on this algorithm, five posture features are extracted: the angle between the hip point and the lower right corner coordinate of the head, the angle between the hip point and the lower left corner coordinate of the head, the angle between the lower right corner coordinate of the body and the lower right corner coordinate of the head, the angle between the lower right corner coordinate of the body and the upper left corner coordinate of the head, and the length–width ratio of the circumscribed rectangle. The test results show that the accuracy of the program built on the hierarchical support vector machine algorithm reaches 95.68%.

47.1 The Significance of Sheep Posture Recognition

With the popularization of scientific and fine breeding and the continuous attention to food safety, modern animal husbandry is becoming more and more large-scale, intensive and scientific. In the sheep industry, sheep body size parameters are important factors in breed selection; they reflect the process of sheep growth and development and allow feed utilization and carcass quality to be evaluated.

483

484

Y. Yao et al.

great significance for sheep industry to obtain the physical parameters of sheep. In the process of obtaining the characteristic parameters of sheep, the physical parameters and behavior recognition of sheep will change with the different postures of sheep. So posture recognition of sheep is the basis of obtaining physical parameters of sheep. The posture of animals reflects their physical state. Understanding the posture of animals plays an important role in animal breeding and management. In order to reduce the situation changes of animals caused by personnel negligence, posture estimation is one of the research hotspots (Cui Wei, Liu Juxiong, the indispensable “fulcrum” of healthy breeding—the research status of animal behavior and its application in practice: China Animal Health, 2007) [1]. Deep learning has made good progress in this field, the most prominent of which is human posture assessment. The behavior of animals is closely related to their physical state and health. Different behaviors convey different health information. Behavior is composed of different postures, and the acquisition of postures is the basis of understanding behavior. In zoo or breeding management, animals may get sick. If it cannot be found and treated in time, there will be unnecessary losses (Zhang Rong, Li Weiping, Mo Tong, review of deep learning research: information and control, 2018) [2]. Using pose estimation for detection can realize unmanned monitoring, discover physical changes of animals in time, understand animal behavior information, better protect endangered animals and improve the efficiency of breeding management.

47.2 Methods and Status of Animal Posture Recognition With the development of agricultural technology, people began to explore the machine learning methods combined with the actual scenes, so as to reduce manpower and make the acquisition of various data more convenient and accurate. Rousseau et al. [3] used image processing technology and neural network method to construct the behavior classification system of experimental rats. By analyzing the changes of nose, tail root and center of gravity of experimental rats, the posture of experimental rats was identified. In 2008, Zhang Wen and others proposed hog feature, trained texture and shape classifiers by AdaBoost algorithm, combined with the training results, used SVM algorithm to detect the animal posture. Zhao Kaixuan et al. used decision tree forest machine learning method to realize the fine division of each region of dairy cattle in 2017 [4]. In 2018, Han yibo studied the individual behavior recognition of pigs based on support vector machine and convolutional neural network, and effectively identified the walking and lying posture of pigs [5]. In 2019, BASANG Wang-dui et al. [6] used the machine learning methods to estimate the weight of yak, and its accuracy could reach 0.91. This paper proposes an algorithm to segment the head, body and legs of sheep. Through this algorithm, the information of sheep can be effectively transformed into a set of vector features. We analyze, process and optimize this set of features. Finally,

47 Sheep Posture Recognition Based on SVM

485

five feature values are selected to classify sheep posture based on SVM algorithm. The algorithm is fast and of high accuracy.

47.3 An Algorithm to Segment the Head, Body and Legs of Sheep 47.3.1 Definition of Sheep Posture Classification In supervised learning, all data are labeled, and the algorithm learns how to predict the output from the input data. The accuracy of labels in supervised learning is the basis of machine learning. In order to get an accurate labeled data set, we define the posture of sheep as follows: 1.

Definition of standard posture

2.

When measuring the body size of sheep, the sheep stand on the flat ground with a natural standing posture. The head and neck should stretch naturally, not be raised or drooped, neither to the left nor to the right. We define this as a standard posture. The head is at a distance from the upper right side of the body. As shown in Fig. 47.1. Definition of bowing posture

3.

When the sheep is in the state of lowing head, the measurement method of its body size parameters changes. We define the head on the right side or the lower right side of the body (slightly lower, not too low) as a bowing posture. As shown in Fig. 47.2. Definition of turning posture When a sheep turns its head, its body is distorted, and there maybe an error smaller than the actual measured body size parameter. We define the position that the body and the head cross or are very close to each other as the posture of turning the head. As shown in Fig. 47.3.

Fig. 47.1 Standard posture

486

Y. Yao et al.

Fig. 47.2 Bow posture

Fig. 47.3 Turning posture

47.3.2 Establish Posture Image Data Set In this paper, a total of 10,000 sheep images are collected and pre-processed to extract the contour information. Based on the contour information, the 10,000 images are classified according to the standards mentioned in Sect. 47.3.1 (standard posture, bowing posture and turning posture) and are labeled. It can be seen that the images of standard posture, bowing posture and turning posture account for 38.2%, 26.4% and 25.4%, respectively.

47.3.3 The Sheep Segmentation Algorithm In order to complete the posture recognition, it is necessary to segment the head, body and legs of the contour. A self-developed algorithm is used in this paper to segment the head, body and legs from the sheep contour. The specific methods are as follows:

1. Find the circumscribed rectangle and the centroid of the contour.
2. Draw a horizontal line through the centroid, intersecting the contour at two points.
3. Regard the two intersection points as the upper left corner and lower right corner of an inscribed rectangle.
4. Generate a rectangle at each angle, and calculate the proportion of the rectangle within the posture and the area of the rectangle.
5. Set a threshold to select the largest rectangle that meets the requirements as the body part.
6. Take the rightmost point on the right and the highest point on the top, take the lowest point whose x value lies between those of the highest point and the rightmost point, compare the y value of this lowest point with the point one third of the way from the top to the bottom of the circumscribed rectangle, and select the smaller of these two values; then take the difference between the x values of the highest point and the rightmost point on the left. The head part is obtained by subtracting this difference from the x value of the highest point.
7. Find the leftmost point, the rightmost point and the lowest point under the body; the resulting rectangular part is the leg part.

A minimal sketch of the first two steps is given after this list.
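As a minimal illustration of steps 1–2, the sketch below finds the circumscribed rectangle and centroid of the largest contour in a binary silhouette mask and locates the two points where the horizontal line through the centroid meets the silhouette; the file name and the use of OpenCV are assumptions, and the remaining steps are not implemented here.

```python
# Sketch of steps 1-2 of the segmentation algorithm: circumscribed rectangle
# and centroid of the sheep contour, plus the two points where a horizontal
# line through the centroid meets the silhouette.  A binary silhouette mask
# ("mask.png", white sheep on black background) is an assumed input.
import cv2
import numpy as np

mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)
_, mask = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contour = max(contours, key=cv2.contourArea)          # the sheep silhouette

# step 1: circumscribed rectangle and centroid of the contour
x, y, w, h = cv2.boundingRect(contour)
m = cv2.moments(contour)
cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])

# step 2: horizontal line through the centroid; its leftmost and rightmost
# crossings with the silhouette approximate the two intersection points
row = np.flatnonzero(mask[cy] > 0)
left_pt, right_pt = (row[0], cy), (row[-1], cy)
print((x, y, w, h), (cx, cy), left_pt, right_pt)
```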

47.3.4 The Feature Extraction

Feature extraction plays an important role in posture judgment. In this paper, the original features are obtained by the above-mentioned segmentation method, then secondary features are calculated from the original features, and finally the five most consistent features are selected based on RBF. The original features are the upper left and lower right coordinates of the head in the first frame of Figs. 47.1, 47.2 and 47.3, the upper left and lower right coordinates of the body part, the upper left and lower right coordinates of the legs, the upper left and lower right coordinates of the circumscribed rectangle, the hip point coordinate (local highest point) and the centroid of the contour. These original features are discrete data, which we organize and combine to obtain secondary features, for example the angle between the hip point and the lower right corner coordinate of the head, the angle between the hip point and the lower left corner coordinate of the head, the angle between the lower right corner coordinate of the body and the lower right corner coordinate of the head, the angle between the lower right corner coordinate of the body and the upper left corner coordinate of the head, the length–width ratio of the circumscribed rectangle, whether the head is on the upper right, and the coincidence rate between head and body. Based on experimental comparison, five characteristics are selected as the features for posture judgment: the angle between the hip point and the lower right corner coordinate of the head, the angle between the hip point and the lower left corner coordinate of the head, the angle between the lower right corner coordinate of the body and the lower right corner coordinate of the head, the angle between the lower right corner coordinate of the body and the upper left corner coordinate of the head, and the length–width ratio of the circumscribed rectangle.
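A small sketch of how these five features could be assembled from the key points is given below. The angle is taken here as the inclination of the line joining the two points with respect to the horizontal image axis, which is an assumption about the convention used, and all coordinates are illustrative.

```python
# Sketch of the five posture features described above: four angles between
# pairs of key points and the length-width ratio of the circumscribed
# rectangle.  The angle convention (inclination with respect to the
# horizontal image axis) is an assumption; the coordinates are illustrative.
import math

def angle_deg(p, q):
    """Inclination (in degrees) of the line from p to q, image coordinates."""
    return math.degrees(math.atan2(q[1] - p[1], q[0] - p[0]))

hip = (220, 80)                  # hip point (local highest point)
head_lr = (360, 150)             # lower right corner of the head box
head_ul = (300, 90)              # upper left corner of the head box
head_ll = (300, 150)             # lower left corner of the head box
body_lr = (330, 200)             # lower right corner of the body box
rect_w, rect_h = 320, 180        # circumscribed rectangle of the contour

features = [
    angle_deg(hip, head_lr),     # hip point -> head lower-right corner
    angle_deg(hip, head_ll),     # hip point -> head lower-left corner
    angle_deg(body_lr, head_lr), # body lower-right -> head lower-right corner
    angle_deg(body_lr, head_ul), # body lower-right -> head upper-left corner
    rect_w / rect_h,             # length-width ratio of circumscribed rectangle
]
print(features)
```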

47.3.5 The Normalization

The feature data are normalized so that, after processing, all values lie within [0, 1]. Normalization facilitates later data processing and, at the same time, ensures a high convergence speed when the program runs. The following figure shows the normalized training data (Fig. 47.4).

Fig. 47.4 The normalized data

47.4 The Posture Classification Based on SVM Method

The SVM (Support Vector Machine) is a binary classification model. The basic idea of the model is to find the hyperplane with the best separation in the feature space, maximizing the margin between the positive and negative samples in the training set, so as to separate the samples. That is, it finds an optimal hyperplane in the N-dimensional space that has the maximum distance to the different sample sets; it is a supervised learning algorithm used to solve binary classification problems. In addition, support vector machines can be divided into two categories, linear and nonlinear, as illustrated in Fig. 47.5. Hierarchical classification first divides all categories into two sub-categories, and then further divides each sub-category into two sub-categories, looping until a single category is obtained. For a detailed description of the hierarchical support vector machine, please refer to the paper "generalization of support vector machine in multi class classification problems" (Liu Zhigang, an analytical overview of methods for multi-category support vector machines, 2004) [7].


Fig. 47.5 SVM schematic diagram

Posture classification based on the hierarchical support vector machine method first divides the samples into two categories, standard and non-standard; the non-standard samples are then further divided into two categories, head bowing and head turning. In the first classification, we selected 70% of the samples as the training set and 30% as the test set. The input of the SVM is the five features, the model type is C_SVC, the kernel function is RBF, the degree is set to 3, and the output is whether the posture is standard. The test results show that the accuracy of the program reaches 95.68%. In the second classification, we again selected 70% as the training set and 30% as the test set. The input of the SVM is the five features, the model type is C_SVC, the kernel function is RBF, the degree is set to 3, and the output is whether the posture is a bowing posture or a turning posture. The test results show that the accuracy of the program reaches 95.68%.
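A sketch of this two-stage classification with OpenCV's SVM (C_SVC type, RBF kernel, degree 3, as stated above) is given below; the feature vectors and labels are random stand-ins for the normalized five-dimensional posture features, and the 70/30 split and parameter tuning are omitted.

```python
# Sketch of the two-stage (hierarchical) SVM classification described above,
# using OpenCV's C_SVC model with an RBF kernel and degree 3 as stated in the
# text.  The feature vectors and labels are random stand-ins for the
# normalized five-dimensional posture features.
import cv2
import numpy as np

def make_svm():
    svm = cv2.ml.SVM_create()
    svm.setType(cv2.ml.SVM_C_SVC)
    svm.setKernel(cv2.ml.SVM_RBF)
    svm.setDegree(3)
    return svm

rng = np.random.default_rng(0)
X = rng.random((300, 5)).astype(np.float32)          # normalized features
posture = rng.integers(0, 3, size=300)               # 0 standard, 1 bowing, 2 turning

# stage 1: standard (1) vs non-standard (0)
y1 = (posture == 0).astype(np.int32).reshape(-1, 1)
svm1 = make_svm()
svm1.train(X, cv2.ml.ROW_SAMPLE, y1)

# stage 2: turning (1) vs bowing (0), trained on non-standard samples only
ns = posture != 0
y2 = (posture[ns] == 2).astype(np.int32).reshape(-1, 1)
svm2 = make_svm()
svm2.train(X[ns], cv2.ml.ROW_SAMPLE, y2)

# prediction: run stage 1 first, then stage 2 on the non-standard outputs
_, p1 = svm1.predict(X)
_, p2 = svm2.predict(X)
pred = np.where(p1.ravel() == 1, 0, np.where(p2.ravel() == 1, 2, 1))
```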

47.5 Conclusion

In this paper, a self-developed segmentation algorithm is used to segment the sheep head, body and legs. The related features are extracted, analyzed and summarized, and finally five features are selected for posture classification based on the hierarchical support vector machine method. The test results show that this method is accurate and effective.

References

1. Cui, W., Liu, J.X.: The indispensable “fulcrum” of healthy breeding—research status of animal behavior and its application in practice. China Anim. Health (11), 53–55 (2007)
2. Zhang, R., Li, W.P., Mo, T.: Review of deep learning. Inf. Control 47(4), 385–397 (2018)
3. Rousseau, J.B., Van Lochem, P.B., Gispen, W.H., Spruijt, B.M.: Classification of rat behavior with an image-processing method and a neural network. Behavior research methods, instruments, and computers. J. Psychon. Soc. Inc. 32(1), 63–71 (2000)


4. Wang, J., Zhang, H.Y., Zhao, K.X., Liu, G.: Cow movement behavior classification based on optimal binary decision-tree classification model. Transactions of the Chinese Society of Agricultural Engineering 34(18), 202–210 (2018)
5. Han, Y.B., Wang, C.G., Kang, F.L.: Piglets group characteristics under different temperature behavior detection method research. Journal of Agricultural Mechanization Research 39(5), 21–25 (2017)
6. Basang, W.D., Pingcuo, Z.D., Zhu, Y.B., Dawa, Y.L., E, G.X., Zhou, D.K., Yang, B.G., Peng, Y.Y., Guo, Y.: Comparison of accuracy of linear model and machine learning model in prediction of yak body weight. Modern Agric. Sci. Technol. (23), 205–206 (2019)
7. Liu, Z.G., Li, D.R., Qin, Q.Q., Shi, W.Z.: An analytical overview of methods for multi-category support vector machines. Comput. Eng. Appl. 7(40), 10–13 (2004)

Chapter 48

Research on Application of Information Security Protection Technology in Power Internet of Things

Dong Li, Yingxian Chang, Hao Yu, Qingquan Dong, and Yuhang Chen

Abstract With the development and progress of the social economy, the Internet of things is widely used in various fields. Driven by emerging technologies such as Internet technology, network communication technology and radio frequency technology, China's Internet of things technology has gradually matured. Internet of things technology is involved in the transportation, housing, military and financial fields, and these fields are closely related to national development. Therefore, only by doing a good job in network information security can national development be better promoted. The Internet of things will be an important carrier of information transmission in the future, so this paper focuses on security protection technology for network information dissemination.

48.1 Introduction

At present, industrial Internet of things technology has been widely used in the power system; it can be found in power generation, transmission, transformation, distribution and other areas [1]. A survey of a large amount of data related to the ubiquitous power Internet of things shows that there are nearly 530 million metering terminal meters, more than 3 million sets of different types of equipment monitoring terminals and 500,000 video monitoring camera terminals in China [2]. With the help of these Internet of things terminal devices, wireless public network data transmission is widely used in the power distribution system.

D. Li · Y. Chang · H. Yu State Grid Shandong Electric Power Co., Ltd. Jinan City, Shandong Province, China Q. Dong (B) · Y. Chen Information System Integration Company, NARI Group Corporation, Nanjing City, Jiangsu Province, China © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 K. Nakamatsu et al. (eds.), Advanced Intelligent Technologies for Industry, Smart Innovation, Systems and Technologies 285, https://doi.org/10.1007/978-981-16-9735-7_48


48.2 The Structure and Security of the Internet of Things

48.2.1 Perception Layer

The perception layer includes a large number of data acquisition devices. In order to execute the instructions issued by the application layer, the necessary execution devices also reside in this layer. Generally speaking, sensors, execution devices, positioning devices, video and audio acquisition and playback terminals, barcode reading or RF reading and writing devices and related intelligent devices all belong to this layer. The devices in the perception layer can be divided into two types according to their operability: readable and read–write. Relatively speaking, simple readable devices have a high level of security, although it cannot be ruled out that they will also suffer external attacks; read–write devices, however, face more security risks.

48.2.2 Network Layer

All kinds of private networks, wired networks, wireless communication networks and the Internet together constitute the network layer of the Internet of things, which efficiently receives and outputs data. The transmission technologies used in the power grid (ground and underground industrial ring networks and wireless communication networks) are not very different from those in other industries, and problems in operation are likewise addressed through flow monitoring, vulnerability scanning, intrusion detection and other means. Because of its complex structure and the great differences among its parts, the network layer faces great security risks; the security of data transmission in the whole system is largely determined by it.

48.2.3 Application Layer

The application layer of the Internet of things includes many platforms, such as the integrated safety production scheduling platform, the three-dimensional geographic information platform and the risk pre-control management information platform for hidden danger management. It supports a variety of applications, such as cloud computing, data storage and conventional network applications, so it is often regarded as a comprehensive and mature information system. At present, discussions of the security of this layer mainly focus on data security and cloud computing security.


48.2.4 Platform Layer

The platform layer is a bridge connecting the network layer and the application layer. Its significance is to modularize common software functions and avoid repeated development.

(1) Platform layer security mainly ensures the security of information in the process of calculation, storage and transmission. The platform layer must adopt appropriate security strategies to ensure the confidentiality, integrity, availability and non-repudiation of information in the ubiquitous power Internet of things. In addition, it should also ensure access security and API security.

(2) The security of the platform layer can provide a two-way authentication mechanism for all accessed devices and applications according to the needs of the applications, carry out identity authentication and authorization management, and ensure the security and reliability of the accessed terminal devices and the transmitted information. It should strengthen access control of API calls to prevent unauthorized access, authenticate users before API interfaces are called, verify user credentials and user identity, and prevent tampering and replay attacks. Sensitive data should be encrypted to prevent it from being tampered with [7].

48.3 Information Security Protection Technology of Power Internet of Things

48.3.1 Network Security Vulnerability Scanning Technology

Network security vulnerability detection and security risk assessment technology can accurately predict the various possible network attacks on the main users of the network, and specifically point out the network attacks that will appear or occur and their possible security consequences. In recent years it has been highly valued by the network security industry. This security technology helps administrators identify, detect and monitor system resources, analyze the various factors and indexes by which system resources may be attacked by hackers, understand the security vulnerabilities of the network monitoring system itself, and assess the security problems and risks that may exist in all systems. Network security vulnerability scanning technology, the network firewall, the intrusion detection system and the network monitoring system cooperate with each other, which can effectively protect and improve the quality and security of the whole network. By scanning the security vulnerabilities of the whole network, the network administrator can understand in time the security protection settings, system operation and application services of the whole network, discover network security vulnerabilities promptly, and objectively evaluate the security risks and risk levels of the whole network. Therefore, network administrators can usually find network security vulnerabilities and erroneous security settings during system operation

494

D. Li et al.

according to the situation and results of network scanning, so as to prevent the system from being attacked by hackers.
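The basic scanning idea can be illustrated with a short sketch that only checks which TCP services a device exposes; a real vulnerability scanner would additionally fingerprint service versions and match them against vulnerability databases. The target address and port list below are placeholders.

```python
# Minimal TCP port-scan sketch illustrating the probing step of vulnerability scanning.
import socket

def scan_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            # connect_ex returns 0 when the connection succeeds (port open)
            if sock.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    # "192.0.2.10" is a documentation address used here as a placeholder target.
    exposed = scan_ports("192.0.2.10", [22, 80, 443, 1883, 8080])
    print("Open ports:", exposed)
```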

48.3.2 VPN Technology

The most commonly used technology for building a virtual private network over a public network is VPN. According to the layer at which it is applied, it can be divided into link-layer, network-layer and application-layer VPN technology [3]. Link-layer VPN technology includes PPTP and L2TP. A link-layer VPN performs no special encryption, so its security level is low: it only uses PPTP or L2TP to verify the user's identity. Network-layer VPN technology includes MPLS and IPSec, and its security level is higher than that of link-layer VPNs. IPSec supports encrypted authentication, can be applied in gateway-to-gateway and terminal-to-gateway scenarios, and, with its transport and tunnel modes, performs well in security retrofits of existing systems. In such a retrofit, a public-key algorithm based on SM2 (as in the IPSec VPN technical specification) is used between the power IoT terminal and the master station to authenticate identities, after which the communication keys are negotiated. The key negotiation is relatively complex and can be roughly divided into two stages. The first stage is IKE SA negotiation in main mode, consisting of six exchanges that cover certificate exchange, working-key agreement and identity authentication; it is the basis of the second-stage session key agreement [4]. The second stage is IPSec SA negotiation in quick mode, which focuses on the session key and consists of three exchanges. MPLS is a fast, efficient, label-based data exchange and transmission technology built on network routing protocols. Taking the power dispatching data network as an example, two MPLS-based VPNs are used for real-time and non-real-time services respectively to carry business data within the security zone. Application-layer VPN technology includes SSL/TLS. SSL also provides encrypted authentication, but an SSL VPN cannot be applied in the gateway-to-gateway (i.e., site-to-site) scenario, so it is mostly used for encrypted authentication in mobile office and remote access.

48.3.3 Security Audit Technology

Security audit technology uses one or more security detection tools (usually called security scanners) to detect and scan for system security vulnerabilities, check the information security weaknesses of the whole system, produce a vulnerability inspection report for each weak link of the Internet of Things system, and propose corresponding audit measures to enhance system information security [5]. An information security audit of the Internet of Things is the best way to reveal its latent information security risks, an effective way to improve its information security technology and current state, and a powerful means of meeting the compliance requirements of the information security industry. Such an audit lets an organization understand whether its IoT information security work meets the requirements of a security compliance audit, and also helps the organization fully understand and accurately assess the effectiveness, adequacy and suitability of its IoT information security work.

48.3.4 Lightweight Security Authentication Technology

Intrusion protection covers both pre-intrusion and post-intrusion measures. Pre-intrusion protection belongs to network boundary protection, for which the common technology is the network firewall. Post-intrusion protection is intrusion detection (IDS), a mature security technology for information systems. For resource-constrained Internet of Things systems, however, intrusion detection is of little use, so the main technology is protection before intrusion. For many IoT devices, deploying firewall technology is also unrealistic, so the main technical means of boundary protection is identity authentication. Specifically, lightweight identity authentication is the key technology for protecting resource-constrained IoT devices. It should be noted that the purpose of identity authentication is to establish a shared session key for secure communication, which can selectively provide confidentiality and integrity for communication data [6]. Much of the data in an IoT system consists of small messages; in this case, identity authentication and data protection can be integrated into a lightweight, authenticated data-privacy protocol. Specifically, a lightweight data transmission protocol should be designed that integrates identity authentication, data confidentiality, data integrity, data freshness and other security protections.
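As an illustration of such an integrated lightweight protocol, the sketch below uses a pre-shared key with HMAC-SHA256 to bind a device identifier, a monotonically increasing counter (freshness) and the payload together; confidentiality would additionally require authenticated encryption. This is a minimal sketch under these assumptions, not a prescribed protocol, and all field names are illustrative.

```python
# Lightweight authenticated message sketch for a constrained IoT device (assumed PSK).
import hmac, hashlib, json

PSK = b"per-device-pre-shared-key"  # provisioned out of band (assumption)

def protect(device_id, counter, payload):
    """Build a message whose HMAC covers device ID, counter and payload."""
    header = f"{device_id}|{counter}|".encode()
    tag = hmac.new(PSK, header + payload, hashlib.sha256).digest()
    return json.dumps({"id": device_id, "ctr": counter,
                       "data": payload.hex(), "tag": tag.hex()}).encode()

def verify(message, last_counter):
    """Return the payload if the tag is valid and the counter is fresh, else None."""
    msg = json.loads(message)
    payload = bytes.fromhex(msg["data"])
    header = f"{msg['id']}|{msg['ctr']}|".encode()
    expected = hmac.new(PSK, header + payload, hashlib.sha256).digest()
    # constant-time tag comparison plus replay protection via the counter
    if hmac.compare_digest(expected, bytes.fromhex(msg["tag"])) and msg["ctr"] > last_counter:
        return payload
    return None

wire = protect("meter-0001", 42, b'{"voltage": 229.7}')
print(verify(wire, last_counter=41))  # b'{"voltage": 229.7}'
```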

48.3.5 Biometric Technology

Biometric identification technology combines high-tech means such as computing, bio-optics, acoustics, biosensors and biostatistics. It can effectively identify individuals by using physiological characteristics of the human body (such as fingerprints, facial images and irises) and behavioural characteristics (such as handwriting, voice and gait). Because these biological characteristics are difficult to reproduce, the safety factor of biometric identification can be much higher than that of traditional identification mechanisms. Commonly used biometric features include the fingerprint, voice, face, retina, palm print and skeleton. Among them, the retina has attracted growing attention in academic circles because of features such as uniqueness and stability that are hard to match. In addition to retinal recognition, the application and research of fingerprint recognition and advanced signature recognition have also made remarkable progress in recent years.

48.3.6 The New Cryptographic Technology

The new cryptographic technology explores, on the basis of fundamental cryptography, cryptographic techniques with richer properties, including homomorphic encryption, white-box cryptography, threshold cryptography, blockchain and quantum cryptography.
(1) Homomorphic encryption. Homomorphic encryption originates from privacy homomorphism; it operates directly on ciphertext without decrypting it, which makes it possible to compare, retrieve and analyze encrypted data [8]. The ubiquitous power Internet of Things uses technologies such as big data, cloud computing, IoT, mobility and intelligence to store a large amount of data in the cloud. Homomorphic encryption can be applied on the ubiquitous power IoT cloud platform to process encrypted data, which both ensures the safe storage of data and protects data privacy.
(2) White-box cryptography. White-box cryptography is an encryption technique that can resist white-box attacks. Its core idea is to confuse the key with the original encryption algorithm [9], so that the key remains secure in a white-box environment and encryption and decryption can be carried out safely there. A common implementation modifies the encryption process and introduces lookup tables, where the confusion can be between the key and the lookup table. White-box cryptography can be applied to ubiquitous power IoT intelligent terminals and cloud platforms: in intelligent terminals it protects the keys of the perception layer and the security of the equipment; in cloud-platform software it ensures that users' sensitive information is not disclosed when the platform performs encryption and decryption operations.
(3) Threshold cryptography. Based on a secret sharing scheme, threshold cryptography shares a key among multiple members. When the key is used, for example for decryption or signing, each member computes with its own key share, and the final result is obtained by combining the partial results [10]. Based on a threshold scheme, a software cryptographic module for collaborative computing can be developed and integrated, in the form of an SDK, into the mobile apps of various power IoT business scenarios, effectively solving the problem that traditional UKeys cannot be used on mobile terminals. Threshold schemes can also be combined with blind signatures, ring signatures, group signatures and other means to protect user privacy in certain digital signature scenarios.
(4) Blockchain. The core of blockchain is cryptography: blockchain is a new application mode of cryptography and computer technologies such as distributed data storage, point-to-point transmission, consensus mechanisms and encryption algorithms. Blockchain technology can open information channels between different organizations and better realize information exchange between the main chain of the ubiquitous power IoT power system and provincial side chains, as well as internal and external information exchange within State Grid Corporation of China. Blockchain adopts distributed ledger technology and establishes digital records by sharing, copying and synchronizing transaction records among network members, which can effectively solve problems faced in the construction of the ubiquitous power IoT, such as data integration, equipment security, personal privacy, rigid architecture and multi-agent collaboration [11].
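To make the homomorphic property in item (1) concrete, the following toy Paillier implementation (with deliberately tiny, insecure parameters) shows that multiplying two ciphertexts yields a ciphertext of the sum, so an untrusted platform could aggregate encrypted readings without seeing them. It is a didactic sketch only, not the scheme used in the cited works.

```python
# Toy Paillier cryptosystem illustrating additive homomorphism (NOT secure parameters).
import math, random

p, q = 293, 433                      # toy primes; real deployments use ~1024-bit+ primes
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)   # lcm(p-1, q-1)

def L(u):
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)  # modular inverse of L(g^lambda mod n^2)

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

c1, c2 = encrypt(150), encrypt(275)          # e.g. two encrypted meter readings
assert decrypt((c1 * c2) % n2) == 150 + 275  # Enc(a) * Enc(b) decrypts to a + b
print(decrypt((c1 * c2) % n2))
```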

48.4 Conclusion

With the government's policy support and the rapid development of the whole industry's technology, China's Internet of Things has entered a period of rapid growth, and its connections with the Internet and with terminal equipment have become increasingly close. In the future it will shine in many fields, especially the power industry, bringing more convenience to people.

Acknowledgements This research is supported by the State Grid Corporation of China's science and technology project "Research and application of key technologies for end-to-end security protection system of power Internet of Things" (SGSDDKOOWJJS1900368).

References 1. Wang, G.F., Pei, J.Z., Song, Z.Z.: Sentiment analysis network public opinion research review. Cooperative Economy and Technology 2, 176–178 (2021). https://doi.org/10.3969/j.issn.1672190X.2021.02.072


2. Wen, B., He, T.T., Luo, L., et al.: Research on Text Sentiment Classification Method Based on Semantic Understanding. Computer Science 37(6), 261–264 (2010) 3. Kim, Y.: Convolutional Neural Networks for Sentence Classification. Eprint Arxiv, (2014) 4. Zhang, Y., Wallace, B.,: A Sensitivity analysis of (and practitioners’ guide to) convolutional neural networks for sentence classification. Comput. Sci. (2015) 5. Dong, Y.D., Ren, F.J., Li, C.B.: EEG emotion recognition based on linear kernel principal component analysis and XGBoost. Optoelectronic Engineering 48(2), 12–20 (2021). https:// doi.org/10.12086/oee.2021.200013 6. Cai, H.P., Wang, L.D., Duan, S.K.: Emotion classification model based on word embedding and CNN. Comput. Appl. Res. (10), (2016) 7. Li, Z.Q.: Research of database sensitive data encryption model based on web. Comput. Meas. Control. 25(5), 184–187, 191 (2017) 8. Li, Z.Y., Gui, X.L., Gu, Y.J., et al.: Survey on homomorphic encryption algorithm and its application in the privacy-preserving for cloud computing. Journal of Software 29(7), 1830– 1851 (2018) 9. Lin, T.T., Lai, X.J.: Research on white-box cryptography. Journal of Cryptologic Research 2(3), 258–267 (2015) 10. Shang, M., Ma, Y., Lin, J.Q., et al.: A threshold scheme for SM2 elliptic curve cryptographic algorithm. Journal of Cryptologic Research 1(2), 155–166 (2014) 11. Zhao, Y.H., Peng, K., Xu, B.Y., et al.: Status and prospect of pilot project of energy blockchain. Autom. Electr. Power Syst. 43(7), 14–24, 58 (2019)

Chapter 49

Research on User Conversational Sentiment Analysis Based on Deep Learning Yongbo Ma, Xiuchun Wang, Juan Liu, Zhen Sun, and Bo Peng

Abstract As people’s demand for electricity continues to increase, the demand for customer service quality of power supply companies is also increasing. In order to meet the increasing demand for customer service and ensure the quality of customer service, this article takes user sentiment analysis as the starting point and proposes a sentiment analysis model based on CNN-XGBoost to analyze the sentiment of users in the process of customer service and provide guidance for further customer service. Experiments show that the sentiment analysis model proposed in this paper is effective which can provide sentiment analysis performance far superior to traditional models.

49.1 Introduction Sentiment analysis aims to dig out users’ opinions and emotional tendencies from information. It has been widely used in information security-related fields such as event monitoring, information prediction, and public opinion monitoring [1]. With the changes of the times, people’s lifestyles and behaviors have undergone profound changes. The characteristics of mobile and fragmented consumer demand are becoming more prominent, and customers’ demand for electricity is gradually increasing, from safe power supply to service quality and other aspects. Put forward very high demands [2]. At the same time, the demand for intelligent and interactive services is stronger, and the requirements for services are more inclined to be convenient, professional, intelligent and interactive, and tailor-made. In particular, the awareness of rights protection and self-management of customers is more prominent, which is important for power customer service. Y. Ma · X. Wang · J. Liu · B. Peng (B) State Grid Customer Service Center Co.Ltd, Tianjin 300306, China e-mail: [email protected] Z. Sun Nari Group Co.Ltd, Nanjing 215200, China © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 K. Nakamatsu et al. (eds.), Advanced Intelligent Technologies for Industry, Smart Innovation, Systems and Technologies 285, https://doi.org/10.1007/978-981-16-9735-7_49


In order to meet the ever-increasing demand for customer service and ensure its quality, it is necessary to perform sentiment analysis on user utterances during the customer service process and, combining users' viewpoints and emotional tendencies, to achieve early warning and after-the-fact analysis of customer service and provide guidance throughout the whole process of serving power customers. Therefore, on the basis of intelligent voice self-service, how to further improve user satisfaction and provide targeted, higher-quality services based on customers' emotional information is a question worthy of in-depth study and exploration [3].

49.2 Sentiment Analysis Model Based on Deep Learning

The development of artificial intelligence technology has brought new ideas to sentiment analysis; deep learning has become increasingly important for analyzing and processing large-scale corpora to uncover rules and characteristics [4]. The basic idea of sentiment analysis based on deep learning is to extract text features through a neural network (convolutional or recurrent) to model sentences and then classify them, and this approach has achieved good results. The process of establishing a sentiment analysis model is shown in Fig. 49.1. First, the original conversation data is preprocessed (word segmentation, stop-word removal, truncation) to form trainable standard text segments; then the text is converted through the pre-trained word vector model into a two-dimensional matrix representation (each phrase is represented by a vector); finally, the normalized data is fed into the deep learning model for inference, yielding the final sentiment analysis result.

Fig. 49.1 Construction process of sentiment analysis model

49.2.1 The Generation of Word Vectors

In natural language processing tasks, user input must be converted into a form the computer can understand [5]. The most commonly used method is word embedding. The Word2Vec model is one of the best-known word embedding models; it includes a Continuous Bag-of-Words model (CBOW) and a Continuous Skip-gram model (Skip-gram). The basic ideas of both are based on predicting words from their context and on the Markov assumption: the CBOW model predicts the middle word from its context words, while the Skip-gram model predicts context words from the middle word. Both models map each word to a high-dimensional vector space. Through training, Word2Vec projects words with similar usage to nearby positions in the vector space, effectively capturing and quantifying the similarity between words (Fig. 49.2). This paper uses 750 MB of data from the Chinese Wikipedia and about 1 GB of electricity-user session data as the basic corpus. First, traditional characters in the corpus are converted into simplified characters with OpenCC; then Jieba is used for Chinese word segmentation; finally, the segmented text is fed into Word2Vec for training to obtain the word vector model.

Fig. 49.2 Word2vec's CBOW model and Skip-gram model
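A minimal sketch of this word-vector training step is shown below, assuming the opencc, jieba and gensim (4.x API) packages; the corpus path and hyper-parameters are illustrative.

```python
# Traditional->simplified conversion, segmentation, then Word2Vec training (sketch).
import jieba
from opencc import OpenCC
from gensim.models import Word2Vec

cc = OpenCC("t2s")  # some OpenCC builds expect the config name "t2s.json"

def load_corpus(path):
    """Yield one tokenised sentence per line: convert to simplified, then segment."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            simplified = cc.convert(line.strip())
            tokens = jieba.lcut(simplified)
            if tokens:
                yield tokens

sentences = list(load_corpus("corpus_wiki_plus_sessions.txt"))  # placeholder file
model = Word2Vec(sentences, vector_size=300, window=5, min_count=5,
                 sg=1, workers=4)  # sg=1 selects Skip-gram; sg=0 would be CBOW
model.wv.save("word_vectors.kv")
# model.wv.most_similar("电费", topn=5) would list the nearest neighbours of a query word
```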

49.2.2 The Construction of Emotional Model

The generation of the emotion model is essentially the adaptation process of the convolutional neural network [6]. Through training on text, the weights between neurons are continuously adjusted so that the network better fits the relationship between the input data and the result set. Specifically, training uses the training data as input to compute an output through the network, compares that output with the actual category to compute the loss, and finally computes gradients to update the weights of each neuron. The trained emotion model can then be used to extract text features, preparing the material for classification.


Fig. 49.3 CNN-based feature extraction model

(1) Feature extraction based on CNN network

The CNN feature extraction model used in this paper is shown in Fig. 49.3. The convolutional sentiment analysis model is mainly composed of three parts: a convolutional layer, a pooling layer and a fully connected layer. The convolutional layer extracts local features of the text, the pooling layer samples the extracted local features, and the fully connected layer realizes emotion classification. As the figure shows, the network consists of four layers: the data preprocessing layer, the convolution layer, the max-pooling layer and the fully connected layer.
(1) The convolution layer mainly extracts local features of the input sequence. Assuming the input sequence becomes an n × d word vector matrix S through the word vector embedding layer, the convolutional layer takes the word vectors x_i output by the embedding layer, and each feature is computed as

    c_i = f(w · x_{i:i+h-1} + b)

(2) The pooling layer can use average pooling or max pooling to sample the local features, screening out unimportant local features while retaining the important information. In the experiments, max pooling is applied to each local feature map:

    c_max = max(c)

(3) The fully connected layer (classifier) performs the classification based on the features extracted by the convolutional and pooling layers. To prevent overfitting, the Dropout mechanism is introduced to reduce weight connections during training and enhance the stability of the model.
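A minimal Keras sketch of this convolution/pooling/fully-connected structure is shown below; the sentence length m, vector dimension k, filter sizes and filter counts are illustrative assumptions rather than the paper's exact settings.

```python
# TextCNN sketch: parallel convolutions, global max pooling, Dropout, softmax output.
import tensorflow as tf
from tensorflow.keras import layers, Model

m, k = 50, 300            # max tokens per utterance, word-vector dimension (assumed)
filter_sizes = (2, 3, 4)  # several convolution-kernel heights

inputs = tf.keras.Input(shape=(m, k))            # one zero-padded sentence matrix
pooled = []
for h in filter_sizes:
    c = layers.Conv1D(filters=100, kernel_size=h, activation="relu")(inputs)  # c_i = f(w.x + b)
    pooled.append(layers.GlobalMaxPooling1D()(c))                             # c_max = max(c)
features = layers.Concatenate(name="pooled_features")(pooled)
features = layers.Dropout(0.5)(features)          # Dropout against over-fitting
outputs = layers.Dense(2, activation="softmax")(features)  # positive / negative

model = Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(X_train, y_train, ...) would then run the gradient-descent training loop.
```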


The specific training process is as follows:
1. Read in a line of text after word segmentation, convert each word token into a vector according to the word vector table, splice the line of text into an m × k two-dimensional data matrix, and zero-pad samples whose length is less than m;
2. Use convolution kernels (also called filters) of several different sizes to extract multiple sets of local feature maps;
3. Pool the results of the convolutional layer, extracting features with the pooling layer;
4. Complete the mapping to positive and negative emotions through the fully connected layer, and output the type of the text;
5. Compare the mapping result with the actual result, and use gradient descent to adjust the parameters of each layer according to the residual error;
6. Jump to step 1 and read the next line of text for training until all the text has been used, then save the trained model.

(2) Sentiment classification based on XGBoost

XGBoost (eXtreme Gradient Boosting) was proposed by Tianqi Chen on the basis of the Gradient Boosting Decision Tree. Compared with traditional GBDT [7], XGBoost uses a second-order Taylor expansion of the loss function to find the optimal solution and avoids overfitting, and the algorithm has been further refined to improve its classification accuracy [8]. Its advantages include:
(1) Simple and easy to use. Compared with other machine learning libraries, users can easily apply XGBoost and obtain quite good results.
(2) Efficient and scalable. When processing large-scale data sets it is fast and effective, and its requirements on hardware resources such as memory are modest.
(3) Strong robustness. Compared with deep learning models, it can achieve comparable results without fine-tuning.
(4) XGBoost implements a boosted tree model internally and can automatically handle missing values.

Setting the CNN model obtained above to non-trainable mode, we remove the fully connected layer at the end and use the output of the pooling layer directly as the input of XGBoost [9]. Finally, labeled sentiment analysis samples are used to train the whole CNN-XGBoost model. The training process is as follows:
1. Read in a line of text after word segmentation, convert each word token into a vector according to the word vector table, splice the line of text into an m × k two-dimensional data matrix, and zero-pad samples whose length is less than m;
2. Use the convolutional neural network to preprocess the text and obtain the feature matrix of each line of text;
3. Use the text feature matrices to train the XGBoost model until the optimal solution is found;
4. Save the trained XGBoost model.

Table 49.1 Analysis of emotion model results

Model          Positive              Negative
               P      R      F1      P      R      F1
CNN-XGBoost    0.75   0.79   0.76    0.79   0.76   0.77
SVM            0.72   0.75   0.73    0.75   0.74   0.74
XGBoost        0.71   0.73   0.72    0.73   0.71   0.72
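The combination described above can be sketched as follows, reusing the TextCNN from the earlier sketch as a frozen feature extractor and feeding its pooled features to an XGBoost classifier (scikit-learn style API). The dummy data and hyper-parameters are placeholders for illustration.

```python
# CNN feature extraction followed by XGBoost classification (sketch).
import numpy as np
from tensorflow.keras import Model
from xgboost import XGBClassifier

# `model` is the trained TextCNN from the previous sketch; freeze it and cut it off
# at the concatenated pooling layer.
m, k = 50, 300
model.trainable = False
feature_extractor = Model(model.input, model.get_layer("pooled_features").output)

X_train = np.random.rand(200, m, k).astype("float32")   # placeholder word-vector matrices
y_train = np.random.randint(0, 2, size=200)              # placeholder 0/1 sentiment labels

train_features = feature_extractor.predict(X_train, verbose=0)
clf = XGBClassifier(n_estimators=200, max_depth=6, learning_rate=0.1)
clf.fit(train_features, y_train)

X_new = np.random.rand(5, m, k).astype("float32")
print(clf.predict(feature_extractor.predict(X_new, verbose=0)))  # predicted classes
```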

49.3 Experiment and Analysis

The experimental data in this paper comes from about 230,000 documents from the Chinese Wikipedia and 30,000 electric customer service conversations with positive and negative labels. The wiki data and the conversation data together are used to build the word vector model, and the conversation data serves as the sentiment analysis samples. To improve the reliability of the evaluation, 10-fold cross-validation is used on the sample data, and the average of the 10 test results is taken as the classification accuracy of the model (Table 49.1). The table compares the traditional models with the combined model of the convolutional neural network and XGBoost proposed in this paper. It can be seen that the proposed model is more accurate than the traditional machine learning models on this corpus, which shows that the model is effective. Comparing the CNN-XGBoost algorithm with the plain XGBoost algorithm further shows that using deep learning to extract text features brings a clear improvement to the model, which is also much higher than that of traditional machine learning algorithms such as SVM.

49.4 Conclusion

Based on an analysis of the advantages and disadvantages of traditional sentiment analysis methods, and in order to combine the accuracy of traditional classification models with deep learning's freedom from manual feature engineering, this paper combines CNN and XGBoost to realize text sentiment analysis in the customer service field. Experiments show that combining the convolutional neural network with XGBoost yields better adaptability and higher accuracy. The approach not only has theoretical value but also broad application prospects.


References 1. Wang, G.F., Pei, J.Z., Song, Z.Z.: Sentiment analysis network public opinion research review. Cooperative Economy and Technology 2, 176–178 (2021). https://doi.org/10.3969/j.issn.1672190X.2021.02.072 2. Wen, B., He, T.T., Luo, L., et al.: Research on Text Sentiment Classification Method Based on Semantic Understanding. Computer Science 37(6), 261–264 (2010) 3. Kim, Y.: Convolutional Neural Networks for Sentence Classification. Eprint Arxiv, (2014) 4. Zhang, Y, Wallace, B.: A sensitivity analysis of (and practitioners’ guide to) convolutional neural networks for sentence classification. Comput. Sci. (2015) 5. Dong, Y.D., Ren, F.J., Li, C.B.: EEG emotion recognition based on linear kernel principal component analysis and XGBoost. Optoelectronic Engineering 48(2), 12–20 (2021). https:// doi.org/10.12086/oee.2021.200013 6. Cai, H.P., Wang, L.D., Duan, S.K.: Emotion classification model based on word embedding and CNN. Comput. Appl. Res. (10) (2016) 7. Chen, T., Guestrin. C.: Xgboost: a scalable tree boosting system. In: Proceedings of the Proceedings of the 22Nd ACM SIGKDD and International Conference on Knowledge Discovery and Data Mining, pp. 785–794 (2016) 8. Mnih, A., Hinton, G.: Three new graphical models for statistical language modelling. In: International Conference on Machine Learning. ACM, pp. 641–648 (2007) 9. Bengio, Y., Schwenk, H., Senécal, J.S., et al.: Neural Probabilistic Language Models. Springer, Berlin Heidelberg (2006)

Chapter 50

Research and Application of Edge Computing and Power Data Interaction Mechanism Based on Cloud-Edge Collaboration Bing Tian, Zhen Huang, Shengya Han, Qilin Yin, and Qingquan Dong Abstract At present, the development of science and technology breeds applications in all walks of life, the need to gather edge information collection, data processing, instruction execution and other functions have become more and more intense. The data interaction between cloud center and edge has become a problem. Based on the actual needs of electricity usage, this paper proposes a cloud-side collaborationbased power data interaction mechanism based on the study of edge computing, and discusses the management plan of data storage, calculation, and circulation under the environment of the power Internet of Things and cloud platform, so as to realize power The real-time computing and control capabilities of the edge business of the Internet of Things support the company’s digital business development.

50.1 Introduction The technology and application of power IoT in smart grids is currently a global research hotspot [1]. With the development of IoT technology, more attention is paid to the stable operation, latency, sharing, and security of the system, and the following technical challenges are brought to the existing data center system architecture and service mechanism: business data management is chaotic, and power business calculations insufficient planning of processing capacity and the disorder of data management restricts business development, cost and timeliness of the business. In order to meet the current challenges facing the intelligent development of the IoT, a new computing framework and related technologies are urgently needed to meet the business development of intelligent computing for the IoT [2]. B. Tian · Z. Huang · S. Han · Q. Yin Information and Communication Branch of State Grid, Shandong Electric Power Co., Ltd. Jinan City, Shandong Province, China Q. Dong (B) Information System Integration Company, NARI Group Corporation, Nanjing City, Jiangsu Province, China e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 K. Nakamatsu et al. (eds.), Advanced Intelligent Technologies for Industry, Smart Innovation, Systems and Technologies 285, https://doi.org/10.1007/978-981-16-9735-7_50


Aiming at the problems of data management in the traditional architecture of the IoT, this paper studies the key technologies of edge intelligent agents, and on this basis, proposes a power data interaction mechanism to realize the real-time computing and control capabilities of the power IoT edge business, and open up the management channel of cloud-edge data transmission, and support the company’s digital system construction [3].

50.2 Key Technologies of Edge Intelligent Agent

50.2.1 Cloud Edge Collaborative Architecture

In view of the large number of power IoT terminal devices, their diverse forms, complex protocols, and weak business processing capabilities [4], this paper studies technologies such as trusted access, terminal expansion, and edge computing, and proposes a cloud-edge collaborative architecture that supports multiple kinds of networks, flexible access of the underlying multi-service terminals, and computing and management of edge data. The central cloud extends the key capabilities of the cloud computing architecture, functions, management, and interfaces to the edge cloud computing platform to achieve management and control of the different kinds of resources in the system (Fig. 50.1). The cloud-edge collaborative computing architecture is built as follows:
(1) First, the CPU, memory and other resources of the central cloud and edge clouds are brought under unified management and control through configuration, completing the cloud-edge collaborative computing architecture.
(2) The edge cloud completes the installation and configuration of the cloud management system, and initializes its computing nodes and computing resources.
(3) The edge cloud joins the cloud-edge collaborative computing architecture; the central cloud accepts the joining edge cloud, manages it and its resources, and adds its computing resources to the unified resource pool.
(4) After the central cloud completes the management of the edge cloud and its resources, it uses the resources at multiple levels and performs unified scheduling, deployment, and operation and maintenance.

Fig. 50.1 Cloud edge collaboration architecture diagram

50.2.2 Edge Computing

In order to meet the demand for massive distributed data access and computing at the edge and ease the pressure of centralized computing, communication and storage [5], an edge computing architecture is needed that turns centralized computing into distributed computing and moves computing nodes closer to the data source nodes. Computing edges of the power IoT that are geographically close and closely related in business are selected, so that the various business functions within an edge can be computed independently, realizing edge autonomy and collaborative interaction between the edge and the main grid [6]. When all data must be sent to the central node, the transmission cost is often high, the latency can reach 25-200 ms, and efficiency is low; a network problem or a central-node failure would significantly affect existing company operations [7]. Through edge computing, simple data storage, calculation and management functions are pushed down to the edge, and the delay can be reduced to less than 20 ms. Edge computing uses high-performance servers that can be scaled according to the number of terminals. To ensure the continuity of business applications in the case of network or hardware failures, the computing, storage and application resources carried on an edge cloud need to be transferred to available resources for management; this process is called drift [8]. It proceeds as follows:
(1) The central cloud and the edge clouds monitor the running status of resources; when a resource is abnormal, a drift of the edge resources is decided.
(2) The central controller performs resource coordination, scheduling and allocating resources according to their type and quantity.
(3) The central controller dispatches the corresponding resources and transfers the control, configuration, monitoring and security management authority of the resource, that is, the resource management authority adapted to the drifting application.
(4) After the central cloud completes the management of the edge cloud and its resources, confirms that the resources run normally after the migration, and verifies that resource occupancy is within the normal threshold range, the drift process ends.
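A minimal sketch of the drift decision in steps (1)-(2) follows: when a node exceeds its load threshold, the peer edge node with the most free capacity is chosen as the drift target. The node data and the threshold are illustrative assumptions.

```python
# Drift-target selection sketch for an overloaded edge node.
def choose_drift_target(nodes, overloaded, threshold=0.8):
    """nodes maps name -> dict(load=..., capacity=...); returns a target name or None."""
    candidates = {
        name: info["capacity"] - info["load"]
        for name, info in nodes.items()
        if name != overloaded and info["load"] / info["capacity"] < threshold
    }
    return max(candidates, key=candidates.get) if candidates else None

edge_nodes = {
    "edge-A": {"load": 95, "capacity": 100},   # abnormal: above the 80% threshold
    "edge-B": {"load": 40, "capacity": 100},
    "edge-C": {"load": 70, "capacity": 80},
}
if edge_nodes["edge-A"]["load"] / edge_nodes["edge-A"]["capacity"] >= 0.8:
    target = choose_drift_target(edge_nodes, "edge-A")
    print("Drift applications from edge-A to", target)  # -> edge-B
```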


50.3 Power IoT Data Interaction Mechanism

50.3.1 Data Fusion Solution Architecture

The smart grid is a virtual network that can perform a series of management operations on the physical grid, including power load-balancing control, clean energy control, power node control and security protection. Energy load-balancing applications run at the edge of the network [9], for example on smart meters and microgrids, which monitor the operating status of the grid in real time and control power services in real time to achieve load balancing and energy saving. Power networks are widely distributed, and uploading all data to the cloud for computation is unrealistic; edge computing, with its inherent support for geographic distribution, provides a new solution. On the basis of the existing data architecture and business architecture of the electric power business, a data fusion solution is formulated that can support deployed power IoT applications, so that data is collected once and used everywhere. The State Grid cloud provides the infrastructure environment, the data platform provides data assets, and the IoT service platform uses the data center's computing, storage, network and data resources to process and analyze IoT business needs. A service interaction mechanism for the power IoT is established, and data fusion solutions supporting deployed IoT applications are formulated to solve data fusion problems such as the difficulty of data sharing and high data latency (Fig. 50.2).

Fig. 50.2 Power IoT data fusion architecture


50.3.2 Data Interaction Mechanism

The management of data is an important difference between IoT management and traditional network management [10]. Traditional network management mainly guarantees the connectivity of the network and does not deeply process the traffic data flowing through it, whereas the ultimate purpose of the IoT is to collect and analyze data and to control terminal equipment. Data is therefore the life of the IoT and must ultimately be delivered to the industry application systems. There are many ways to upload data: an application agent can be installed on the IoT gateway to connect to the industry application platform of the data center, or the gateway can connect through an agent to an IoT platform such as Huawei's OceanConnect or GE's Predix; data can also first be sent to the controller for simple protocol processing and then forwarded uniformly to the industry application platform or the IoT platform. However, in order to decouple the network connection from the data connection, the IoT's own management data must be distributed uniformly through the controller, so data distribution and subscription are basic capabilities of the controller.
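One possible realization of the controller's data distribution and subscription capability is a publish/subscribe broker such as MQTT; the chapter does not prescribe a specific protocol, and the broker address and topic names below are placeholder assumptions (paho-mqtt 1.x-style client shown; the 2.x constructor additionally takes a callback API version argument).

```python
# Publish/subscribe sketch for distributing IoT management data via a broker.
import json, time
import paho.mqtt.client as mqtt

BROKER = "controller.example.local"   # hypothetical controller/broker address
TOPIC = "powergrid/edge/+/telemetry"  # '+' wildcard matches any gateway ID

def on_message(client, userdata, msg):
    record = json.loads(msg.payload)
    print(f"{msg.topic}: {record}")   # here the data would be forwarded to the application platform

subscriber = mqtt.Client()
subscriber.on_message = on_message
subscriber.connect(BROKER, 1883)
subscriber.subscribe(TOPIC, qos=1)
subscriber.loop_start()

publisher = mqtt.Client()
publisher.connect(BROKER, 1883)
publisher.publish("powergrid/edge/gw-07/telemetry",
                  json.dumps({"voltage": 229.7, "ts": 1700000000}), qos=1)
time.sleep(1)            # give the subscriber a moment to receive the message
subscriber.loop_stop()
```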

50.4 Data Management and Application

50.4.1 Data Management

The specific flow is shown in Fig. 50.3. The data collected from the edge of the power IoT is transferred through a message queue into the time-series database of the edge data center, where it is cleaned, computed and analyzed according to business rules to obtain preliminary incremental results. At the same time, the data generated by this computation is uploaded to the cloud data center for offline data processing, yielding batch results. After the incremental results are combined and weighted with the batch results, a result summary is formed and the corresponding business data services are provided.

Fig. 50.3 Power IoT data fusion architecture

50.4.2 Power Business IoT Management Application

This article selects the company data room as the application scenario. Operation and maintenance information is collected from servers, virtual machines and other equipment, edge computing modules are set up in the data room, and basic business threshold rules are configured. The data is processed in real time so that only the final status result is reported (0 means normal, 1 means abnormal), while large amounts of operation and maintenance data are stored locally, reducing network transmission pressure and saving operation and maintenance costs (Table 50.1).

Table 50.1 Data calculation table for the data room

Metric             Real-time value   Threshold   Calculation result
CPU usage (%)      56                80          0
Memory usage (%)   79                80          0
I/O concurrency    214               300         0
Temperature        65                70          0
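The edge-side rule evaluation for this scenario can be sketched as a simple threshold check whose output mirrors the calculation results in Table 50.1 (0 normal, 1 abnormal); the metric names and thresholds below are taken from the table.

```python
# Edge-side threshold evaluation: report only a 0/1 status per metric, keep raw data local.
THRESHOLDS = {"cpu_usage": 80, "memory_usage": 80, "io_concurrency": 300, "temperature": 70}

def evaluate(sample):
    """Return a per-metric status word: 0 if within threshold, 1 otherwise."""
    return {metric: int(value >= THRESHOLDS[metric]) for metric, value in sample.items()}

reading = {"cpu_usage": 56, "memory_usage": 79, "io_concurrency": 214, "temperature": 65}
print(evaluate(reading))  # {'cpu_usage': 0, 'memory_usage': 0, 'io_concurrency': 0, 'temperature': 0}
```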

50.5 Summary

Aiming at the current development of the power IoT business, and based on the study of edge computing, this paper proposes a cloud-edge collaborative power IoT data interaction mechanism that achieves data collection and access, data transmission and real-time control through application agents installed on gateways and the industry application platform of the data center. The method can effectively improve the efficiency of data transmission and management and provide support for the company's digital construction and business development. Whether it is equally effective in other power IoT scenarios, however, still needs to be verified through pilot applications in edge scenarios such as distribution networks and substations, so as to further optimize the architecture and the data management process.

Acknowledgements This research was funded by the Science and Technology Project of the State Grid Shandong Electric Power Company: Research on the key technologies of the power Internet of Things platform based on the cloud-edge collaborative architecture-Research on the key technologies of data interaction and identification analysis of edge IoT agents (520627200002).


References 1. Chen, L.H.: Design and research of unified data platform for power dispatching center based on power internet of things. Power Equip. Manag. (7) (2019) 2. Wan, L.S., Li, Y.C., Cao, Y.: Research on line state awareness monitoring and data sharing based on control cloud under ubiquitous power internet of things. Electr. Power Inf. Commun. Technol. 2019(11) (2019) 3. Lu, C.Q., Xie, L., Zhuang, X., et al. Research on the application of SDN NFV-based IoT ultralarge-scale authentication resource pool and live network experiments. Telecommun. Technol. (2018) 4. Su, J., Wang, J.Y., Liu, Q.: Research on cloudification scheme of industry gateway based on NFV. Sci. Res. 000(007), 196–196 (2016) 5. Shuai, J.: Research on the development and deployment of NB-IOT based on NFV virtualization. Digital Technology and Application, 6.2016, 000(011):35,38. Chu Yigang. Research on the construction strategy of atomization networking based on edge intelligence. Telecommun. Technol. 000(012), 8–11 (2018) 6. Li, J.L., Wan, X.L.: Analysis of Zhejiang Telecom Internet of Things Very Large-scale Certification Resource Pool Project. Mobile Communications 043(007), 13–21 (2019) 7. Xue, H., Ying, L.H., Wang, P., et al. Research on key technologies of cloud-side collaboration 5G PaaS platform. Telecommun. Sci. (S2) (2019) 8. Ma, X.Y.: Research on Trusted Collaboration Mechanism Based on Edge Computing. Beijing University of Posts and Telecommunications (2019) 9. Lei, B., Liu, Z.Y., Wang, X.L., et al.: A new edge computing solution based on the integration of cloud, network and edge: computing power network. Telecommunications Science 35(09), 50–57 (2019) 10. Ma, X.Y.: Research on Trusted Collaboration Mechanism Based on Edge Computing. Beijing University of Posts and Telecommunications (2019)

Chapter 51

Research and Application of Edge Resource Allocation Algorithm of Power Internet of Things Based on SDN Dong Li, Shuangshuang Guo, Yue Zhang, Hao Xu, and Qingquan Dong

Abstract With the maturity and promotion of edge computing technology, more and more computing functions in the Internet of Things are decentralized to the edge, but edge resources are facing the challenge of low resource capacity. How to plan to make limited resources and configure network nodes to meet the needs and distribution of computing services is a problem worthy of study. Based on the research on resourceconstrained edges, this paper abstracts the system model and load model, proposes the power Internet of Things edge resource allocation algorithm based on SDN, and simulates it in an experimental environment. The results show that the algorithm can effectively balance the edge load and improve computing power at the edge of the power Internet of Things.

51.1 Introduction With the development of the Internet of Things, edge computing allows computing functions to be implemented at the edge of the network. Compared with cloud computing, edge computing deploys services near devices or users [1], which has the advantages of low latency and high bandwidth. This is especially important for delay-sensitive applications, such as transformer operation control, equipment monitoring and other services. Combined with high-bandwidth 5G technology [2], edge computing has great potential in establishing ultra-low latency and high-throughput services. Running virtual network functions on edge devices such as the Internet of Things and enterprise gateways has recently attracted the attention of academia and industry to provide ultra-low latency and reduce traffic through the core network D. Li (B) · S. Guo · Y. Zhang · H. Xu Information and Communication Branch of State Grid Shandong Electric Power Co., Ltd., Jinan City, Shandong Province, China e-mail: [email protected] Q. Dong Information System Integration Company, NARI Group Corporation, Nanjing City, Jiangsu Province, China © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022 K. Nakamatsu et al. (eds.), Advanced Intelligent Technologies for Industry, Smart Innovation, Systems and Technologies 285, https://doi.org/10.1007/978-981-16-9735-7_51


[3]. The current power communication system is relatively complete in terms of backbone network construction, management, and operation and maintenance [4]. However, with the continuous development of smart grids and the power IoT, the traditional power network operation structure has found it difficult to support smart power distribution services effectively. In the allocation of edge resources in particular, traditional cloud-based transmission and computing still suffer from high delay and high cost, so the study of resource allocation methods at the edge is a worthwhile topic [5]. A major problem edge computing faces is low resource capacity. Compared with servers in the cloud, edge servers usually have far fewer CPU resources; for example, the Netgear RAX120 wireless router [6] is only equipped with a dual-core 2.2 GHz processor, and edge servers are easily overloaded by sudden and periodic user requests [7]. To make matters worse, we found in actual experiments that, in addition to the virtual network function computations that consume CPU, communication between computing tasks also consumes a lot of CPU; existing work has not yet explored this type of resource consumption, which may overload edge devices [8]. A potential solution is to offload the load to the remote cloud, but because the communication delay between the edge and the cloud is long and must be paid for, offloading tasks from an overloaded edge to a remote edge or the cloud is only suitable for delay-tolerant requests [9]. We therefore consider deploying virtual network function chains on resource-constrained edges and using peer edge devices to cope with potential overload. When an edge has sufficient resources to process a virtual network function chain request, the chain is deployed locally [10, 11]; otherwise, the chain is divided into sub-chains, and each sub-chain is deployed on an adjacent edge.

51.2 System Model and Load Model

51.2.1 System Model

The edge nodes of a single power IoT system are represented by a graph G(V, E), and the delay between any two edge nodes u and v is given by the delay matrix l(u, v) [8]. Requests arrive online, and each request r is described by a tuple (s_r, d_r, t_r^0, t_r^1, C_r, b_r, l_r), where s_r is the source, d_r is the destination, and t_r^0 and t_r^1 are the arrival and departure times. Request r requires the deployment of the service chain C_r = {f_1, ..., f_k}, with each f_i ∈ F, where F is the set of all network functions; b_r is the initial bandwidth requirement before entering the chain, and l_r is the delay requirement of r. At time t, the set of active requests is denoted by R(t). With the widespread use of lightweight containers, we assume that any edge device can create any type of network function [12]. To capture the traffic-change characteristics of network functions, we assume that each network function f changes the traffic at a rate β_f, and to capture the cost of processing a packet in a network function we denote the ratio of its computation cost to its communication cost by α_f.

51.2.2 Network Load Model

A network function f changes the flow rate at rate β_f [13]. For a network function in the chain, the ingress flow is the initial request rate b_r multiplied by the accumulated change rates of all preceding network functions in the chain. Mathematically, for network function f in chain C_r, the ingress flow is

    p_{r,f}^{in} = b_r · ∏_{f_i < f, f_i ∈ C_r} β_{f_i}

and the egress flow is

    p_{r,f}^{out} = p_{r,f}^{in} · β_f

At time t, the ingress traffic p_{r,f,n}^{in}(t) of f ∈ C_r residing on edge device n ∈ V is

    p_{r,f,n}^{in}(t) = x_{t,r,f,n} · p_{r,f}^{in}

where x_{t,r,f,n} is a decision variable that equals 1 if and only if, at time t, f ∈ C_r is deployed on n ∈ V, and 0 otherwise. Similarly, the egress traffic is

    p_{r,f,n}^{out}(t) = x_{t,r,f,n} · p_{r,f}^{out}

Based on this load model, at time t the total CPU load of device n ∈ V is the weighted sum of its communication and computation loads:

    load_n(t) = Σ_{r ∈ R(t), f ∈ C_r} [ p_{r,f,n}^{in}(t) + p_{r,f,n}^{out}(t) + α_f · p_{r,f,n}^{in}(t) ]
              = Σ_{r ∈ R(t), f ∈ C_r} [ (1 + α_f) · p_{r,f,n}^{in}(t) + p_{r,f,n}^{out}(t) ]

r ∈R(t), f ∈Cr

Inspired by existing research on load balancing problems, our algorithm consists of three steps. First, set the node weight as an index of its existing load. γ is a parameter. When the request r arrives, the load loadn (t) is known. cpu(n) is the CPU capacity of node n ∈ G, and ˆ is the estimate of the maximum load on all nodes in the offline optimal value. The second step is to try to find the effective least cost path from r source to r destination. Step 3 verifies the path found in step 2 and checks the

518

D. Li et al.

current load on all nodes. If there is a node whose load is greater than θ·ˆ, the request is rejected and doubled. θ is the parameter derived by γ. We assume that the delay effective path P is given in step 2, and the purpose of our deployment chain is to find the minimum cost allocation of virtual network functions on P devices. For the current time t and the current request r, solve: 

  xt,r, f,n · γ ωn (t+1) − γ ωn (t)

f ∈Cr , n∈ p

∀ f ∈ P, n ∈ / P, xt,r, f,n = 0 Variable : xt,r, f,n ∈ {0, 1}, ∀ f ∈ F, n ∈ P We designed a dynamic programming algorithm to obtain the optimal solution for load balancing. The equation is as follows:   , i == 0, , C cost P 0 r,0∼ j   cost P0∼i , Cr,0∼ j =  minl∈[0,i] cost(Pl , f 0 ), j ==  0  ⎩ min x∈[0, j] cost P0∼i−1 , Cr,0x + cost Pi , Cr,x j O.W. ⎧ ⎨

  In cost P0∼i , Cr,0∼ j , 0 ∼ i is the edge device index along the predetermined path P, and 0 ∼j is the network function index in the chain r . The dynamic programming equation expresses the deployment of the 0 ∼j network function of chain r to the front 0 − i edge device of the path. It can be divided into two situations. First, deploy the 0 ∼x network function on the 0 ∼ i−1 device, and deploy the x∼ y network function on the remaining device i. Find the minimum value by iterating all possible x∈ i−1. The dynamic programming equation ends in two special cases. For the special case of i = = 0, there is only one device, and the cost is to deploy  the entire chain on the device. At j = = 0 The minimum cost. In both cases, cost P0 , Cr,0∼ j and cost(Pi , f 0 ) can be obtained by the formula  2.2. The time complexity  2 in Sect. r . This is because the cost is · of the dynamic programming method is O |P| indexed by two dimensions, namely |P| and r , and each value requires at most |P| comparisons.

51.3 Simulation Experiment The random topology is generated from the Lemon library, which contains 30 nodes and 67 edges [14]. The link delay is randomly set from 1 to 5 ms. We consider a 60-s time period. In each second, the number of requests arriving is randomly selected from 4 to 9.

51 Research and Application of Edge Resource Allocation …

519

Compared with Geo-distributed, the maximum load reduction is shown in Fig. 51.1: (1) Comp-Comm or Comp’s virtual network function load balancing is 30– 40% better than Geo-distributed on average; (2) Comp-Comm virtual The network function load balancing is 10% higher than the average value of Comp alone. In more detail, compare Fig. 51.1a, b. They have the same β (rate of change in traffic) but different α (rate of calculation/communication). We can see that the Comp-VNF curve is sensitive to α, because the Comp-VNF curves in the two figures are obviously different. The Comp-VNF curve in Fig. 51.1b is better than the corresponding curve in Fig. 51.1a. It is worth noting that Comp-VNF only considers the CPU consumed during the allocation of virtual network functions, so when the computing load dominates the overall load, the allocation of Comp-VNF has a better guiding effect. The result of Algorithm 3 is shown in Fig. 51.2. The curve of Redeploy-3 describes the case of T = 5 and δ = 3. The curve of Redeploy-5 describes the case of T = 5

Fig. 51.1 Compared with Geo-distributed, the maximum load reduction in all nodes

Fig. 51.2 Performance comparison of redeployment algorithms

520

D. Li et al.

and δ = 5. In both cases, β = [0.1, 1], α = [0.1, 3]. Since the simulation duration is 60 s, there is almost no redeployment cost, that is, 12 times, so both curves can reduce the maximum load between all nodes. On average, Redeploy-5 is better than Redeploy-3 because in each redeployment, it redeploys more requests.

51.4 Summary This article considers the use of edge devices in the power Internet of Things to build ultra-low latency virtual network function services for users. The main challenge is to deal with the shortage of resources. To this end, we use peer-to-peer edge devices that are both close in distance and have relatively low load. We defined the problem of resource allocation through SDN, and we proved that it is NP-completely difficult. In order to solve this problem, we have given an optimal algorithm for the edge path of the power Internet of Things that has been deployed. The algorithm can realize the intelligent allocation of edge resources, which can further reduce the maximum load by 5%, and improve the stability and safety performance of the power network operation. Acknowledgements This research was funded by the Science and Technology Project of the State Grid Shandong Electric Power Company: Research on the key technologies of the power Internet of Things platform based on the cloud-Side collaborative architecture-Research on the key technologies of data interaction and identification analysis of edge IoT agents (520627200002).

References 1. Chen, L.H.: Design and research of unified data platform for power dispatching center based on power internet of things. Power Equip. Manag. (7) (2019) 2. Abbas, N., Zhang, Y., Taherkordi, A., Skeie, T.: Mobile edge computing: a survey. IEEE Internet Things J. 5, 450–465 (2017) 3. Aspnes, J., Azar, Y., Fiat, A., Plotkin, S., Waarts, O.: On-line routing of virtual circuits with applications to load balancing and machine scheduling. J. ACM (JACM) 44, 486–504 (1997) 4. Bilal, K., Erbad, A.: Edge computing for interactive media and video streaming. In: 2017 Second International Conference on Fog and Mobile Edge Computing (FMEC), IEEE, pp. 68–73 (2017) 5. Cao, L., Sharma, P., Fahmy, S., Saxena, V.: Envi: elastic resource flexing for network function virtualization. In: 9th USENIX Workshop on Hot Topics in Cloud Computing (HotCloud 17) (2017) 6. Cao, L., Sharma, P., Fahmy, S., Saxena, V.: NFV-vital: a framework for characterizing the performance of virtual network functions. In: 2015 IEEE Conference on Network Function Virtualization and Software Defined Network (NFV-SDN), IEEE, pp. 93–99 (2015) 7. Chen, Y., Wu, J.: NFV middlebox placement with balanced set-up cost and bandwidth consumption. In: Proceedings of the 47th International Conference on Parallel Processing, ACM, p. 14 (2018) 8. Cziva, R., Anagnostopoulos, C., Pezaros, D. P.: Dynamic, latency-optimal VNF placement at the network edge. In: IEEE infocom 2018-IEEE conference on computer communications, IEEE. pp. 693–701 (2018)

51 Research and Application of Edge Resource Allocation …

521

9. Cziva, R., Pezaros, D.P.: Container network functions: bringing NFV to the network edge. IEEE Commun. Mag. 55, 24–31 (2017) 10. Dezso, B., J˝uttner, A., Koväcs, P.: Lemon: a C++ library for efficient modeling and optimization in networks. http://lemon.cs.elte.hu (2019) 11. Jia, Y., Wu, C., Li, Z., Le, F., Liu, A., Li, Z., Jia, Y., Wu, C., Le, F., Liu, A.: Online scaling of NFV service chains across geo-distributed datacenters. IEEE/ACM Trans. Netw. (TON) 26, 699–710 (2018) 12. Ko, S.W., Han, K., Huang, K.: Wireless networks for mobile edge computing: spatial modeling and latency analysis. IEEE Trans. Wirel. Commun. 17, 5225–5240 (2018) 13. Fei, X., Liu, F., Xu, H., Jin, H.: Towards load-balanced VNF assignment in geo-distributed NFV infrastructure. In: 2017 IEEE/ACM 25th International Symposium on Quality of Service (IWQoS), IEEE, pp. 1–10 (2017) 14. Gao, B., Zhou, Z., Liu, F., Xu, F.: Winning at the starting line: Joint network selection and service placement for mobile edge computing. In: IEEE INFOCOM 2019-IEEE Conference on Computer Communications, IEEE. pp. 1459–1467 (2019)

Chapter 52

Research and Application of Key Technologies of Multi-round Dialogue in Intelligent Customer Service Xiubin Huang, Lingli Zeng, Xiaoyi Wang, Jun Yu, and Ziqian Li

Abstract Natural language human–computer dialogue is a comprehensive problem in the field of artificial intelligence, covering important aspects such as speech processing, language processing, decision control, and knowledge reasoning. Among them, research on the key technologies of multi-round dialogue is a challenging task. This paper analyzes and studies the key technologies of multi-round dialogue, analyzes and compares the models commonly used in deep question answering platforms under the traditional pan-layer network structure (CD-AN), and, based on their existing problems, proposes improvements to the existing algorithms to improve the quality of interaction and the computational effectiveness.

52.1 Introduction The intelligent human–machine dialogue system takes the user's voice or text as input, understands the semantic information contained in human speech, makes dialogue decisions based on the dialogue history, organizes the language, and returns it to the user in the form of voice [1]. In a multi-round dialogue system, extracting the global context information of the dialogue in real time and giving a reasonable answer plays an important role in providing long-term, friendly human–computer interaction. A long conversation usually involves switching between multiple conversation scenarios. Because the contexts of different dialogue scenarios are inconsistent, not all context information is conducive to generating the current response. Therefore, perceiving changes in the dialogue scene in real time, so as to respond correctly in line with the current scene, is an issue that a human–machine multi-round dialogue system needs to address. This article aims to implement a non-object-driven human–machine multi-round dialogue system based on end-to-end deep neural network technology that can fully understand the context of the dialogue. It mainly introduces multi-round dialogue based on deep learning and, through research on this technology, proposes an in-depth interactive text fusion method [2].

X. Huang · L. Zeng · X. Wang · Z. Li
State Grid Customer Service Center Co. Ltd, Tianjin 300306, China

J. Yu (B)
Nari Group Co. Ltd, Nanjing 215200, China
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022
K. Nakamatsu et al. (eds.), Advanced Intelligent Technologies for Industry, Smart Innovation, Systems and Technologies 285, https://doi.org/10.1007/978-981-16-9735-7_52

52.2 Key Technology Methods of Multi-round Dialogue Based on Deep Learning

52.2.1 Multi-round Dialogue Preprocessing

In the traditional deep learning process, the segmentation and recognition of the data set [3] make it impossible to effectively understand the required information and, at the same time, introduce some errors. First, a model based on a deep neural network must be established; second, multi-layer convolution is studied to improve the performance of the model and bring it closer to the real world; finally, content that is meaningful or more important to the target is extracted from the training samples. Preprocessing analyzes and processes the data on a multi-dimensional basis, through further mining of the differences and heterogeneity between different targets [4]. Taking text preprocessing as an example, it mainly includes the following steps (Fig. 52.1); a minimal code sketch of these steps follows the list.

(1) Encoding conversion: Common Chinese encoding formats include GBK, UTF-8, etc. GBK is an encoding format mainly for Chinese, which occupies only 2 bytes per Chinese character and is therefore more space-saving. UTF-8 is a more general encoding that occupies 3 bytes per Chinese character, but it can cover the characters of all languages, so it is more versatile.
(2) Case conversion: Chinese text often mixes Chinese and English, such as 5G, T-shirt, big S, etc. Simply removing all English is not a good solution. To reduce ambiguity, preprocessing converts all lowercase English letters to uppercase.
(3) Traditional-Simplified conversion: All traditional and simplified characters are collected into a dictionary. If traditional characters appear in the text, they are replaced with the corresponding simplified characters, which completes the traditional-to-simplified conversion.
(4) Punctuation mark conversion: Chinese uses full-width punctuation marks, while English uses half-width punctuation marks. Preprocessing keeps only the commonly used punctuation marks, converts them into half-width marks, and replaces all other punctuation marks with spaces.
(5) Noise removal: Illegal characters, emoticons, and characters in languages other than Chinese, English, and digits are treated as noise. The noise removal module keeps only Chinese, English, and digits; all other characters are removed.

Fig. 52.1 Text preprocessing graph
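A minimal Python sketch of steps (2)-(5) is given below. It is only an illustration under assumed rules: the tiny traditional-to-simplified dictionary and the retained punctuation set are placeholders, not the dictionaries actually used in this chapter.

```python
import re

# Hypothetical mini dictionary; a real system would load a full
# traditional-to-simplified mapping table.
T2S = {"電": "电", "費": "费", "資": "资"}

# Full-width marks that are kept and mapped to half-width equivalents.
KEEP_PUNCT = {"，": ",", "。": ".", "？": "?", "！": "!", "：": ":", "；": ";"}

def preprocess(text: str) -> str:
    # (1) Encoding conversion is assumed to have produced a UTF-8 str already.
    # (2) Case conversion: unify mixed Chinese/English text to uppercase English.
    text = text.upper()
    # (3) Traditional -> Simplified conversion via dictionary lookup.
    text = "".join(T2S.get(ch, ch) for ch in text)
    # (4) Punctuation conversion: map common full-width marks to half-width.
    text = "".join(KEEP_PUNCT.get(ch, ch) for ch in text)
    # (5) Noise removal: keep Chinese characters, uppercase English, digits,
    #     the retained punctuation and spaces; drop everything else.
    text = re.sub(r"[^\u4e00-\u9fffA-Z0-9,.?!:; ]", " ", text)
    return re.sub(r"\s+", " ", text).strip()

print(preprocess("查5g電費！！😀"))   # -> 查5G电费!!
```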

52.2.2 Multiple Rounds of Dialogue The representation of multiple rounds of dialogue requires a mapping of the input; that is, in deep learning, the output information is described at different positions [5]. When "1" is used as an indicator, the data points (0 and 5) must first be written down and then treated as a feature vector; if the input is already a feature vector, the sample attribute values can be extracted from it directly. For the same question, there are great differences between different types and levels, yet they also share certain similar characteristics (for example, high-dimensional, low-frequency and other signal waveforms). In addition, in multiple rounds of dialogue, the process of expressing language also proceeds from shallow to deep and deepens gradually, and the result is finally obtained after the relevant information and knowledge are merged.

52.2.3 Multi-round Dialogue Key Technology Matching The key technology here is a multi-round collaborative filtering algorithm [6], based on the Raastici neural network and related theories such as deep vision, which can be analyzed and researched once the basic knowledge is in place. For the above key technologies, the most important thing is how to associate the various modules with each other without affecting overall performance or causing a series of problems such as system crashes and data loss, so as to obtain a good interaction effect. Ensuring the stable operation of the entire process is one of the key goals, and only on this premise can the design and realization of the multi-round collaborative filtering strategy be completed successfully [7]. The implementation steps of the user-based collaborative filtering algorithm (taking video preference recommendation under big data as an example) are:

(1) Find a collection of users who have the same interests as the target user;
(2) Find the items that users with the same hobbies as the target user like but that the target user has never heard of, and predict their ratings;
(3) Generate a TOP-N recommendation list.


The measurement of similarity is the most important part of the algorithm. Common methods for measuring similarity are as follows (taking film and television ratings as an example).

(1) Cosine similarity: When constructing the scoring matrix, if the user has not rated an item, the score is considered to be 0. Here a and b are m-dimensional rating vectors, where m represents the number of items:

$$\cos(a, b) = \frac{\sum_{i=1}^{m} a_i \cdot b_i}{\sqrt{\sum_{i=1}^{m} a_i^2} \times \sqrt{\sum_{i=1}^{m} b_i^2}} $$

(2) Pearson correlation coefficient:

$$\operatorname{sim}(a, b) = \frac{\sum_{q \in P} (r_{a,q} - \bar{r}_a)(r_{b,q} - \bar{r}_b)}{\sqrt{\sum_{q \in P} (r_{a,q} - \bar{r}_a)^2} \, \sqrt{\sum_{q \in P} (r_{b,q} - \bar{r}_b)^2}} $$

This measures the similarity between user a and user b, where the set P refers to the items that both user a and user b have rated and $\bar{r}_a$ refers to the average rating of user a. Its advantage over cosine similarity is that some users are accustomed to giving high scores while others are very conservative; when the average is subtracted from each score, a fairly obvious linear correlation between users emerges.
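The two similarity measures can be computed directly from the rating data. The sketch below is a plain-Python illustration; note that, for simplicity, the Pearson version averages each user's ratings over the co-rated set P rather than over all of that user's ratings, which is a small deviation from the formula above.

```python
import math

def cosine_sim(a, b):
    """Cosine similarity between two rating vectors (unrated items scored 0)."""
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return num / den if den else 0.0

def pearson_sim(ratings_a, ratings_b):
    """Pearson correlation over the set P of items rated by both users."""
    common = set(ratings_a) & set(ratings_b)
    if not common:
        return 0.0
    mean_a = sum(ratings_a[q] for q in common) / len(common)
    mean_b = sum(ratings_b[q] for q in common) / len(common)
    num = sum((ratings_a[q] - mean_a) * (ratings_b[q] - mean_b) for q in common)
    den = math.sqrt(sum((ratings_a[q] - mean_a) ** 2 for q in common)) * \
          math.sqrt(sum((ratings_b[q] - mean_b) ** 2 for q in common))
    return num / den if den else 0.0

# Toy film-rating example (item id -> score).
user_a = {"film1": 5, "film2": 4, "film3": 1}
user_b = {"film1": 4, "film2": 5, "film3": 2}
print(cosine_sim([5, 4, 1, 0], [4, 5, 2, 3]), pearson_sim(user_a, user_b))
```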

52.2.4 Key Feature Extraction Key feature extraction in deep learning is based on different classification ideas, which are analyzed and researched to better understand the goals of deep-level data mining algorithms [8]. The eigenvalue decomposition (FEIS) method is used to complete the in-depth processing of multi-round dialogue content, and the ubiquitous evaluation index system is quantified to realize the interactive multi-round collaborative filtering function when the similarity with the original text is high. To improve modeling efficiency and ensure the accuracy of the model output, it is necessary to perform pre-estimation, attribute extraction, and consistency verification on the input samples [9].

(1) Multi-round dialogue feature extraction based on deep learning. In this study, the fourth layer of the neural network is used for training, and the third-layer data set is used to predict the experimental results.
(2) In this research, the key feature extraction of multi-round dialogue based on deep learning mainly includes:
    (1) Decomposing the data set so that the deep data is concentrated into different levels, and then determining the theme and content according to the differences in the attribute information contained in each layer.
    (2) Aiming at the problems in deep learning, proposing a method based on neural network technology to extract probabilistic semantic information features, such as frequency and non-moment keywords, in multiple rounds of dialogue.

52.2.5 Evaluation In multi-round dialogue learning, deep learning is one of the most important, effective, and fastest-developing directions. It not only helps improve knowledge understanding but also makes the learned content easier to digest. This article mainly establishes a cognitive model that simulates the interaction between humans and the environment, built on simple, easy-to-understand, fast-paced multimedia technologies and the deep-vision neural network, and optimizes and improves its performance through related training and simulation analysis. A visualization tool with strong in-depth capabilities and a high degree of generality is provided. In the context of multi-dimensional deep learning, a set of effective, highly systematic and easy-to-use deep understanding models has been constructed based on neural networks. This method can well solve the problems in traditional ubiquitous research that object features cannot be predicted accurately and that classifiers have difficulty determining sample types.

52.3 Research and Application of Multi-round Dialogue

52.3.1 Environment Setup

Environment construction is considered a very complicated, tedious, long, and difficult part of deep learning research. First, the method is systematically and scientifically verified and improved based on the theory of deep neural networks; second, a new idea of multi-sensor data acquisition and information integration based on the ultrasound image fusion (DBN) algorithm is proposed; the concept of deep learning is then realized on top of deep network analysis technology to process the multi-dimensional data involved in environment construction. In current dialogue systems, the modeling of a single-round dialogue is relatively mature, but multi-round dialogue still poses a big challenge. The biggest challenges are the co-reference relationships and the missing information in multiple rounds of dialogue, as shown in Table 52.1.

Table 52.1 Example of relationship information

Context1
  Utterance1   ChatBot: Dear user, I'm Xiao E, how can I help you?
  Utterance2   Human: Electricity bill
  Utterance2*  Human: Check the electricity bill
  Utterance3   ChatBot: Please enter your account number and end with #
  Utterance4   Human: ok

Context2
  Utterance1   ChatBot: Dear user, I'm Xiao E, how can I help you?
  Utterance2   Human: power failure
  Utterance2*  Human: Query power outage information
  Utterance3   ChatBot: Please enter your account number or reserve your mobile phone number and end with #
  Utterance4   Human: All right
  Utterance5   ChatBot: Hello, your area is undergoing emergency repairs due to heavy rain, please be patient

In order to solve the problem of co-reference and missing information in multiple rounds of dialogue, the idea here is to train an utterance rewriter to convert multiple rounds of dialogue into a single round of dialogue. The purpose is to change Utterance2 into Utterance2*, as shown in Table 52.1. Since the information has been completed, the multiple rounds at this point are equivalent to a single round of dialogue; the rewritten utterance can be input into the dialogue system and processed as a single round. To train the utterance rewriter, a data set containing 20,000 multi-round dialogues was created, in which every sentence exists in pairs; an efficient conversion-based utterance rewriter is proposed, which performs better than several strong baselines; finally, the rewriter is applied to real-life online chat robots, and significant improvements have been obtained.

52.3.2 Modeling of Multiple Rounds of Dialogue In deep learning, modeling mainly works by establishing multi-dimensional data models and integrating them into deep-level language processing, semantic representation, and other fields. Modeling technology mainly includes the following aspects: obtaining depth information directly; and constructing a generalized network structure framework or establishing a virtual world model, using virtual objects to simulate and analyze actual application scenarios rather than relying on traditional computer-assisted means to obtain simulation results. These all belong to the modeling technology used in the construction of generalization technology.

This article is mainly based on the data classification and fusion theory in deep learning, multi-dimensional deep model algorithms, and neural networks to construct the experimental-layer model. After the training samples are established, they are divided into 10 dimensions and into 4 structure types from low to high; at the same time, according to the classification results of the five features in different categories, three types of markers are used for identification. The design process requires a series of tasks such as data processing, analysis, and output, all of which rely on modeling software; complex problems are inherently complex and cannot be input into the computer system directly. To overcome difficulties in practice, such as the large amount of information and the complicated structure, it is necessary to combine the relevant knowledge of simulation technology and language simulation after establishing a suitable model.

52.3.3 Model Definition Each sample is defined as (H, U_n) → R, where H = {U_1, U_2, …, U_{n−1}} represents the previous n − 1 rounds of conversation history, U_n represents the utterance of the n-th round, and R represents the utterance output after the co-reference relationships and the missing information have been resolved. The Utterance ReWriter is trained to automatically infer the rewritten U_n from the conversation history data. The process first encodes (H, U_n) into a sequence of vectors and then uses a pointer network to decode R. The overall model architecture is shown below (Fig. 52.2).

Fig. 52.2 Model architecture diagram
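As a toy illustration of the data side of this setup (not the authors' implementation), the sketch below packs a dialogue history H and the current utterance U_n into a single encoder input string and pairs it with the rewritten target R; the separator token and field names are assumptions.

```python
# Minimal sketch of building (H, U_n) -> R training samples for an
# utterance rewriter. SEP and the dict keys are illustrative assumptions.
SEP = " [SEP] "

def build_sample(history, utterance, rewritten):
    """Pack the n-1 history turns and the current turn U_n into one encoder
    input string; the rewritten utterance R is the decoding target."""
    return {"src": SEP.join(history + [utterance]), "tgt": rewritten}

sample = build_sample(
    ["ChatBot: Dear user, I'm Xiao E, how can I help you?"],
    "Human: power failure",
    "Human: Query power outage information",
)
print(sample["src"])
print(sample["tgt"])
```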


52.4 Summary This paper mainly focuses on the research of key technologies in deep learning. Based on the analysis of various algorithms and related design ideas, it proposes a deep-learning-based dialogue reconstruction and fusion method. Starting from the deep neural network model, a deep corpus is established and its feasibility and generalization ability are verified through simulation experiments; finally, a double-threshold rule based on deep learning is used to study the generalization effect, which will be analyzed in detail in the next step of this work. Since deep learning for multi-round dialogue is a long-term, systematic, and open process with many uncertain factors and wide-ranging but scattered knowledge points, it is necessary to realize resource sharing through effective technologies and solutions to improve its value. Furthermore, based on the theory of deep learning, a complete and scientific system framework and development tools for multi-round dialogue have been established to solve the problems involved in the model.

References 1. Lu, X.W.: Research on open field multi-round dialogue system based on deep learning. Doctoral dissertation, East China Normal University (2020) 2. Xu, X.: Research on question answering system based on deep learning. J. Hubei Normal Univ. Nat. Sci. Ed. 39(01), 10–18 (2019) 3. Ren, F.J., Yu, B., Bao, Y.W.: A multi-round emotional dialogue method based on deep learning. CN108874972A. (2018) 4. Zhang, X.D., Wang, H.F.: An automatic analysis method of conversation emotion based on deep learning 5. Liang, S.Z.: Research on the key technology of intelligent chat robots based on deep learning. Doctoral dissertation, Guangxi University 6. Ye, Y.L., Cao, B., Fan, J., Wang, J., Chen, J.B.: Coarse-grained intention recognition method for task-oriented multi-round dialogue. Small Microcomput. Syst. 41(08), 54–60 (2020) 7. Wu, X., Du, Z.K.: A construction method of human-machine multi-round dialogue model based on scene context. CN108170764A (2018) 8. Jian, R.X., Yang, Z.X.: An artificial intelligence dialogue method and system. CN106599196A 9. Zhang, S., Cao, B., Xu, Y., Fan, J.: Number entities recognition in multiple rounds of dialogue systems. Comput. Model. Eng. Sci. 127(1), 309–323 (2021)

Chapter 53

Rolling Bearing Fault Diagnosis Method Based on SSA-ELM Long Qian, Zhigang Wang, Zhiqiang Zhou, Dong Li, Xinjie Peng, Binbin Li, and Bin Jiao

Abstract In the fault diagnosis of motors and their control systems, the fault diagnosis of rolling bearings is particularly important. The main method is to classify and recognize faults by classifying the feature vectors of each fault type. In this paper, the sparrow search algorithm (SSA) is proposed to optimize the extreme learning machine (ELM), a machine learning method, in order to classify and diagnose the rolling bearing faults of motors. Firstly, the sparrow search algorithm is used to optimize the extreme learning machine, and the resulting SSA-ELM is used to accurately classify and diagnose the rolling bearing faults of the motor. Experimental results show that, compared with the original ELM, SSA-ELM can classify and diagnose rolling bearing faults more quickly, which demonstrates the stability of the proposed model over time.

53.1 Preface Rolling bearings are important parts of machinery and, working long-term in complex environments, are also among the parts most prone to failure. Monitoring their running state is very difficult, yet their running performance plays a crucial role in the running condition of motor-driven equipment; if a failure occurs, it can cause a huge loss to the whole set of mechanical equipment [1]. Deep learning technology is capable of learning, expressing, and analyzing massive data [2]. At present, with continuous progress in the field of artificial intelligence, a large number of researchers have made good use of artificial neural networks (ANN), fuzzy theory, and other technologies in the fault diagnosis of mechanical equipment. However, under actual working and production conditions, a large number of fault samples are often needed for deep learning network training, so many neural networks train slowly, fall easily into local minima, and learn at a very low rate. The extreme learning machine can solve the above problems well and has strong generalization ability. It is widely used in fault diagnosis and has attracted the attention of many researchers at home and abroad, and various improved ELM methods continue to emerge [3]. In order to optimize the initial weights and thresholds of the ELM model, select its parameters more accurately, and improve the generalization ability and recognition accuracy of the ELM model, this paper proposes using the SSA to improve the fault diagnosis performance of the ELM.

L. Qian · Z. Wang (B) · Z. Zhou · D. Li · X. Peng · B. Li · B. Jiao
School of Electrical Engineering, Shanghai Dianji University, Shanghai, China
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022
K. Nakamatsu et al. (eds.), Advanced Intelligent Technologies for Industry, Smart Innovation, Systems and Technologies 285, https://doi.org/10.1007/978-981-16-9735-7_53

53.2 Sparrow Search Algorithm

Swarm intelligence optimization algorithms construct mathematical models by imitating various physical or biological behaviors in nature, such as those of moths, bees, wolves, and birds, and obtain the best solution through multiple iterations [4–6]. The SSA [7] was put forward by Xue and Shen in 2020. The algorithm obtains the optimal solution by simulating the population intelligence, foraging, and anti-predation behavior of sparrows, and shows strong performance in precision, convergence speed, stability, and robustness. In daily life, sparrows are divided into finders and followers; through this relationship between discoverers and followers, sparrows obtain food to survive [8]. The population made up of n sparrows is

$$X = \begin{bmatrix} x_1^1 & x_1^2 & \cdots & x_1^d \\ x_2^1 & x_2^2 & \cdots & x_2^d \\ \vdots & \vdots & & \vdots \\ x_n^1 & x_n^2 & \cdots & x_n^d \end{bmatrix}, \qquad F_X = \begin{bmatrix} f\big([x_1^1, x_1^2, \ldots, x_1^d]\big) \\ f\big([x_2^1, x_2^2, \ldots, x_2^d]\big) \\ \vdots \\ f\big([x_n^1, x_n^2, \ldots, x_n^d]\big) \end{bmatrix} \tag{53.1}$$

where d represents the dimension of the variables of the problem to be optimized, n is the number of sparrows, and F_X holds the fitness values. In SSA, during each iteration, the finder's position is updated as follows:

$$X_{i,j}^{t+1} = \begin{cases} X_{i,j}^{t} \cdot \exp\left(\dfrac{-i}{\alpha \cdot \mathrm{iter}_{\max}}\right), & \text{if } R_2 < ST \\[2mm] X_{i,j}^{t} + Q \cdot L, & \text{if } R_2 \ge ST \end{cases} \tag{53.2}$$

During the foraging process, some followers keep an eye on the finders. The position update of the followers is described as follows:

$$X_{i,j}^{t+1} = \begin{cases} Q \cdot \exp\left(\dfrac{X_{worst}^{t} - X_{i,j}^{t}}{i^{2}}\right), & \text{if } i > n/2 \\[2mm] X_{P}^{t+1} + \left| X_{i,j}^{t} - X_{P}^{t+1} \right| \cdot A^{+} \cdot L, & \text{otherwise} \end{cases} \tag{53.3}$$

It is assumed that the danger-aware sparrows make up 10–20% of the population. When the population is aware of danger, the model is

$$X_{i,j}^{t+1} = \begin{cases} X_{best}^{t} + \beta \cdot \left| X_{i,j}^{t} - X_{best}^{t} \right|, & \text{if } f_i > f_g \\[2mm] X_{i,j}^{t} + K \cdot \left( \dfrac{\left| X_{i,j}^{t} - X_{worst}^{t} \right|}{(f_i - f_w) + \varepsilon} \right), & \text{if } f_i = f_g \end{cases} \tag{53.4}$$

where X_best means that the sparrow at this position is the best in the population and is very safe, and K is the step-size control parameter.
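For readers who want to experiment with the update rules above, the following Python sketch implements Eqs. (53.2)-(53.4) in a minimal loop. It is only an illustrative approximation: the parameter names (pd, sd, st), the boundary handling, and the simplification of the A+ term to A/dim are assumptions, not the exact formulation used in this chapter.

```python
import numpy as np

def sparrow_search(fitness, dim, n=30, iters=100, pd=0.2, sd=0.1, st=0.8,
                   lb=-5.0, ub=5.0, seed=0):
    """Minimal SSA sketch following Eqs. (53.2)-(53.4)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n, dim))
    f = np.apply_along_axis(fitness, 1, X)
    for _ in range(iters):
        order = np.argsort(f)                       # best fitness first
        X, f = X[order], f[order]
        best, worst = X[0].copy(), X[-1].copy()
        n_prod = max(1, int(pd * n))
        r2 = rng.random()
        for i in range(n_prod):                     # finders, Eq. (53.2)
            if r2 < st:
                X[i] = X[i] * np.exp(-(i + 1) / (rng.random() * iters + 1e-12))
            else:
                X[i] = X[i] + rng.normal() * np.ones(dim)
        for i in range(n_prod, n):                  # followers, Eq. (53.3)
            if i > n / 2:
                X[i] = rng.normal() * np.exp((worst - X[i]) / ((i + 1) ** 2))
            else:
                A = rng.choice([-1.0, 1.0], dim)    # simplified A+ term
                X[i] = X[0] + np.abs(X[i] - X[0]) * A / dim
        for i in rng.choice(n, max(1, int(sd * n)), replace=False):  # Eq. (53.4)
            if f[i] > f[0]:
                X[i] = best + rng.normal() * np.abs(X[i] - best)
            else:
                X[i] = X[i] + rng.uniform(-1, 1) * np.abs(X[i] - worst) / (f[i] - f[-1] + 1e-12)
        X = np.clip(X, lb, ub)
        f = np.apply_along_axis(fitness, 1, X)
    i_best = int(np.argmin(f))
    return X[i_best], f[i_best]

x, fx = sparrow_search(lambda v: float(np.sum(v ** 2)), dim=5)
```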

53.3 Optimized ELM Based on Sparrow Search Algorithm

53.3.1 ELM

The ELM was put forward by Huang et al. [9]. The ELM selects the network parameters randomly and obtains the output weights through an analytical calculation, which effectively overcomes the shortcomings of traditional networks. The structure is shown in Fig. 53.1 and is composed of an input layer, a hidden layer, and an output layer.

Fig. 53.1 Network structure

The mathematical model of an ELM network is

$$H \beta = T \tag{53.5}$$

where

$$H = \begin{bmatrix} h(x_1)^{T} \\ \vdots \\ h(x_N)^{T} \end{bmatrix} = \begin{bmatrix} g(w_1 \cdot x_1 + b_1) & \cdots & g(w_L \cdot x_1 + b_L) \\ \vdots & \ddots & \vdots \\ g(w_1 \cdot x_N + b_1) & \cdots & g(w_L \cdot x_N + b_L) \end{bmatrix}_{N \times L} \tag{53.6}$$

In ELM, H is also called the random feature mapping matrix [10]. Thus, Eq. (53.5) is solved for the minimum-norm β:

$$\beta = H^{+} T \tag{53.7}$$

where H^{+} represents the generalized inverse of H.
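A minimal NumPy sketch of the ELM defined by Eqs. (53.5)-(53.7) is given below: input weights and biases are drawn randomly, H is formed with a sigmoid activation, and β is obtained with the Moore-Penrose pseudoinverse. The sigmoid choice, layer size, and toy data are assumptions for illustration, not the exact configuration used in the experiments.

```python
import numpy as np

class ELM:
    """Minimal extreme learning machine: random input weights/biases,
    hidden output matrix H, output weights beta = pinv(H) @ T (Eq. 53.7)."""
    def __init__(self, n_hidden=64, seed=0):
        self.n_hidden, self.rng = n_hidden, np.random.default_rng(seed)

    def _hidden(self, X):
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))   # g = sigmoid

    def fit(self, X, T):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = self._hidden(X)                                    # Eq. (53.6)
        self.beta = np.linalg.pinv(H) @ T                      # Eq. (53.7)
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta                     # Eq. (53.5)

# Toy usage with one-hot targets for a 3-class problem.
X = np.random.default_rng(1).normal(size=(120, 8))
T = np.eye(3)[np.random.default_rng(2).integers(0, 3, 120)]
model = ELM(32).fit(X, T)
pred = model.predict(X).argmax(axis=1)
```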

53.3.2 Sparrow Search Algorithm to Optimize ELM Fault Diagnosis Process

The fitness function is designed as the mean squared error (MSE) of the training-set prediction:

$$\text{fitness} = \arg\min\left(\mathrm{MSE}_{\mathrm{predict}}\right) \tag{53.8}$$

The MSE after training is used as the fitness value: the smaller the MSE, the higher the agreement between the predicted data and the original data. The final output of the optimization is the optimal initial weights and thresholds; the network trained with these optimal initial weights and thresholds is then applied to the test set, yielding an optimal rolling bearing fault diagnosis model and improving the recognition accuracy of the ELM model. The specific implementation steps of SSA-ELM fault diagnosis are as follows (a sketch of the fitness function appears after the list):

(1) Set up the training and testing sample sets.
(2) Initialize the SSA population size, the maximum number of iterations, and the other parameters.
(3) Use the training samples to evaluate the fitness of each individual sparrow, and retain the best fitness value and its location information.
(4) Update the locations of the discoverers and followers according to Eqs. (53.2) and (53.3), and update the positions of the sparrows that are aware of danger according to Eq. (53.4); the sparrows at the periphery of the population move toward the safe area.
(5) Obtain the new position of each sparrow individual and update the global optimal information.
(6) If the number of iterations does not meet the termination condition, repeat from Step (3); otherwise, stop and obtain the optimal weights and thresholds.
(7) Input the test set samples into the optimal ELM model and output the diagnostic results.
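The sketch below shows one way the fitness of Eq. (53.8) could be evaluated for a candidate sparrow position: the position is decoded into ELM input weights and biases, β is solved analytically on the training set, and the MSE on a held-out set is returned. The variable names and the flat encoding of the position vector are assumptions.

```python
import numpy as np

def ssa_elm_fitness(position, X_train, T_train, X_val, T_val, n_hidden):
    """Decode a sparrow position into ELM input weights W and biases b,
    solve beta analytically, and return the validation MSE (Eq. 53.8)."""
    d = X_train.shape[1]
    W = position[: d * n_hidden].reshape(d, n_hidden)
    b = position[d * n_hidden:]
    H = 1.0 / (1.0 + np.exp(-(X_train @ W + b)))
    beta = np.linalg.pinv(H) @ T_train
    H_val = 1.0 / (1.0 + np.exp(-(X_val @ W + b)))
    return float(np.mean((H_val @ beta - T_val) ** 2))

# Each sparrow has d * n_hidden weight entries plus n_hidden bias entries.
rng = np.random.default_rng(0)
Xtr, Ttr = rng.normal(size=(80, 6)), rng.normal(size=(80, 3))
Xva, Tva = rng.normal(size=(20, 6)), rng.normal(size=(20, 3))
pos = rng.normal(size=6 * 16 + 16)
print(ssa_elm_fitness(pos, Xtr, Ttr, Xva, Tva, n_hidden=16))
```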


53.4 Case Analysis of Rolling Bearing Fault Diagnosis

53.4.1 The Experimental Data

In this paper, the rolling bearing data set from CWRU was selected for experimental verification. The test bed of the CWRU bearing fault diagnosis model and the bearings used are shown in Fig. 53.2. The deep groove ball bearing at the motor drive end is an SKF6205, the vibration signals were sampled at two frequencies, 12 and 48 kHz, and the motor speed is approximately 1772 r/min. The experiment uses the drive-end data sampled at 12 kHz, which cover the four fault locations listed in Table 53.1. There are three different fault severities for each position, 0.007, 0.014 and 0.021 inch, which together with the normal state add up to 10 working conditions. The sample data were collected in four different load states, 0, 1, 2, and 3 hp, where hp is the imperial horsepower and 1 hp = 0.75 kW. According to the load state, the samples are divided into four data sets: A, B, C and D. There are 1000 samples for each health state of the bearing, and 2048 data points are used for diagnosis in each sample. The details are shown in Table 53.1.

Fig. 53.2 Case Western Reserve University bearing data testing platform

Table 53.1 Bearing data parameters

Data set   hp        Number of samples      Fault location   Fault diameter (inch)
A/B/C/D    0/1/2/3   1000/1000/1000/1000    Normal           0
                     1000/1000/1000/1000    Ball             0.007
                     1000/1000/1000/1000    Ball             0.014
                     1000/1000/1000/1000    Ball             0.021
                     1000/1000/1000/1000    Inner ring       0.007
                     1000/1000/1000/1000    Inner ring       0.014
                     1000/1000/1000/1000    Inner ring       0.021
                     1000/1000/1000/1000    Outer ring       0.007
                     1000/1000/1000/1000    Outer ring       0.014
                     1000/1000/1000/1000    Outer ring       0.021


Fig. 53.3 Optimal curve

53.4.2 Rolling Bearing State Recognition Experiment SSA-ELM was used to classify and identify the faults, with the maximum number of iterations set to 50. The optimization iteration curve is shown in Fig. 53.3. As Fig. 53.3 shows, when the suggested algorithm is used to optimize the ELM, the MSE error reaches the optimum after only 16 iterations. In this paper, the two fault identification methods, SSA-ELM and ELM, are compared. As Fig. 53.4 shows, the accuracy of the ELM optimized by SSA is 4.87% higher than that of the ELM without optimization, and the expected output is almost the same as the actual output, indicating that the recognition accuracy of the unoptimized extreme learning machine is low and cannot achieve the desired effect. The results are shown in Table 53.2.

Fig. 53.4 Comparison of identification results

Table 53.2 Fault identification results under different identification methods

Diagnostic methods   Accuracy/%   Time/s
ELM                  94.55        2368
SSA-ELM              99.42        2237

53.5 Conclusion

In the ELM, the parameters are generated randomly, which has a certain influence on the result of fault diagnosis. This paper proposes the use of SSA to optimize the structure of the ELM, and this method is used to obtain the required parameters of the ELM; the resulting optimal solution realizes the fault diagnosis of the motor's rolling bearings. By comparing the experimental results of SSA-ELM and ELM, it can be seen that SSA-ELM is better than ELM in terms of both diagnostic accuracy and the time required, and the diagnostic accuracy is increased by 4.87% compared with the original ELM. Therefore, this paper adopts SSA to optimize the ELM algorithm, which realizes the effective identification of rolling bearing faults and is of great significance to the fault diagnosis of rolling bearings.

References 1. Zhu X.J.: Rolling bearing complex fault diagnosis based on sparse non-negative matrix factorization. Chin. J. Constr. Mach. 16(6), 553–558 (2018) 2. Lecun, Y., Bengio, Y., Hinton, G.: Deep learning. Nature 521(7553), 436–440 (2015) 3. Ge, X.L., Zhang, X.: Bearing fault diagnosis method based on singular energy spectrum and improved ELM. J. Electr. Mach. Control 22(1), 1–9 (2008) 4. Chen, T.G., Liu, C., Wang, M.Y., et al.: Artificial search group algorithm based on dynamic parameters. Control Decis. Mak. 34(9), 1923–1928 (2019) 5. Long, W., Cai, S.H., Jiao J.J., Tang, M.Z., Wu, T.B.: Improved whale optimization algorithm for solving large-scale optimization problems. Syst. Eng. Theory Pract. 37(11), 2983–294 (2017) 6. Wang, L., Lv, S.X., Zeng, Y.R.: Review on optimization algorithms for fruit fly. Control Decis. Mak. 32(7), 1153–1116 (2017)


7. Xue, J., Shen, B.: A novel swarm intelligence optimization approach: sparrow search algorithm. Syst. Sci. Control Eng. 8(1), 22–34 (2020) 8. Li, D.Q.: A hybrid sparrow search algorithm. Comput. Knowl. Technol. 17(05), 232–234 (2021) 9. Huang, G.B., Zhu, Q.Y., Siew, C.K.: Extreme learning machine: theory and applications. Neurocomputing, 70(1–3), 489–501 (2006) 10. Huang, G.B., Zhou, H., Ding, X.: Extreme learning machine for regression and multiclass classification. IEEE Trans. Syst. Man Cybern. Part B (Cybern.) 42(2), 513–529 (2012)

Chapter 54

OneNet-Based Smart Classroom Design for Effective Teaching Management Junlin Shang, Yuhao Liu, and Yuxin Lei

Abstract In recent years, with the development of IoT, cloud computing, and big data, there is an increasing number of smart devices for smart classrooms. Compared with traditional classrooms, which lack attendance management and intelligent control of the environment and facilities, these smart classrooms have great advantages in automatically controlling the facilities in the classrooms, ensuring security, and improving teaching quality. In this paper, we propose a new OneNet-based smart classroom architecture for effective teaching management. In this architecture, we improve the traditional check-in method using both Bluetooth positioning technology and mobile terminal equipment. In addition, the proposed architecture can automatically analyze the environmental information in the classroom through sensors and intelligently adjust the classroom's environmental state using the cloud platform. To make it more convenient to control and manage the classrooms' environment, we use data visualization technology to illustrate some real-time state parameters of the classrooms visually. Furthermore, we implement the architecture in a real scenario. The results of the analysis and evaluation of this architecture show that it is feasible to use and promote this kind of architecture in real teaching scenarios.

J. Shang
School of Computer and Information Engineering, Tianjin Normal University, Tianjin, China
e-mail: [email protected]

Y. Liu
School of Electrical Engineering and Automation, Soochow University, Suzhou, China
e-mail: [email protected]

Y. Lei (B)
School of Computer and Information Engineering, Tianjin Normal University, Tianjin, China
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2022
K. Nakamatsu et al. (eds.), Advanced Intelligent Technologies for Industry, Smart Innovation, Systems and Technologies 285, https://doi.org/10.1007/978-981-16-9735-7_54


54.1 Introduction

The classroom is the main place where education work is carried out, and the composition of its environment influences and determines the teaching effect to a large extent [1]. Although classroom facilities have changed over the years, they are still not intelligent. Studies have shown that, given the same level of knowledge, students studying in a smart classroom environment show significantly improved academic performance compared to students studying in a traditional multimedia classroom [2]. Ensuring the fluency of teaching therefore improves the education effect. Moreover, with the development of new teaching models, it is difficult to rely on the traditional classroom environment to develop student-oriented teaching models such as inquiry-based teaching and case-based teaching [3, 4].

In current traditional classrooms, facilities such as curtains, air-conditioning, and lighting need to be adjusted manually, yet people cannot accurately perceive and adjust the classroom environment to a comfortable level [5]. In addition, the adjustment process interrupts teaching, occupies limited teaching time, and affects the fluency of teaching. At the same time, most classrooms rely on manual check-in or GPS location-based check-in for student attendance [6], and some classrooms use fingerprint biometrics [7]. The former requires teachers to call the roll one by one, which takes up a lot of teaching time [8], while GPS locations are easily spoofed by GPS simulation software [9] and cannot accurately identify the attending students, and fingerprint biometric equipment is costly. The locks are also a problem. Traditional classrooms mostly use ordinary mechanical locks, so the administrator needs to keep a large number of classroom keys, which causes great inconvenience for management. At present, some universities use IC door cards to replace traditional mechanical locks; however, IC cards must be produced and authorized for each classroom separately, and once a door card is lost, the IC door lock needs to be reset, which is not conducive to efficient classroom management. Finally, in traditional classrooms the operating status of teaching equipment, lighting, and so forth can only be monitored inside the classroom, and the lack of remote visual information management often causes energy waste.

To overcome the problems mentioned above, we propose a smart classroom management architecture based on a cloud platform. In our architecture, we make full use of IoT sensors, which collect data efficiently, and realize real-time monitoring of the classroom environment with the help of the cloud platform. In addition, equipment inside the classroom, such as the height of the blackboard, can be adjusted automatically through the system to improve the fluency of teaching. In today's smartphones, the Bluetooth module not only has low power consumption but also has small delay and high reliability within its effective range; our architecture uses the smartphone Bluetooth module for sign-in to improve the efficiency of in-class sign-in. Similarly, to solve the problem of monitoring the equipment information in the classroom remotely, we designed an information visualization module so that the administrator can remotely observe and operate equipment such as lighting and air conditioning. In this article, our main contributions are as follows:

• We propose an intelligent equipment adjustment program, in which multiple types of IoT sensors are placed in the system to collect environmental information in the classroom. The system automatically adjusts equipment including air conditioners, curtains, and other devices based on the temperature and light information collected in the classroom, and the blackboard can automatically adjust its height according to the current writing situation and the teacher's height.
• On the point of security, we propose a smart security program. This program adopts a dynamic password method, which removes the inconvenience that traditional mechanical locks or IC door locks bring to classroom management, and ensures the security of the classroom to prevent abuse or waste of resources.
• We also propose a smart sign-in system. The system uses smartphone Bluetooth instead of traditional sign-in, which not only greatly improves sign-in efficiency and prevents cheating, but also greatly reduces system cost compared to fingerprints and other methods. The system provides a historical data query function, which makes it convenient for teachers to count student attendance.
• To achieve data visualization, we propose a data visualization program. The classroom environment information and student sign-in information collected in this system are uploaded to the cloud visualization platform, and the collected information is displayed graphically through centralized statistics of the data set.
• Finally, we implemented the proposed architecture and evaluated and analyzed its effectiveness and feasibility in terms of cost and the improvement of teaching quality.

The remainder of this article is organized as follows: Sect. 54.2 reviews existing smart classrooms and their technology applications, Sect. 54.3 describes the design scheme of each module of this system, and Sect. 54.4 introduces the concrete realization of each module.

54.2 Related Work

54.2.1 Smart Classroom

There are multiple existing studies on smart classrooms. Wang et al. [10] analyzed each process of teaching in traditional classrooms and smart classrooms and concluded that smart classrooms have more advantages in education. Shen et al. [11], through experiments in a college, also concluded that smart classrooms can bring more advantages. Miraoui [12] proposes that the smart classroom can take over tasks that have nothing to do with the course, so it can improve the quality of education and ensure that the attention of teachers and students stays on the course. Gligorić et al. [13] propose using IoT technology in classrooms to improve the quality of courses by collecting and analyzing feedback data, but they do not point out the required sensors. Aguilar et al. [14] propose two frameworks using MASINA and apply them to a device and a piece of software, respectively. Saini et al. [15] analyzed the smart classrooms in various schools today and found that, due to the cost of hardware and software, current smart classrooms still need to be improved and there are challenges in their development.

54.2.2 Indoor Positioning There are many indoor positioning methods nowadays. Yang et al. [16] proposed a WiFi-based positioning method, which can be applied to scenarios with a large amount of data communication. García et al. [17] proposed a UWB-based indoor positioning system that can handle highly complex indoor environments, although its equipment cost is higher. He et al. [18] pointed out that Wi-Fi fingerprint-based indoor positioning systems are more efficient and save energy. Xu et al. [19] proposed a GPS-based indoor positioning system with high accuracy. Zhou et al. [20] pointed out that, under the latest standards, Bluetooth indoor positioning has great advantages in terms of power consumption and cost.

54.2.3 Data Visualization In recent years, data visualization is an intuitive and effective management tool. Dzemyda et al. [21] pointed out that data visualization can help people understand abstract data and make it easier for people to process data more efficiently. Agrawal et al. [22] pointed out that in big data, traditional static visualization methods cannot efficiently complete data statistics, and data visualization methods need to be further updated. Choo et al. [23] proposed a method based on low-precision computation and iteration-level visualization to realize big data visualization. Wang et al. [24] proposed a data visualization tool with the help of a cloud platform, which has advantages in data storage and analysis. Gong et al. [25] proposed a cloud-based data system called SMASH and pointed out that a cloud-based data visualization system will increase the utilization of data in today’s environment.


54.3 Proposed Methodology Our architecture is divided into four major functional modules (Fig. 54.1): the classroom hardware and teaching equipment management and control system, the security system, the student attendance management system, and the data visualization system. The structure of the architecture is shown in Fig. 54.2. It mainly covers the control of electrical appliances and teaching aids in the classroom: each classroom is connected to a variety of sensors through its built-in main control unit and responds after data analysis. At the same time, in the school equipment control center, each classroom can be uniformly managed and monitored through the visualization platform.

Fig. 54.1 Classroom components

54.3.1 The Control System of Classroom Hardware Equipment and Teaching Equipment

54.3.1.1 Light Control Module

Most traditional classrooms suffer from misuse of lighting, causing serious energy waste. Our module realizes automatic control of classroom lighting and presets four classic scenes:


Fig. 54.2 The structure of the architecture

• Preparation for class: students are not yet ready and are waiting for class.
• Teaching scene: teachers use the blackboard or multimedia equipment for teaching.
• During breaks: teaching equipment is not used.
• Self-study: students use the classroom for self-study.

We monitor the indoor light conditions by configuring sensors in the classroom. The control unit in the classroom sets the light intensity threshold according to the teaching scene; for example, if the measured value is higher than the threshold, the light intensity is reduced by turning off lights. Through research and reference to various standards [26, 27], we obtained the recommended values of light intensity for the various scenarios, as shown in Table 54.1.

Table 54.1 Values of light intensity in different classroom settings

Preset scene        Effective area                            Student area (Horizontal)
                    Vertical                    Horizontal
Prepare for class   –                           300           150*
Teaching            Blackboard: 500             300           300
                    Multimedia whiteboard: –
Breaking            –                           150*          150
Self-study          –                           –             300

Note: "*" indicates a recommended value; "–" indicates no lighting requirement; values without a special symbol are standard values.
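As a rough illustration of how the thresholds in Table 54.1 could drive the light control module, the sketch below maps each preset scene to a single target illuminance and returns a coarse action. Collapsing the table to one setpoint per scene, the dead band, and the action names are simplifying assumptions, not the system's actual firmware.

```python
# Hypothetical threshold controller based on Table 54.1 (student-area values).
TARGETS = {
    "prepare": 300, "teaching": 300, "breaking": 150, "self_study": 300,
}

def light_action(scene: str, measured: float, band: float = 50.0) -> str:
    """Return a coarse control decision from the measured indoor value."""
    target = TARGETS[scene]
    if measured > target + band:
        return "dim_or_turn_off_lights"           # too bright: save energy
    if measured < target - band:
        return "turn_on_lights_or_open_curtains"  # too dark: add light
    return "hold"

print(light_action("teaching", 520))
```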

54.3.1.2 Intelligent Ventilation and Temperature Control Module

Table 54.2 Indoor temperature and humidity thresholds in different seasons

Parameter     Threshold value   Unit   Season
Temperature   22–26             °C     Summer
              16–24             °C     Winter
Humidity      30–45             %      Summer
              30–65             %      Winter

Traditional classrooms have air quality problems due to the high density of people, while suitable temperature, humidity, and air conditions keep teachers and students energetic, which is conducive to teaching. In our module, sensors read the temperature and humidity data inside and outside the classroom and control the windows and air conditioning. For example, when the temperature in the classroom is higher than the set threshold, the control unit turns on the air-conditioning cooling mode; when the humidity in the classroom is higher than the threshold, the windows are closed and the air-conditioning mode is changed. In addition, each classroom is equipped with an infrared remote control so that the equipment can be controlled manually when necessary. Following the standard [28], our module uses the indoor temperature and humidity thresholds shown in Table 54.2.
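The following sketch mirrors the control logic described above using the thresholds of Table 54.2; the action names and the exact decision order are illustrative assumptions rather than the device firmware.

```python
# Seasonal thresholds from Table 54.2.
THRESHOLDS = {"summer": {"temp": (22, 26), "hum": (30, 45)},
              "winter": {"temp": (16, 24), "hum": (30, 65)}}

def climate_actions(season, temp_in, hum_in):
    """Return a list of coarse actions for the AC and windows."""
    lo_t, hi_t = THRESHOLDS[season]["temp"]
    lo_h, hi_h = THRESHOLDS[season]["hum"]
    actions = []
    if temp_in > hi_t:
        actions.append("ac_cooling_on")
    elif temp_in < lo_t:
        actions.append("ac_heating_on")
    if hum_in > hi_h:
        actions.append("close_windows_and_switch_ac_mode")
    elif hum_in < lo_h:
        actions.append("open_windows_for_ventilation")
    return actions or ["hold"]

print(climate_actions("summer", 28.5, 50))
```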

54.3.1.3 Blackboard Adjustment Module

In daily teaching, blackboard writing and PowerPoint are the most important teaching methods. However, because of differences in teachers' heights, the blackboard often cannot be fully utilized. We therefore designed an automatic blackboard height adjustment module: rails are added on both sides of the existing blackboard, and sensors are installed at the top and in the middle of the blackboard, so that the blackboard height can be adjusted dynamically according to the teacher's height. The blackboard is lowered while the teacher is writing and raised during the explanation, so that all students in the classroom can see the full content on the blackboard. After the teacher turns on the function, the height of the blackboard is adjusted automatically according to the teacher's height and the current writing situation, so that the blackboard can be fully utilized.

54.3.1.4 Infrared Manual Remote Control Module

In addition to the automatic control of the system, the manual control mode of the infrared remote control is added to the operation of every piece of equipment in the classroom, which is convenient for teachers to carry out the personalized operation according to their teaching context.

54.3.2 Smart Security Module The security module includes fire prevention and anti-theft detection. The fire prevention function works together with the first module: when a sensor detects an equipment failure, it triggers the alarm device, and the information is uploaded to the platform through the control unit in the classroom to inform the classroom equipment control center. On the other hand, human-body infrared detection and intelligent access control can also be used to reduce safety problems. Each classroom is equipped with a password door lock, and the door lock password is issued dynamically by the platform.

54.3.3 Attendance Management Module The bottom layer of the attendance management system uses a Bluetooth module to sense the location of students. The network communication layer transmits the data to the back-end database for comparison with the list in the educational administration system. The application layer uses a smartphone application that binds each user to a specific smartphone to achieve "one-to-one" sign-in. The characteristics of Bluetooth technology prevent cheating such as signing in on behalf of others. Teachers can directly export the current or historical student sign-in information through the management platform to improve statistical efficiency.
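A minimal sketch of the roster-matching step is shown below: Bluetooth sign-in packets carrying a student ID and a device identifier are compared against the educational administration list to produce present/absent records. The data structures and field names are assumptions for illustration.

```python
from datetime import datetime

# Hypothetical roster from the educational administration system.
ROSTER = {"2021001": "Zhang San", "2021002": "Li Si", "2021003": "Wang Wu"}

def check_in(received, window_open=True):
    """Match (student_id, device_id) sign-in packets against the roster
    and build an attendance record with a timestamp per student."""
    present = {}
    for student_id, device_id in received:
        if window_open and student_id in ROSTER:
            present[student_id] = {
                "name": ROSTER[student_id],
                "device": device_id,
                "time": datetime.now().isoformat(timespec="seconds"),
            }
    absent = sorted(set(ROSTER) - set(present))
    return present, absent

present, absent = check_in([("2021001", "AA:BB:CC:01"), ("2021003", "AA:BB:CC:03")])
print(len(present), absent)
```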


54.3.4 Data Visualization Module This module is located in the background of the entire architecture. It transmits data to the control center inside the classroom through various sensors in the classroom, and uploads the data to the cloud platform through the transmission layer after processing the data in the control center. After the cloud platform completes the storage and processing, the current equipment information of the classroom, as well as the temperature and humidity in the classroom within the most recent period of time, are displayed through the visualization module. School administrators can remotely understand the situation of the equipment in the classroom directly through the visualization module, and can remotely manage the switch of the equipment in the classroom.

54.4 System Implementation The main function of our architecture is to realize the intelligent control of the classroom, which can be divided into five levels: the perception layer, the transport layer, the control layer, the client application, and the cloud application, as shown in Fig. 54.3.

Fig. 54.3 The system architecture


Fig. 54.4 DHT11 (a) HC-SR04 (b)

54.4.1 Perception Layer

54.4.1.1 DHT11 Digital Temperature and Humidity Sensor

The DHT11 temperature and humidity sensor (Fig. 54.4a) is used to detect temperature and humidity conveniently. It mainly contains a resistive humidity measuring element and uses a single-bus interface for communication. Forty bits of data are transmitted during communication; the operating current is 0.5 mA at 5 V, so the power consumption is low and the overall energy consumption of the system is reduced. An 8-bit check byte is already included in the transmitted data, which helps ensure the accuracy of the data. In the simulation experiment, two DHT11 modules are used, one indoors and one outdoors, to detect the temperature and humidity conditions. According to the set temperature and humidity thresholds, the equipment in the classroom, including air conditioners and windows, is adjusted automatically.
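Because the DHT11 frame carries its own check byte, a receiver can validate each 40-bit reading before using it. The sketch below shows the standard check (the last byte equals the low 8 bits of the sum of the first four bytes) and a simple decoding; it is a generic illustration, not the firmware used in this system.

```python
def dht11_checksum_ok(frame: bytes) -> bool:
    """Verify a 40-bit DHT11 frame: humidity int/dec, temperature int/dec,
    then a checksum byte equal to the low 8 bits of the sum of the first four."""
    if len(frame) != 5:
        return False
    return (sum(frame[:4]) & 0xFF) == frame[4]

def dht11_decode(frame: bytes):
    """Return (humidity %, temperature degC) if the checksum matches."""
    if not dht11_checksum_ok(frame):
        raise ValueError("checksum mismatch")
    return frame[0] + frame[1] / 10.0, frame[2] + frame[3] / 10.0

print(dht11_decode(bytes([45, 0, 24, 6, 75])))   # 45+0+24+6 = 75 -> valid frame
```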

54.4.1.2 HC-SR501 Human Body Infrared Sensor Module

The human body infrared sensor module is mainly used to detect whether a human body is present within its range. In our system it is used in the classroom security module and the blackboard height adjustment module. In the security system, if a human body is detected while the classroom has not been unlocked, the detection information is fed back to the control unit inside the classroom and transmitted to the cloud in real time to notify the administrator.

54.4.1.3 HC-SR04 Ultrasonic Ranging Module

The HC-SR04 uses two general piezoelectric ceramic ultrasonic transducers (Fig. 54.4b); its maximum detection distance reaches 450 cm with high accuracy. In this system, the height of the blackboard is adjusted automatically together with the human body infrared sensor module. If the teacher turns on the automatic adjustment function, the main control unit in the classroom adjusts the blackboard to an appropriate height according to the teacher's height, which makes it convenient for the teacher to write on the blackboard or give multimedia explanations. After entering the classroom, the teacher can turn on the function through the control device (remote control) to adjust the blackboard to the most suitable height.
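The HC-SR04 reports distance indirectly through the width of its echo pulse; the sketch below shows the usual conversion, halving the round-trip time and using a temperature-adjusted speed of sound. The temperature correction is an optional refinement, not something specified in this chapter.

```python
def hcsr04_distance_cm(echo_pulse_us: float, temperature_c: float = 20.0) -> float:
    """Convert the HC-SR04 echo pulse width to a one-way distance in cm.
    The sound travels out and back, so the time is halved; the speed of
    sound is adjusted slightly for air temperature."""
    speed_m_per_s = 331.3 + 0.606 * temperature_c
    return (echo_pulse_us * 1e-6) * speed_m_per_s * 100.0 / 2.0

# A 1750 us echo corresponds to roughly 30 cm at 20 degC.
print(round(hcsr04_distance_cm(1750.0), 1))
```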

54.4.1.4 Photoresistor Sensor Module

This system mainly uses a 4-wire photosensitive sensor module to collect light intensity information inside and outside the classroom. In the simulation experiment, the sensor is installed inside and outside respectively, and when the light intensity reaches the threshold, it controls the switch of the lights or curtains in the classroom.

54.4.2 Transport Layer

54.4.2.1 ATK-HC05 Master–Slave Bluetooth Serial Port Module

This module is mainly used for student sign-in. In slave mode, students can sign in by using the mobile APP within range of the teacher's device. During transmission, the student's name, student ID, and smartphone identification number are sent. After the data are uploaded to the cloud, the student ID is compared with the educational administration list to obtain the attendance information.

54.4.2.2 ATK-ESP8266 WIFI Module

The ESP8266 module (Fig. 54.5) serves as the communication interface between the local system and the cloud and supports the TCP/IP protocol. In this system, the STA mode is used to connect to the campus network and synchronize the information in the classroom control unit to the cloud, realizing remote monitoring.

Fig. 54.5 ESP8266

54.4.3 Control Layer In the simulation experiment, STM32F103 is used as the central control unit of the classroom, and the LCD touch screen is configured to manually control the devices in the classroom. In this system, the control layer serves as the core control layer, which connects the perception layer and the Transport layer to automate data processing, connects to the upper computer, and transmits the data to the cloud application and visualization platform.

54.4.4 Client Application The client of this system is developed using Android Studio 4.1.1, based on JDK 15 and an SQLite database, and is mainly used for student attendance check-in. Students need to fill in their name and student ID when opening the client for the first time and do not need to fill them in again afterwards. Within the sign-in period, the student turns on the smartphone's Bluetooth and clicks sign-in to transmit the smartphone's identification code and the student's identity information to the classroom control center. The MCU then uploads the collected sign-in information to the cloud platform for statistics and comparison.

54.4.5 Cloud Application

54.4.5.1 Cloud Data Management

The OneNET cloud platform is used for cloud data management in the simulation experiment (such as Figs. 54.6 and 54.7). OneNET is an open environment based on technologies such as the Internet of Things, cloud computing, and big data created by China Mobile. It provides functions including private network connection, online monitoring, data storage, message distribution, capability output, event warning, data analysis, etc., to ensure network connection and realize the seamless connection with upstream and downstream products. The system mainly uses OneNET’s protocol EDP to connect with the cloud to transmit environmental data collected by sensors and count the number of students. The administrator can use the tools on the platform to draw the data in a certain format into a table chart to analyze the changing trend of various parameters in the classroom.


Fig. 54.6 Temperature record

Fig. 54.7 Humidity record

54.4.5.2 Visualization Application

The visualization application of this project uses OneNET View 2.0, which supports connecting multiple data sources, supports filtering data with code, has a variety of built-in 2D/3D components, and automatically adapts the page to the screen. The OneNET platform provides official documentation, which is convenient for users during development.


The time-based Bluetooth sign-in module can ensure that every counted student has actually arrived in the classroom, so the attendance figures are credible. The adjustment of the equipment inside the classroom, such as the blackboard, can adapt to different teachers and environments, and the thresholds can be set according to current international standards to ensure that the modules operate credibly. The dynamic-password door lock module avoids the security risks caused by lost keys or IC door cards, thereby ensuring the safety of classroom property.

54.5 Conclusion Compared with traditional classrooms, smart classrooms we propose have the advantage of using multiple technologies to improve teaching quality. The smart classroom we propose saves limited teaching time by automatically controlling the equipment in the classroom, allowing students and teachers to focus on their main tasks. Our framework is designed based on the real classroom environment and real teaching process, and the feasibility and reliability of our framework have been proved after implementing it in the real world.


Chapter 55

Research on Image Denoising Method in Spatial Domain by Using MATLAB

Shuying Li and Wei Liu

Abstract In the process of image acquisition, processing, and transmission, an image is usually affected by noise, resulting in varying degrees of distortion. Image denoising is a very important image processing technique that can effectively improve the quality of degraded images and address the loss of image quality caused by noise pollution in practice. This article introduces spatial-domain denoising methods, including the mean filter, the Wiener filter, and the median filter, and analyzes them. Using MATLAB, these spatial-domain image denoising methods are studied and tested, and the results are analyzed.

55.1 Introduction

Images are an important means of transmitting and acquiring information in modern society, because they carry a large amount of information and can be transmitted efficiently over long distances. When observed or measured images are converted into digital images for computer processing, their quality is degraded to different degrees by manual, optical, and technical factors during the conversion; a common example is that the image becomes hazy and contains noise. Therefore, it is very important to remove noise from images. Image processing techniques mainly include image enhancement, restoration, coding, and compression [1]. In general, the image seen by the human eye is a noisy image, because during image formation and transmission the image is influenced by instruments or algorithms. In order to achieve good accuracy and clarity and to improve image quality, it is necessary to remove noise from existing images before further processing. This is not only a prerequisite for all image processing, but also extremely important in its own right. Therefore, in practice, noise in images is reduced or removed according to people's needs in order to highlight the important and useful information they contain.


The noise contained in images is generally salt-and-pepper noise, Poisson noise, or Gaussian noise. In practical applications, many ways to remove image noise have been developed based on the characteristics of image noise and its spectrum; they are mainly divided into frequency-domain denoising and spatial-domain denoising. Spatial-domain denoising methods appeared earlier and mainly include median filtering, mean filtering, and Wiener filtering [2]. Image denoising is one of the most important image processing tasks; its purpose is to remove noise and improve image quality in order to lay a foundation for subsequent digital image processing [3], so research on it continues.

55.2 Noise Model and Principle of Spatial Domain Denoising

Image noise refers to signals other than the useful signal in an image, i.e., signals that degrade image quality. Three types of noise are generally contained in an image: salt-and-pepper noise, Poisson noise, and Gaussian noise.

55.2.1 Models of Noise

Salt-and-pepper noise: Salt-and-pepper noise, also known as impulse noise, is frequently seen in digital images and consists of pepper noise and salt noise. Black pixels in the image are pepper noise and white pixels are salt noise, resembling scattered pepper and salt particles [4]; randomly generated black and white pixels in an image constitute salt-and-pepper noise.
Poisson noise: Poisson noise is also known as shot noise, and its noise model follows a Poisson distribution. The number of occurrences of a random event per unit of time can be described by a Poisson distribution, for example the number of service requests received by a facility, the number of passengers waiting at a bus stop, the number of mechanical failures, the number of natural disasters, or the number of calls received at a telephone exchange over a period of time.
Gaussian noise: The probability density function of Gaussian noise obeys a normal (Gaussian) distribution, and it is a random noise. Gaussian noise is characterized by its mean and covariance functions over a given time period; if the noise is stationary, the mean is time-independent. A large number of independent pulses produce Gaussian noise, so the value of each pulse is negligible compared with the sum of the values of all the pulses over a finite time interval [5]. Gaussian noise is caused by the interaction between the various parts of the circuit, and by the high temperature and low illumination of the image sensor after a long period of operation [6].
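To make the three noise models concrete, the short Python sketch below adds salt-and-pepper, Poisson, and Gaussian noise to a standard test image with scikit-image; it is only an assumed counterpart of the MATLAB noise-adding step used later in the chapter's experiments.

```python
import numpy as np
from skimage import data, img_as_float
from skimage.util import random_noise

# Built-in grayscale test image scaled to [0, 1].
image = img_as_float(data.camera())

# Salt-and-pepper (impulse) noise: 5% of the pixels become black or white.
noisy_sp = random_noise(image, mode="s&p", amount=0.05)

# Poisson (shot) noise: the noise level depends on the pixel intensities.
noisy_poisson = random_noise(image, mode="poisson")

# Gaussian noise: zero-mean normal noise with a chosen variance.
noisy_gauss = random_noise(image, mode="gaussian", mean=0.0, var=0.01)

for name, noisy in [("salt & pepper", noisy_sp),
                    ("poisson", noisy_poisson),
                    ("gaussian", noisy_gauss)]:
    print(f"{name:>14}: mean absolute deviation = "
          f"{np.mean(np.abs(noisy - image)):.4f}")
```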

55.2.2 Principles of Spatial Domain Denoising

Spatial domain denoising is achieved by spatial filtering, a method that performs a series of operations on pixel neighborhoods in the spatial domain to achieve the desired effect. Spatial filtering can be implemented by specific algorithms, and operating directly on the pixel values of an image is the defining characteristic of spatial filtering algorithms. In the spatial domain, a template is first selected; its shape and size are determined by the nature of the image. Usually the chosen shapes are rectangles, squares, or crosses. The shape and size of the template generally remain unchanged during processing but can be adapted to the characteristics of the image, and templates of odd size are generally used. The image is then denoised according to a specific algorithm.

55.3 Three Methods of Spatial Domain Denoising Based on MATLAB

A good noise smoothing method should satisfy two conditions [7]: (1) the noise in the image is effectively eliminated; (2) the edge information and lines of the image are not blurred. Median filtering, mean filtering, and Wiener filtering are three commonly used methods for removing image noise in the spatial domain.

55.3.1 Median Filter

Median filtering is a nonlinear filtering algorithm that effectively reduces noise based on order statistics. Its basic principle is to replace the value of each pixel with the median of all the pixels in a neighborhood around that point, so that the surrounding pixels are brought close to the true value and isolated noise points are eliminated [8]. Median filtering not only removes image noise effectively but also protects the edges of the image from damage, and the algorithm is relatively simple, so it is widely used in image denoising. However, median filtering is not suitable for images containing many fine dots and lines, because it can destroy such details while smoothing the noise [9].
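A minimal Python sketch of median filtering is given below as an assumed counterpart of the MATLAB median filter used in the chapter; the `footprint` argument also illustrates the template idea of Sect. 55.2.2, where a square or cross-shaped neighborhood of odd size is chosen.

```python
import numpy as np
from scipy import ndimage


def median_denoise(noisy, size=3, cross=False):
    """Replace each pixel by the median of its neighborhood template."""
    if cross:
        # Cross-shaped template: centre row and column of a 3x3 window.
        footprint = np.array([[0, 1, 0],
                              [1, 1, 1],
                              [0, 1, 0]], dtype=bool)
        return ndimage.median_filter(noisy, footprint=footprint)
    # Square template of odd size (3x3 by default).
    return ndimage.median_filter(noisy, size=size)
```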


55.3.2 Mean Value Filtering

Mean value filtering is a typical linear filtering algorithm that uses neighborhood averaging [10]. Its basic principle is to replace the value of each pixel with the average of all pixels in a neighborhood around that point; after neighborhood averaging, the noise intensity is reduced and the noise is effectively suppressed. Let $S_{xy}$ denote a filter window centered at $(x, y)$ with size $m \times n$. The arithmetic mean filter simply calculates the average of the pixels in the window area and assigns it to the pixel at the center of the window:

$$f(x, y) = \frac{1}{mn} \sum_{(s, t) \in S_{xy}} g(s, t)$$

where $g(s, t)$ is the original image and $f(x, y)$ is the image obtained after mean filtering.
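The arithmetic mean filter defined above can be written directly; the sketch below uses scipy.ndimage.uniform_filter as an assumed stand-in for MATLAB's averaging filter and checks the window-average formula at one interior pixel.

```python
import numpy as np
from scipy import ndimage


def mean_filter(g, m=3, n=3):
    """Arithmetic mean filter: f(x, y) = (1/mn) * sum of g(s, t) over S_xy."""
    return ndimage.uniform_filter(g.astype(float), size=(m, n), mode="reflect")


# Check the formula at one interior pixel of a small test image.
g = np.arange(25, dtype=float).reshape(5, 5)
window = g[1:4, 1:4]                       # 3x3 neighborhood S_xy of pixel (2, 2)
assert np.isclose(mean_filter(g)[2, 2], window.mean())
```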

55.3.3 Wiener Filter

Wiener filtering is a linear filtering method that extracts the desired useful signal from noise. It removes noise by adaptive denoising of the image, mainly by estimating the local mean of the image pixels. Wiener filtering is one of the earliest and most commonly used methods in signal processing.
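For completeness, a hedged Python counterpart of the adaptive Wiener filter is sketched below with scipy.signal.wiener, which estimates the local mean and variance in a window of the given size and attenuates the signal toward the local mean where the local variance is small.

```python
from scipy import signal


def wiener_denoise(noisy, window=3):
    """Adaptive Wiener filtering based on local mean and variance estimates."""
    # mysize sets the neighborhood used to estimate local statistics;
    # the noise power is estimated from the data when not given explicitly.
    return signal.wiener(noisy.astype(float), mysize=window)
```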

55.4 Experimental Procedures, Results and Analysis

Salt-and-pepper, Poisson, and Gaussian noise are added to the test image respectively; then the median, mean, and Wiener filters are applied to each of the noisy images. Templates of different sizes are also selected to observe the denoising effect, and finally the experimental results are analyzed to draw the corresponding conclusions.

55.4.1 Experiment 1: Median, Mean, and Wiener Filtering of Salt-and-Pepper Noise

The results of the experiment are shown in Fig. 55.1.

Fig. 55.1 Median, mean, and Wiener filtering of salt-and-pepper noise (panels: source image; image with salt-and-pepper noise; median filter image; mean filter image; image after Wiener filtering)

Analysis of experimental results: compared with the other two filtering methods, the median filter effectively removes salt-and-pepper noise while protecting the edge information of the image. Wiener filtering, by contrast, can barely remove salt-and-pepper noise. Therefore, the median filter is more effective in suppressing salt-and-pepper noise.

55.4.2 Experiment 2: Median, Mean, and Wiener Filtering of Poisson Noise

The results of the experiment are shown in Fig. 55.2. Analysis of experimental results: in terms of visual effect, Wiener filtering eliminates the effects of Poisson noise better than the other two filtering methods. Therefore, Wiener filtering is more effective in suppressing Poisson noise.

Fig. 55.2 Median, mean, and Wiener filtering of Poisson noise

55.5 Conclusion

The above experiments show that the median filter can remove the salt-and-pepper noise contained in an image while protecting the edge information from being destroyed; the mean filter can effectively remove Gaussian noise; and the Wiener filter can effectively remove both Gaussian and Poisson noise, with the disadvantage that it tends to damage image edges and lose detailed information. If an image contains different kinds of noise, the three methods can be combined. All three spatial-domain filtering methods achieve some noise reduction, but each has shortcomings: no matter which filter is used, the noise cannot be completely removed, and the image is blurred during denoising, so the detailed information of the image is not fully preserved. Templates of different sizes also give different denoising effects: for the same noise, the larger the template, the stronger the denoising, but the more the image is blurred and the more detail is destroyed [11]. These findings are of some significance for selecting an appropriate filtering method to remove the noise contained in images. How to protect the detailed information of an image while denoising it is still a difficult problem that needs further study.


References
1. Doërr, G., Dugelay, J.L.: A guide tour of video watermarking. Signal Process. Image Commun. 18(4), 263–282 (2003)
2. Ding, Y., Li, Z., Zhang, S.: MATLAB-based typical denoising algorithm for digital images. J. Sci. Teach. 30(06), 10–13 (2010)
3. Gonzalez, R.C., Woods, R.E.: Digital Image Processing, 3rd edn. Prentice-Hall, Inc. (2007)
4. Xiaojian, H., Fuming, W.: Study of the digital image denoising algorithm based on 8 directions. Electron. Test (9), 25–29, 80 (2010)
5. Xie, Q.L.: Adaptive Gaussian smoothing filter for image denoising. Jisuanji Gongcheng yu Yingyong 45(16), 182–184 (2009)
6. Wang, J.F., Wang, S.X.: Comparison of image denoising techniques based on MATLAB software. J. Gansu Agric. Univ. 4, 163–166 (2011)
7. Gonzalez, R.C., Woods, R.E., Eddins, S.L.: Digital Image Processing Using MATLAB. Prentice Hall Press (2007)
8. Kasparis, T., Tzannes, N.S., Chen, Q.: Detail-preserving adaptive conditional median filters. J. Electron. Imaging 1(4), 358–364 (1992)
9. Feng, D.C., Yang, Z.X., Qiao, X.J.: The application of wavelet neural network with orthonormal bases in digital image denoising. In: International Symposium on Neural Networks, pp. 539–544. Springer, Berlin (2006)
10. Ni, J.: Study on smooth processing methods of digital image 8(1), 183–185 (2009)
11. Juneja, M., Sandhu, P.S.: Design and development of an improved adaptive median filtering method for impulse noise detection. Int. J. Comput. Electr. Eng. 1(5), 627–630 (2009)

Chapter 56

An Improved Low-Light Image Enhancement Algorithm Based on Deep Learning

Wen Chen and Chao Hu

Abstract At present, most low-light image enhancement algorithms rely on hand-designed priors and constraints, which cannot accurately capture the deep structural features of images. In this paper, we propose an improved algorithm with EnlightenGAN as the base framework, motivated by the observation that the distance measured by the Mean Square Error (MSE) often differs greatly from human intuition. By introducing Structural Similarity (SSIM) into the loss function, we improve the structural similarity of the enhanced images and make them more natural and consistent with human perception. To address the instability of Generative Adversarial Network (GAN) training, the relatively loose and easy-to-compute Energy-Based GAN (EBGAN) is used instead of the Wasserstein GAN (WGAN). The improved algorithm is tested on the Low-light Image Enhancement (LIME) dataset as well as the test dataset of EnlightenGAN. The Peak Signal-to-Noise Ratio (PSNR) and SSIM values of the images processed by the modified algorithm are calculated and compared with other algorithms, and the results show that the proposed method is effective.

56.1 Introduction

56.1.1 Reasons for Research

Eighty percent of the information that humans receive from the outside world comes from vision [1]. Computer vision is a branch of artificial intelligence that focuses on how to use machines to "see" the world.


High-quality data is a prerequisite for computer vision techniques such as image analysis and processing. However, real application scenarios are affected by the surrounding environment, and the images captured by imaging equipment often have low quality; the most representative case is low-light images. The basic principle of camera imaging is the transformation of light signals into electrical signals. In low-light environments, the light reaching the sensor is insufficient, and the acquired images generally suffer from low brightness, low contrast, color distortion, and loss of detail.

56.1.2 Traditional and Modern Methods

The existing classical low-illumination image enhancement algorithms can be broadly classified into histogram equalization-based methods [2], wavelet transform-based methods [3], and Retinex-based methods [4]. Until the advent of AlexNet [5], images were processed mainly with traditional "computational" algorithms, meaning that the image matrix was handled chiefly through explicit computation. Today, deep learning in computer vision usually refers to deep convolutional neural networks (DCNN), which have gradually replaced traditional image processing algorithms [6]. Deep learning for image processing emphasizes the word "learning": the machine uses trained models to judge the information in an image instead of merely computing on it. The related algorithms fall into three main categories: convolutional neural networks, commonly used to analyze and process image data; recurrent neural networks, commonly used for text analysis and natural language processing; and generative adversarial networks, commonly used for data generation and unsupervised learning.

56.1.3 Research Content

The method proposed in this paper builds on the recently introduced EnlightenGAN [7] and Energy-Based GAN [8]. Both methods have unique value due to their excellent characteristics. Here, we explore the room for improvement in the loss function of the EnlightenGAN algorithm and combine this algorithm with the Energy-Based GAN.


56.2 Basic Network for Improvement

56.2.1 EnlightenGAN

Artificial intelligence has been in an explosive period in recent years, and more and more unexpected applications are being proposed; the same form of data can be analyzed with different methods to solve different problems. Deep learning methods for image restoration and enhancement rely heavily on datasets containing paired low-light and normal-light images. Such datasets are generally acquired by fixing the camera and then reducing the exposure time under normal lighting, or extending the exposure time under low light. When these requirements cannot be met, for example when the camera must stay fixed and the scene cannot be held still, it is difficult to obtain paired images with different lighting, and the resulting images may deviate from the true mapping between natural low-light and normally lit images. Especially under spatially varying lighting, simply increasing or decreasing the exposure time may produce locally overexposed or underexposed artifacts. EnlightenGAN uses an unpaired training strategy to eliminate the dependence on paired training data and allows training on a wider variety of images from different domains. The algorithm also avoids the overfitting to specific data-generation protocols or imaging devices seen in previous methods and can therefore be used in a wide range of scenarios. EnlightenGAN achieves its significant performance gains mainly through the following:
• A global–local discriminator structure for handling spatially varying lighting conditions in the input image.
• The idea of self-regularization, realized through a self feature-preserving loss and a self-regularized attention mechanism. This self-regularization plays a key role in the success of the model, since no strong external supervision is available when the dataset consists of unpaired images.

56.2.2 WGAN

A common observation when training GANs is that the discriminator must not be trained too well, otherwise the performance of the generator can hardly improve. Since the original GAN uses the Jensen–Shannon divergence to measure the difference between pdata and pG, the distance between them is always log 2 as long as the two distributions do not overlap, which prevents the generator from being updated effectively.


Regarding the theoretical analysis of this instability, Martin Arjovsky et al. rigorously showed, from the perspective of distributions supported on low-dimensional manifolds, that the instability of network training is mainly caused by the vanishing generator gradient induced by the JS divergence between the generated distribution and the target distribution, and then proposed WGAN [9].

56.3 The Proposed Improved Method

56.3.1 Structural Similarity Loss

In image reconstruction and image compression, many measures exist for the difference between an output image and the original image, the most common being the mean squared error (MSE) loss, which is the expected value of the squared difference between the estimated and true values. The MSE loss [10] is used in EnlightenGAN; a smaller MSE indicates that the prediction describes the data more accurately. However, traditional MSE-based losses are insufficient to convey the human visual system's perception of a picture. For example, two images may differ only in brightness yet have a large MSE, while a very blurry picture and a truly sharp picture may have a relatively small MSE. Zhou Wang et al., drawing on neuroscience research, argued that humans judge the disparity between two images by their structural similarity rather than by pixel-by-pixel differences [11], and from this they proposed a metric based on structural similarity. The human eye is insensitive to absolute illumination but sensitive to local changes in illumination, insensitive to absolute grayscale but sensitive to relative changes in grayscale (contrast), and sensitive to local structure. SSIM is a perception-based model that accounts for these perceptual characteristics of structural information, and it is calculated as follows:

$$\mathrm{SSIM}(x, y) = \frac{(2\mu_x \mu_y + C_1)(2\sigma_{xy} + C_2)}{(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)}$$

For two images of the same size, the SSIM is at most 1, with 1 indicating that the images are identical, where $\mu_x$, $\mu_y$ are the mean pixel values of the two images, $\sigma_x^2$, $\sigma_y^2$ are their variances, and $\sigma_{xy}$ is their covariance; $C_1$ and $C_2$ are constants that prevent the denominator from being close to zero. SSIM is in fact the product of three comparison terms: an illumination comparison, a contrast comparison, and a structure comparison.


When SSIM is used as a loss function, the loss is

$$L_{\mathrm{SSIM}}(p) = 1 - \mathrm{SSIM}(p)$$
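To make the two equations above concrete, the sketch below computes a simplified single-window SSIM and the corresponding loss 1 − SSIM in plain numpy; the standard metric averages the same expression over local windows, and the constants C1 = (0.01L)² and C2 = (0.03L)² for dynamic range L follow the usual convention rather than values stated in this chapter.

```python
import numpy as np


def global_ssim(x, y, data_range=1.0):
    """Single-window SSIM following the formula in the text."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))


def ssim_loss(x, y, data_range=1.0):
    """L_SSIM(p) = 1 - SSIM(p); identical images give a loss of 0."""
    return 1.0 - global_ssim(x, y, data_range)


img = np.random.rand(64, 64)
print(ssim_loss(img, img))         # 0.0 for identical inputs
print(ssim_loss(img, img * 0.5))   # larger loss for a darker, lower-contrast copy
```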

56.3.2 Energy-Based GAN

WGAN can be pictured as a bulldozer, with the two distributions pdata and pG regarded as two piles of soil. The Wasserstein metric is the minimum average distance the bulldozer must move the soil so that the two piles coincide. Since there are many possible moving plans, all of them must in principle be traversed to find the one with the smallest average moving distance, which makes the Wasserstein metric often difficult to compute. EBGAN instead modifies the discriminator so that it no longer judges which distribution an input image comes from, but rather measures how well the image can be reconstructed. Given a positive margin m, a data sample x, and a generated sample G(z), the discriminator loss $L_D$ and the generator loss $L_G$ are

$$L_D(x, z) = D(x) + [m - D(G(z))]^+$$
$$L_G(z) = D(G(z))$$

where $[\cdot]^+ = \max(0, \cdot)$. Minimizing $L_G$ with respect to the parameters of G is similar to maximizing the second term of $L_D$; it has the same minimum but non-zero gradients when $D(G(z)) \ge m$.
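A direct transcription of the two EBGAN losses with the hinge [·]+ = max(0, ·) might look as follows, where d_real and d_fake stand for the discriminator's energy (reconstruction error) on a real and a generated sample and margin is m.

```python
def ebgan_discriminator_loss(d_real, d_fake, margin):
    """L_D(x, z) = D(x) + [m - D(G(z))]^+ with the hinge [t]^+ = max(0, t)."""
    return d_real + max(0.0, margin - d_fake)


def ebgan_generator_loss(d_fake):
    """L_G(z) = D(G(z)): the generator tries to lower the energy of its samples."""
    return d_fake


# A real sample that reconstructs well (low energy) and a fake one that does not.
print(ebgan_discriminator_loss(d_real=0.1, d_fake=0.6, margin=1.0))  # 0.1 + 0.4 = 0.5
print(ebgan_generator_loss(d_fake=0.6))                              # 0.6
```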

56.4 Experiments

56.4.1 Test Data

In order to test different scenes and different types of images, a total of four images from the LIME [12] dataset and the test dataset of EnlightenGAN are used for testing and comparison. As shown in Fig. 56.1, images 102 and 624 (top) are from the LIME dataset, and low10426 and low10459 (bottom) are from the test dataset of EnlightenGAN.


Fig. 56.1 The original images used for testing. 102 and 624 are from the LIME dataset, and low10426 and low10459 are from EnlightenGAN

56.4.2 Experimental Results

In this paper, Retinex, MBLLEN [13], LIME, EnlightenGAN, and the improved algorithm are used to enhance the test data. The experimental results are shown in Figs. 56.2, 56.3, 56.4, 56.5 and 56.6. Peak Signal-to-Noise Ratio (PSNR) [14] is an objective criterion for evaluating images: after processing such as compression, the output image differs from the original to some degree, and the PSNR value is commonly used to measure whether a particular processing procedure is satisfactory. A larger PSNR indicates that the two images are more similar. The PSNR values of the results of these methods with respect to the original images are shown in Table 56.1. Similarly, we calculate the SSIM between each processed image and the original image; the closer the SSIM is to 1, the better the result. The results are shown in Table 56.2.
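The PSNR values reported in Table 56.1 follow the standard definition PSNR = 10 log10(MAX²/MSE); a small helper such as the one below (with an assumed 8-bit peak value) is enough to reproduce the computation.

```python
import numpy as np


def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images of equal size."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    if mse == 0:
        return float("inf")          # identical images
    return 10.0 * np.log10(peak ** 2 / mse)


a = np.random.randint(0, 256, (64, 64))
print(psnr(a, np.clip(a + 5, 0, 255)))   # small perturbation -> high PSNR
```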


Fig. 56.2 The effect of Retinex

Fig. 56.3 The effect of MBLLEN


Fig. 56.4 The effect of LIME

Fig. 56.5 The effect of EnlightenGAN


Fig. 56.6 The effect of ours

Table 56.1 PSNR values of the four images after enhancement using different algorithms

                102     624     low10426   low10459
Retinex         6.01    6.42    7.36       9.96
MBLLEN          9.01    9.03    11.30      18.01
LIME            9.31    7.97    10.37      10.50
EnlightenGAN    12.29   8.42    11.42      8.70
Ours            12.44   9.38    12.51      8.99

Table 56.2 SSIM values after enhancement of four images using different algorithms

                102     624     low10426   low10459
Retinex         0.655   0.592   0.599      0.734
MBLLEN          0.959   0.929   0.938      0.940
LIME            0.560   0.683   0.747      0.842
EnlightenGAN    0.928   0.923   0.851      0.952
Ours            0.966   0.943   0.896      0.941


56.4.3 Conclusion and Outlook

From the images and tables above, it is easy to see that Retinex gives the worst results. MBLLEN performs well on the evaluation data, but like Retinex it tends to overexpose and sometimes fails to restore images well under normal light. LIME shows a slight lack of detail in dark areas. Compared with the original EnlightenGAN, our method achieves better noise suppression and better reproduction of color and light. We replaced one loss function in the network and combined the two networks, and the processed images and the calculated metrics show that a modest improvement was achieved. We hope that more inspired innovations or better ways to solve such problems will appear in the future.

References
1. Kaur, M., Kaur, J., Kaur, J.: Survey of contrast enhancement techniques based on histogram equalization. Int. J. Adv. Comput. Sci. Appl. 2(7) (2011)
2. Heil, C.E., Walnut, D.F.: Continuous and discrete wavelet transforms. SIAM Rev. 31(4), 628–666 (1989)
3. Land, E.H.: The retinex. Am. Sci. 52(2), 247–264 (1964)
4. Rahman, Z., Jobson, D.J., Woodell, G.A.: Multi-scale retinex for color image enhancement. In: Proceedings of 3rd IEEE International Conference on Image Processing, vol. 3, pp. 1003–1006. IEEE (1996)
5. Krizhevsky, A., Sutskever, I., Hinton, G.E.: ImageNet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 25, 1097–1105 (2012)
6. Xu, L., Ren, J.S., Liu, C., et al.: Deep convolutional neural network for image deconvolution. Adv. Neural Inf. Process. Syst. 27, 1790–1798 (2014)
7. Jiang, Y., Gong, X., Liu, D., et al.: EnlightenGAN: deep light enhancement without paired supervision. IEEE Trans. Image Process. 30, 2340–2349 (2021)
8. Zhao, J., Mathieu, M., LeCun, Y.: Energy-based generative adversarial network. arXiv:1609.03126 (2016)
9. Arjovsky, M., Chintala, S., Bottou, L.: Wasserstein generative adversarial networks. In: International Conference on Machine Learning, pp. 214–223. PMLR (2017)
10. Allen, D.M.: Mean square error of prediction as a criterion for selecting variables. Technometrics 13(3), 469–475 (1971)
11. Wang, Z., Simoncelli, E.P., Bovik, A.C.: Multiscale structural similarity for image quality assessment. In: The Thirty-Seventh Asilomar Conference on Signals, Systems & Computers, vol. 2, pp. 1398–1402. IEEE (2003)
12. Guo, X., Li, Y., Ling, H.: LIME: low-light image enhancement via illumination map estimation. IEEE Trans. Image Process. 26(2), 982–993 (2016)
13. Lv, F., Lu, F., Wu, J., et al.: MBLLEN: low-light image/video enhancement using CNNs. In: BMVC, p. 220 (2018)
14. Huynh-Thu, Q., Ghanbari, M.: Scope of validity of PSNR in image/video quality assessment. Electron. Lett. 44(13), 800–801 (2008)

Chapter 57

Image Recognition of Wind Turbines Blade Surface Defects Based on Mask-RCNN

Dong Wang, Yanfeng Zhang, and Xiyun Yang

Abstract Wind turbine blade failure reduces power generation efficiency and increases operating costs, and serious faults can lead to production accidents. In this paper, a blade fault diagnosis method based on a deep learning algorithm is proposed. The Mask-rcnn model is used to identify the defects in blade images, and the method is verified on pictures taken by unmanned aerial vehicle (UAV) blade inspections. Good recognition results are obtained: most of the blade defects are identified and misidentification is rare. The method can provide reliable information support for improving wind farm power generation efficiency and for daily operation and maintenance.

57.1 Introduction

In recent years, with the tremendous progress in wind power technology, more and more wind turbines have been installed and put into use. Because wind turbines operate under long-term alternating loads, they suffer damage of different severities [1]. As the key components that capture energy, blades are among the most vulnerable parts of a wind turbine, and blade damage accounts for about 7% of all wind turbine failures [2, 3]. The consequences of blade damage include the economic loss of repairing or replacing blades, the reduction of power generation efficiency and the impact of unplanned shutdowns on generation dispatching, and even fires that threaten safety and the environment [4–6]. Therefore, efficient and reliable monitoring of wind turbine blades is necessary.


Most wind turbine blade condition monitoring solutions are based on sensors, such as vibration [7], acoustic emission [8], strain [9], and ultrasonic [10] sensors. In practical applications, however, these solutions must consider the following issues: (1) considerable manpower and resources are required to install sensors on in-service wind turbines; (2) sensor reliability and maintenance complexity; (3) the reliability of signal transmission. With the wide application of SCADA systems in wind farms, some scholars monitor the health status of blades by analyzing SCADA data. Reference [11] proposed training a deep neural network on the SCADA data of a wind turbine to detect icing on the blade surface, and the effectiveness of the method was verified on actual operating cases. To improve the efficiency of wind farm operation and maintenance, unmanned aerial vehicles (UAV) have been applied to detect surface defects on the blades of in-service wind turbines. Reference [12] proposed a method for the automatic detection of blade surface cracks from UAV images using a continuous sliding-window approach. In wind farm applications, however, cracks are only one type of blade fault, and accurately detecting the more common blade defects remains challenging. Deep learning has been successfully applied in areas such as product defect recognition and medical imaging diagnosis: in Reference [13], a model trained with a deep learning algorithm was successfully applied to photoacoustic tomography of breast cancer, and Reference [14] realized fault diagnosis on time-domain images of gearbox vibration signals based on a transfer learning algorithm. Compared with these applications, blade defects are usually subtle and their features are diverse. The Mask-rcnn deep learning model has a good recognition effect for small targets; in Reference [15], Mask-rcnn was applied to target recognition for home service robots with good results. For the problem of identifying defects in wind turbine blades, this paper proposes a deep learning recognition model of blade surface defects based on Mask-rcnn. Simulation experiments using images of wind turbine blades inspected by UAV verify that the proposed model can recognize blade defects well.

57.2 Blade Surface Defect Recognition Model Based on Mask-Rcnn

Mask-rcnn is a classic target recognition and detection network framework whose excellent performance has been verified in many application fields. Mask-rcnn is an improved network model based on Faster-rcnn; it includes three output branches to achieve target classification, target frame coordinate determination, and target object mask determination. Its network structure and algorithm flow are shown in Fig. 57.1.

Fig. 57.1 Block diagram of blade surface defect recognition model based on Mask-rcnn

The core structure and process of the Mask-rcnn network include the region proposal network (RPN), the feature pyramid network (FPN), the ROIAlign operation, and the multi-task loss.

57.2.1 Region Proposal Network

The RPN traverses the feature map with sliding windows of different sizes to obtain candidate rectangular regions; its operating mode is shown in Fig. 57.2. The RPN is a multi-task model based on a convolutional neural network whose outputs include a softmax binary classification and a rectangular-box regression. The softmax classification result is used to determine whether a candidate rectangular region belongs to the foreground, while the rectangular-box regression gives the center-point coordinates and the size (width, height) of the candidate rectangular region. The loss function of the RPN is a weighted combination of the softmax classification loss and the rectangular-box regression loss.

Fig. 57.2 RPN operation mode


Fig. 57.3 Basic FPN structure

57.2.2 Feature Pyramid Network

In order to detect small target objects more accurately, the idea of FPN is introduced into Mask-rcnn. The FPN structure includes three parts: a bottom-up pathway, a top-down pathway, and lateral connections. By fusing the feature maps of each layer, the loss of small and medium target features in the intermediate layers is avoided and the expressive ability of the feature information is enhanced. The concrete structure of the FPN is shown in Fig. 57.3.

57.2.3 ROIAlign Operation

The role of the ROIAlign operation is to pool regions of interest of different sizes into feature maps of a fixed, uniform size. In the RoIPooling operation, integer quantization errors make the pixel positions of the feature map deviate. The ROIAlign operation removes the rounding and quantization when generating the feature map, introduces bilinear interpolation, and retains the floating-point coordinates, thereby reducing the quantization error so that the pixels of the generated feature map correspond to the pixels of the original image.


The bilinear interpolation is shown in Fig. 57.4.

Fig. 57.4 ROIAlign bilinear interpolation

Suppose f(x, y) is the pixel value of the point P to be interpolated, A11, A12, A21, A22 are the known pixels, B1 = (x, y1) and B2 = (x, y2) are the linear interpolations in the X direction, and P is obtained by linear interpolation in the Y direction:

$$f(B_1) \approx \frac{x_2 - x}{x_2 - x_1} f(A_{11}) + \frac{x - x_1}{x_2 - x_1} f(A_{21}) \qquad (57.1)$$

$$f(B_2) \approx \frac{x_2 - x}{x_2 - x_1} f(A_{12}) + \frac{x - x_1}{x_2 - x_1} f(A_{22}) \qquad (57.2)$$

$$f(P) = f(x, y) \approx \frac{y_2 - y}{y_2 - y_1} f(B_1) + \frac{y - y_1}{y_2 - y_1} f(B_2) \qquad (57.3)$$
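The three interpolation steps can be collected into a small helper; the function below follows Eqs. (57.1)–(57.3) literally for a unit-spaced grid and is only a sketch of the sampling step inside ROIAlign, not the full layer.

```python
import math


def bilinear_interpolate(feature, x, y):
    """Sample a 2-D feature map (indexed [row, col] = [y, x]) at the
    floating-point location (x, y) following Eqs. (57.1)-(57.3)."""
    x1, y1 = math.floor(x), math.floor(y)
    x2, y2 = x1 + 1, y1 + 1

    a11, a21 = feature[y1][x1], feature[y1][x2]   # neighbours on row y1
    a12, a22 = feature[y2][x1], feature[y2][x2]   # neighbours on row y2

    b1 = (x2 - x) * a11 + (x - x1) * a21          # Eq. (57.1), unit spacing
    b2 = (x2 - x) * a12 + (x - x1) * a22          # Eq. (57.2)
    return (y2 - y) * b1 + (y - y1) * b2          # Eq. (57.3)


grid = [[0.0, 1.0],
        [2.0, 3.0]]
print(bilinear_interpolate(grid, 0.5, 0.5))       # 1.5, the centre of the 2x2 patch
```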


57.2.4 Multi-task Loss

In this paper, the blade surface defect recognition model based on Mask-rcnn completes three tasks: defect frame detection and positioning, defect/background classification, and defect/background segmentation. Therefore, the loss function of the proposed model includes the target frame positioning loss, the classification loss, and the segmentation loss:

$$L = L_{box} + L_{cls} + L_{mask} \qquad (57.4)$$

where $L_{box}$ is the target frame positioning loss, $L_{cls}$ is the classification loss, and $L_{mask}$ is the segmentation loss.

57.3 Overall Process of Blade Surface Defect Recognition Based on Mask-Rcnn

Blade surface defect recognition is divided into model pre-training, model iteration, and model testing, as shown in Fig. 57.5. First, the COCO dataset is used to pre-train the defect recognition model to enhance its generalization ability; then the defect training samples are used for iterative training, which finally yields the blade defect recognition model. The detailed steps of model training and testing are as follows:
(1) Prepare the training and test image data of wind turbine blades and label the images with the LabelMe tool to generate the corresponding Mask labels;
(2) Pre-train the model using the COCO dataset;
(3) Feed the training data and the corresponding Mask labels into the pre-trained model for iterative training, and generate the feature maps of the training blade images through the backbone network (ResNet101);
(4) Set several ROIs for each pixel of the feature map generated in step (3) to obtain multiple candidate ROIs;
(5) Input the candidate ROIs generated in step (4) into the RPN for softmax binary classification and rectangular-box regression, and select some candidate ROIs for the ROIAlign operation;
(6) Perform defect classification (crack, spalling, corrosion), defect-region rectangle regression, and defect-region Mask generation for the candidate ROIs selected in step (5);
(7) Repeat steps (5) and (6) until all training image samples are processed to obtain the optimal blade defect recognition model;
(8) Input the test blade images into the trained optimal model to obtain the blade defect recognition results.

Fig. 57.5 Overall process of blade surface defect recognition


Table 57.1 Environmental configuration

Operating system            Windows 10
CPU                         Xeon E5-2680 v2 @ 2.80 GHz
RAM                         32.0 GB
GPU                         NVIDIA GeForce RTX 2070
Deep learning environment   Keras
Programming language        Python 3.6

57.4 Simulation Case

57.4.1 Environmental Configuration and Deep Learning Framework

Popular deep learning frameworks include TensorFlow, Theano, Caffe, Keras, CNTK, etc. Keras is a highly modular neural network library that aims to let users run prototype experiments as quickly as possible and to shorten the path from idea to result. Therefore, Keras running on TensorFlow under Python is selected in this article. The environment configuration of this case simulation is shown in Table 57.1.

57.4.2 Blade Image Data Set Construction and Model Pre-training

A training set and a test set are built from UAV images of a wind farm in China to train and test the proposed model. There are 176 blade images in the training set (100 without defects and 76 with defects) and 44 blade images in the test set (22 without defects and 22 with defects). The available blade inspection images are few, so the training samples are limited, while the recognition performance of a deep learning model usually requires sufficient training samples and a large amount of training time. In this paper, the public COCO2014 dataset, which contains 9000 images with complex backgrounds and many targets, is used to pre-train the model. The blade defect recognition model therefore takes the network trained on the COCO2014 dataset as the pre-trained model via transfer learning, and on this basis the blade image samples are used for training to obtain the final blade defect recognition model. The transfer learning strategy reduces the cost of training the model, accelerates its convergence, and enhances its generalization ability.


57.4.3 Experiment and Analysis

In this simulation experiment, the blade surface defects include cracks, blisters, spalling, etc. Some recognition results of the proposed blade defect recognition model are shown in Fig. 57.6, where Fig. 57.6a, c, e are the original blade images and Fig. 57.6b, d, f are the recognition results of the model. In the recognition results, the rectangular frame marks the location of a blade defect, the label above the frame gives the defect category and the probability of belonging to that category, and the mask area outlines the defect. In order to improve the accuracy of the blade defect recognition model, reduce the computation for the defect rectangular boxes, and prevent the model from overfitting, the probability threshold of the Mask-rcnn rectangular frame is set to 0.8; that is, rectangular frames whose defect probability is less than 0.8 are automatically discarded.

Fig. 57.6 Partial recognition results of the blade defect recognition model (a, c, e: original blade images 1–3; b, d, f: corresponding recognition results)


Table 57.2 Comparison of test results

          TPR (%)   FPR (%)   LPR (%)
Test 1    88.64     0         11.36
Test 2    93.18     0         6.81
Test 3    86.36     0         13.63

It can be seen from Fig. 57.6 that the blade defect recognition model proposed in this paper can accurately identify blade surface defects. In Fig. 57.6a, the area with corrosion defects on the blade surface was automatically marked by the model with a rectangular frame, and the probability of the defect being corrosion was 0.999; however, the Mask contour of the corrosion area deviates in places, mainly because the training set in this paper is limited and the model could not be trained to its optimum. In Fig. 57.6c, two slight crack defects on the blade surface were accurately identified, with recognition probabilities of 0.964 and 0.968, respectively. In Fig. 57.6e there is an obvious crack defect on the blade surface; the output of the model in Fig. 57.6f shows that the defect is judged to be a crack and its Mask contour is accurately delineated. In order to verify the defect recognition performance of the proposed model, this paper uses the true positive rate (TPR), false positive rate (FPR), and loss positive rate (LPR) commonly used in target detection experiments as evaluation indexes, and the model was tested three times on the test set. The performance indexes are shown in Table 57.2. As shown in Eq. (57.5), TPR is the ratio of the number of detected positive samples $n_P$ to the actual total number of targets $n$ in the test samples:

$$TPR = \frac{n_P}{n} \qquad (57.5)$$

As shown in Eq. (57.6), FPR is the ratio of the number of misjudged targets $n_{FP}$ to all detected positive samples $n_P$:

$$FPR = \frac{n_{FP}}{n_P} \qquad (57.6)$$

As shown in Eq. (57.7), LPR is the ratio of the number of undetected targets $n_L$ to the total number of targets $n$:

$$LPR = \frac{n_L}{n} \qquad (57.7)$$
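The three evaluation indexes can be computed directly from the detection counts; the helper below mirrors Eqs. (57.5)–(57.7), and the example call uses assumed counts (not given in the chapter) that approximately reproduce the Test 2 row of Table 57.2.

```python
def detection_rates(n_detected, n_false, n_missed, n_targets):
    """Return (TPR, FPR, LPR) in percent, following Eqs. (57.5)-(57.7)."""
    tpr = 100.0 * n_detected / n_targets                        # Eq. (57.5)
    fpr = 100.0 * n_false / n_detected if n_detected else 0.0   # Eq. (57.6)
    lpr = 100.0 * n_missed / n_targets                          # Eq. (57.7)
    return tpr, fpr, lpr


# Assumed counts (not given in the chapter): 44 defect targets, 41 detected,
# 0 false detections, 3 missed -- roughly the Test 2 row of Table 57.2.
print(detection_rates(41, 0, 3, 44))   # approximately (93.18, 0.0, 6.82)
```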

The results according to Table 57.2 are as follows:
(1) The average TPR reaches 89.39%, indicating that the model can identify most of the blade surface defects.
(2) The FPR is 0, which means that the model produces no false defects during recognition. This is because the rectangular-frame probability threshold of Mask-rcnn is set to 0.8, so low-probability recognition results, which may include rectangular frames falsely recognized as defects, are automatically filtered out.
(3) The average LPR over the three tests is 10.61%, indicating that the model still misses some blade defects. This is mainly due to the limited number of positive samples available from normal industrial scenarios and the insufficient generalization ability of the model, which leads to under-reporting.

57.5 Conclusion

In this paper, a method for identifying surface defects of wind turbine blades based on a deep learning model is proposed. Under the Keras deep learning framework, the method is based on the Mask-rcnn algorithm; the convergence of the model is accelerated by pre-training on the COCO dataset, automatic recognition of defects in blade images is realized, and the method is verified on images collected by UAV inspection. Rapid and intelligent operation and maintenance of wind turbine blades remains a challenging problem. Owing to the diversity of blade defect features, establishing an open and effective blade defect dataset is one direction for addressing the poor generalization of recognition models.

References
1. Li, G., Hu, H.L., Li, Y.N.: Review on recent development of technology of monitoring & diagnosis of wind turbine generator blades. Ind. Instrum. Autom. 05, 16–20 (2017)
2. Chen, C.Z., Zhao, X.G., Zhou, B., Gu, Q.: Study on extracting crack fault feature of wind turbine blades. Proc. Chin. Soc. Electr. Eng. 33(02), 112–117 (2013)
3. Proceedings of The Chinese Society for Electrical Engineering
4. Liu, J., Huang, X.X., Liu, X.L.: Icing prediction of wind turbine blade based on stacked auto-encoder network. J. Comput. Appl. 39(05), 1547–1550 (2019)
5. Fausto Pedro, G.M., Andrew Mark, T., Jesús María, P.P., Mayorkinos, P.: Condition monitoring of wind turbines: techniques and methods. Renew. Energy 46, 169–178 (2012)
6. Takoutsing, P., Wamkeue, R., Ouhrouche, M., Slaoui-Hasnaoui, F., Tameghe, T.A., Ekemb, G.: Wind turbine condition monitoring: state-of-the-art review, new trends, and future challenges. Energies 7(4), 2595–2630 (2014)
7. Yan, J.Z., Zhou, J., Ji, C.N., et al.: Preliminary study on cracking defect detection of the fan blades based on aerodynamic noise. Instrumentation Anal. Monit. 02, 1–6 (2018)
8. Fausto Pedro, G.M., Tobias, A.M., Jesús María, P.P., Mayorkinos, P.: Condition monitoring of wind turbines: techniques and methods. Renew. Energy 46, 169–178 (2012)
9. Tang, J., Soua, S., Mares, C., et al.: An experimental study of acoustic emission methodology for in service condition monitoring of wind turbine blades. Renew. Energy 99, 170–179 (2016)


10. Oh, K.Y., Park, J.Y., Lee, J.S., et al.: A novel method and its field tests for monitoring and diagnosing blade health for wind turbines. IEEE Trans. Instrum. Meas. 64(6), 1–1 (2015)
11. Yang, R., He, Y., Zhang, H.: Progress and trends in nondestructive testing and evaluation for wind turbine composite blade. Renew. Sustain. Energy Rev. 60, 1225–1250 (2016)
12. Chen, L., Xu, G., Liang, L., Zhang, Q., Zhang, S.: Learning deep representation for blades icing fault detection of wind turbines. In: 2018 IEEE International Conference on Prognostics and Health Management (ICPHM), Seattle, WA, pp. 1–8 (2018)
13. Wang, L., Zhang, Z.: Automatic detection of wind turbine blade surface cracks based on UAV-taken images. IEEE Trans. Industr. Electron. 64(9), 7293–7303 (2017)
14. Zhang, J., Chen, B., Zhou, M., Lan, H., Gao, F.: Photoacoustic image classification and segmentation of breast cancer: a feasibility study. IEEE Access 7, 5457–5466 (2019)
15. Cao, P., Zhang, S., Tang, J.: Preprocessing-free gear fault diagnosis using small datasets with deep convolutional neural network-based transfer learning. IEEE Access 6, 26241–26253 (2018)

Chapter 58

TCAS System Fault Research and Troubleshooting Process

Xiaomin Xie, Renwei Dou, Kun Hu, Jianghuai Du, and Yueqin Wang

Abstract The function of the TCAS system is to identify other TCAS-equipped aircraft in its surveillance airspace, determine whether an aircraft poses a potential threat by calculating the possible flight paths of the other aircraft, and issue visual and voice advisories to remind the crew to take any necessary maneuvers. All of these tasks are completed automatically without crew intervention. The advisory information issued by TCAS is divided into two levels: the traffic advisory (TA) and the resolution advisory (RA). Traffic advisories remind pilots to pay close attention to the existence, movement status, and trend of intruding aircraft around the own aircraft, while resolution advisories require pilots to immediately take the recommended vertical avoidance maneuver to avoid an imminent dangerous approach and ensure the safety of the aircraft.

58.1 Introduction

The traffic alert and collision avoidance system (TCAS) is the aircraft collision avoidance system. With the increasing number of modern aircraft, the density of aircraft in the airspace increases, which greatly increases the possibility of unsafe approaches between aircraft [1–5]. In order to avoid dangerous approaches or collisions between aircraft, the TCAS system is installed on all kinds of civil passenger aircraft.


Fig. 58.1 TCAS system structure diagram (upper and lower directional antennas, ATC/TCAS controller, ATC/Mode S transponder, TCAS computer, aural warning, resolution advisories (RA) on the EFIS EADI, traffic advisories (TA) on the EFIS EHSI)

58.2 Introduction of TCAS System Structure

The TCAS system is shown in Fig. 58.1 and mainly consists of the following components:
(1) TCAS processor (or computer): the core component of the TCAS system. Its main functions are to send out interrogation signals, receive reply signals from intruding aircraft, receive digital and discrete signals from other systems of the own aircraft, perform calculations based on local and received data, and generate traffic advisories and resolution advisories.
(2) Antennas: the system is equipped with upper and lower antennas. Each antenna is a four-element phased-array antenna connected to the TCAS processor (or computer) by four coaxial cables [6, 7]. It transmits the interrogation signals of the TCAS processor and receives the reply signals generated by intruding aircraft, passing them to the TCAS processor.
(3) TCAS/ATC control box: the cockpit man–machine interface, mainly providing TCAS mode selection and transponder code selection.
(4) EFIS system: displays the visual information of the TCAS system.
(5) Audible warning system: generates the audio information of the TCAS system.

58.3 Research on Function Module of TCAS Processor

The TCAS processor (or computer) is the core component of the TCAS system. Its main functions are to send out interrogation signals, receive reply signals from intruding aircraft, receive digital and discrete signals from other systems of the own aircraft, perform calculations based on local and received data, and generate traffic advisories and resolution advisories.


The TCAS processor includes a main board and a series of expansion boards such as the power board, memory board, data processor (DP) board, and video memory board. This paper focuses on component-level deep maintenance of the data processor of the TPA-81A TCAS processor, formulates a troubleshooting scheme, and establishes an expert system. The TPA-81A TCAS data processor board can be divided into nine modules: CPU, CPU coprocessor, multifunction peripheral (MFP), DP control bus signals, fast address/data buffer, slow address/data buffer, external control buffer, real-time clock, and self-test display. The specific analysis is as follows:

The CPU controls the transmission of TCAS, monitors the processed return data, and calculates the traffic environment conditions to provide video and audio consultation. CPU cooperates with devices on DP module to calculate through internal data exchange, address, and control bus, and it and devices on two memory boards calculate through buffered "fast" (70 ns) data, address, and control bus integrated into unit interconnection line. The calculation with other processor modules is completed by buffering "slow" (120 ns) data, address and control bus. Buffered slow data and address bus are also used to calculate the real-time clock with Greenwich Mean Time (GMT), and the slow data bus can drive the self-check display of DP. The functions to be executed and the order of execution are controlled by the CPU operating program of the ROM included on the MEM. The CPU program is programmed into ROM during the manufacturing process, and the processor program memory is downloaded by using the data downloader designed according to ARINC603 or 615 specification. A CPU receives a 32 MHz master clock signal from a DP master clock source. The main clock source of DP consists of crystal oscillator and buffer. The oscillator circuit also generates clock signals of other circuit boards of the digital data coprocessor. The 32 MHz CPU clock signal provides basic timing for CPU, which is divided by two internally to generate internal CPU clock signal for executing instructions. The address bus provides an output signal composed of the physical memory address on the I/O port address, and the address allowed output is (BE0—BE3), which directly indicates that those bytes in the 32-bit data bus (D0-D31) are related to the current transmission. A total of three bus cycle definition signals are generated: W/R distinguishes read and write cycles; D/C distinguishes data and control loop; M/IO distinguishes between memory and I/O cycle. These signals control data transmission throughout the DP module. The CPU receives a READY input signal from the waiting state generator, indicating that the current bus cycle (read, write or interrupt) has been completed and relevant data has been received or provided through peripheral devices with addresses on the internal address bus. A CPU receives an interrupt request (int) from a programmable interrupt controller on the MFP. Run two interrupt receiving CPUs to receive interrupt requests (int) from the programmable interrupt controller on MFP, and run two interrupt receiving bus cycles to respond to these interrupt requests. then, the




During these interrupt-acknowledge cycles, the interrupt controller returns an 8-bit interrupt vector (on D0–D7) that identifies the interrupt source. The CPU also receives an interrupt from the power-interrupt latch when the power supply is insufficient; this interrupt (NMI) cannot be masked by software and is serviced immediately in a non-maskable interrupt cycle. In addition, the CPU receives a data-processing request from the coprocessor and a BUSY input indicating that the coprocessor is executing an instruction, and the coprocessor asserts an ERROR input to the CPU when the previous coprocessor instruction produced an error. The 5 V monitor circuit and the watchdog timer are connected to the RESET input of the CPU: the 5 V monitor resets the CPU when the +5 V supply goes out of tolerance, while the watchdog timer is toggled regularly under software control; if the toggle signal (WDT) is absent for 1 s, a reset signal is generated to indicate a failure.

(2) The digital data coprocessor extends the CPU with arithmetic instructions and a large number of built-in arithmetic and transcendental functions such as tangent, sine, and cosine. The coprocessor effectively extends the register set and the CPU instruction set for the existing data types and also adds several new data types. It generates data-request, busy-status, and error-status outputs to the CPU, communicates with the CPU over the bidirectional internal data bus (D00–D31), receives the address-status signal (ADS) from the CPU over the control bus, and sends an output signal (NPXRDY) to the wait-state generator indicating that the previous read/write cycle has ended; it also receives the READY output of the wait-state generator when other peripheral devices finish their read/write operations. The CPU clock times the coprocessor's bus-control logic, and the coprocessor's reset input is connected to the CPU reset pin. Address line A31 from the CPU indicates that the purpose of the current bus cycle is to communicate with the coprocessor, and A2 indicates whether the CPU is transferring code or data.

(3) The MFP integrates the functions of a DMA controller, a programmable interrupt controller (PIC), a programmable interval timer, and an internal bus controller on a single integrated circuit. The PIC provides interrupt control for the microprocessor: it receives hardware interrupts from the other processor modules and from on-board peripherals, including the IOP module, the DPIO module, the GMT real-time clock, the power-interrupt latch (NMI), the video processor, the shared memory, the global DMA controller, and the interval timer. During initialization the CPU executes a series of write instructions to the interrupt controller; the data loaded by these instructions programs the controller's control format, interrupt-priority assignment, and interrupt-vector addresses. When an interrupt request arrives, the interrupt controller determines its priority and interrupts the CPU accordingly; on receiving the request, the CPU runs two interrupt-acknowledge cycles and then reads D0–D7 on the data bus to determine the interrupt source.


When a device other than the CPU requires control of the bus, the MFP sends a HOLD request to the CPU. When the CPU is ready to relinquish control, it acknowledges the HOLD request, places all of its other outputs and bidirectional pins in the high-impedance state, asserts HLDA, and gives up the bus to the requesting device. The MFP's programmable interval timer section contains three programmable counters that use a 1 MHz clock as their time base; these counters are programmed via the PIC to generate the outputs TOUT1 and TOUT2, which serve as timing interrupts for processing the periodic TCAS data, and one of the timer outputs provides a 5 ms real-time-clock square-wave interrupt used to update the timing counters. The DMA controller section of the MFP allows the DP to perform I/O and module-memory transfers; it improves system performance by allowing external devices to transfer information directly to and from system memory, to perform memory-to-memory transfers, and to be reloaded dynamically under program control. The DMA controller provides eight high-speed channels that can be connected to the peripheral circuits that need them.
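To make the bus-cycle description in item (1) concrete, the following minimal sketch shows how the three bus-cycle definition signals and the byte enables could be decoded by test or simulation software. The decode table and the helper function are illustrative assumptions based only on the signal names in the text, not the actual TPA-81A logic.

```python
# Minimal sketch: decoding the bus-cycle definition signals named in item (1).
# The (M/IO, D/C, W/R) combinations below are an assumed mapping, not taken
# from the TPA-81A documentation.

CYCLE_TYPES = {
    (1, 1, 0): "memory data read",
    (1, 1, 1): "memory data write",
    (1, 0, 0): "code/control read",
    (0, 1, 0): "I/O read",
    (0, 1, 1): "I/O write",
    (0, 0, 0): "interrupt acknowledge",
}

def decode_bus_cycle(m_io: int, d_c: int, w_r: int, byte_enables: int) -> str:
    """Describe the current bus cycle.

    byte_enables is treated as a 4-bit mask (BE0-BE3) selecting which bytes
    of the 32-bit data bus (D0-D31) take part in the transfer.
    """
    kind = CYCLE_TYPES.get((m_io, d_c, w_r), "undefined cycle")
    active_bytes = [i for i in range(4) if byte_enables & (1 << i)]
    return f"{kind}, active byte lanes {active_bytes}"

# Example: a 32-bit memory data read with all four byte lanes enabled.
print(decode_bus_cycle(m_io=1, d_c=1, w_r=0, byte_enables=0b1111))
```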

58.4 TCAS Processor Fault Test Flow

58.4.1 Test System Design Scheme

The TPA-81A TCAS processor consists of a motherboard and a series of expansion boards, including the power board, memory boards, data processor board, and video memory board. After investigation, a preliminary scheme was drawn up for deep board-level maintenance of four boards of the TCAS processor (the video memory board, memory module boards I and II, and the data processor board). Once the maintenance equipment for these four boards has been successfully developed, the remaining boards will be studied further and corresponding solutions formulated. Figure 58.2 shows the overall framework of the test system. During a test, the circuit board under test is inserted into the corresponding slot of the signal adaptation unit and powered by the power supply board; the test software on the computer then controls the generation of the board's peripheral signals and, at the same time, evaluates the signals returned through the signal adaptation unit in order to determine whether a fault exists and what causes it. The test system mainly comprises the computer test software, the signal generation and acquisition unit, and the signal adaptation unit. It can determine whether a board is serviceable, locate the fault area and even individual components on the board, and provide a basis for maintenance and deep repair; a minimal sketch of this test-and-judge loop is given below.
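The sketch below outlines, under stated assumptions, how the computer test software might drive peripheral signals onto the board and judge the signals returned through the signal adaptation unit. The TestVector structure and the generator/acquisition drivers are hypothetical placeholders, not the actual instrument drivers of the test system.

```python
# A minimal sketch of the test-and-judge loop described above, assuming very
# simple driver interfaces for the signal generation and acquisition units.

from dataclasses import dataclass

@dataclass
class TestVector:
    name: str       # which group of peripheral signals is exercised
    stimulus: int   # bit pattern driven onto the board inputs
    expected: int   # bit pattern expected back from the board outputs

def run_board_test(generator, acquisition, vectors):
    """Apply each vector, compare the returned signals, and list faults."""
    faults = []
    for vec in vectors:
        generator.apply(vec.stimulus)      # generate board peripheral signals
        observed = acquisition.read()      # signals returned via the adaptation unit
        if observed != vec.expected:
            # XOR highlights the disagreeing bits, narrowing the fault area
            faults.append((vec.name, observed ^ vec.expected))
    return faults

# Stand-in drivers (assumptions) so the sketch runs end to end.
class FakeGenerator:
    def __init__(self, board): self.board = board
    def apply(self, stimulus): self.board["in"] = stimulus

class FakeAcquisition:
    def __init__(self, board): self.board = board
    def read(self): return self.board["in"]   # a healthy buffer echoes its input

board = {"in": 0}
vectors = [TestVector("address buffer", 0b1010, 0b1010)]
print(run_board_test(FakeGenerator(board), FakeAcquisition(board), vectors))  # [] -> pass
```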


Fig. 58.2 Basic block diagram of the test system (monitor, computer, signal generator, signal acquisition unit, signal adaptation unit, power supply, and circuit board to be tested; the computer issues control commands and receives the test results and fault information)

58.4.2 Troubleshooting Process of the TCAS Processor

The hardware of the TCAS fault test system includes an air pump, a needle-bed fixture, a data acquisition interface board, a 5 V power supply, a data acquisition card, and an industrial computer. The whole test system is shown in Fig. 58.3, and its parts are as follows:

(1) Air pump: provides the pneumatic power source for the needle-bed fixture.
(2) Needle-bed fixture: test wires are led out from 397 test points of the circuit board.
(3) Data acquisition interface board: adapts the acquisition card to the board under test and implements the chip-select function for each chip on the board.
(4) Data acquisition card (PCI-1751): sends data to the inputs of the board under test and receives data from its outputs.
(5) Industrial computer: the control unit of the whole detection system. By running the test program, it compares the signals collected from the chip under test with the truth table of the corresponding chip in the chip library and returns the comparison result (a sketch of this comparison is given after the troubleshooting steps below).

The troubleshooting process of the TCAS processor is as follows:


(1) Use the existing TCAS fault test system to test the TCAS processor board and locate the fault at the component level.
(2) The test system reports the fault status of the board and, for a faulty board, gives the corresponding fault information.


Fig. 58.3 Physical diagram of the test system


(3) According to the test report of the test system, repair, replace, or reprogram the faulty components.
(4) Return the repaired board to the test system for retesting and check whether the test results meet the required standard.
(5) If the test fails, repeat the previous steps; otherwise, the test ends.
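The following sketch illustrates, under stated assumptions, how the truth-table comparison performed by the industrial computer (hardware item (5)) could drive the repair-and-retest loop of steps (1)–(5). The chip_library contents, the apply/read callbacks, and the repair hook are illustrative placeholders rather than the actual software of the test system.

```python
# Minimal sketch of the truth-table comparison and the repair-and-retest loop.
# Chip names, patterns, and callbacks are assumptions for illustration only.

# Truth table per chip: applied input pattern -> expected output pattern.
chip_library = {
    "U12_buffer": {0b0000: 0b0000, 0b1010: 0b1010, 0b1111: 0b1111},
    # ... one entry per chip reachable through the 397 needle-bed test points
}

def test_chip(chip_id, apply_inputs, read_outputs):
    """Compare a chip's observed behaviour with its truth table in the library."""
    bad_rows = []
    for pattern, expected in chip_library[chip_id].items():
        apply_inputs(chip_id, pattern)     # data sent via the acquisition card
        observed = read_outputs(chip_id)   # data read back from the board
        if observed != expected:
            bad_rows.append((pattern, observed, expected))
    return bad_rows                        # empty list -> chip behaves correctly

def troubleshoot(board_chips, apply_inputs, read_outputs, repair, max_rounds=3):
    """Steps (1)-(5): test, report, repair, then retest until the board passes."""
    faulty = {}
    for _ in range(max_rounds):
        report = {c: test_chip(c, apply_inputs, read_outputs) for c in board_chips}
        faulty = {c: rows for c, rows in report.items() if rows}
        if not faulty:
            return "board passed"
        for chip_id in faulty:             # repair, replace, or reprogram the part
            repair(chip_id)
    return f"still faulty after {max_rounds} rounds: {sorted(faulty)}"
```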

58.5 Conclusions

TCAS is an advanced airborne electronic system that does not depend on any ground-based electronic system and performs its collision-avoidance function autonomously. In this work, an industrial control computer serves as the host computer and, together with the signal generator, the adapter, and the TCAS system bus, realizes automatic component-level testing of the basic units of the aircraft TCAS system. This brings the social and economic benefits of the system into full play, helps improve the maintenance technology and efficiency of civil aviation electronic systems, promotes the development of the civil aviation industry, and constitutes the purpose and significance of this paper.

Fund Project Key Projects of the 2020 Excellent Talents Support Program in Colleges and Universities (gxyqZD2020055); Natural Science Research Key Project of Anhui Province Higher School (KJ2019A0991); Key Natural Research Projects of Anhui Vocational and Technical College in 2020 (azy2020kj04);


2021 Anhui Vocational and Technical College Quality Engineering Quality Improvement and Excellence Action Plan Special Project-Key Research Project of Education and Teaching (2021xjtz029).


Author Index

A Abe, Jair M., 3

B Bernardini, Flávio Amadeu, 3 BiaoJin, 249 Bi, Hongmei, 195

C Cao, Yang, 273 Cao, Yuhao, 289 Cestero, Julen, 239 Chang, Ying, 321 Chang, Yingxian, 491 Chen, Qingzuo, 451 Chen, Rong, 249 Chen, Wen, 563 Chen, Yuhang, 491 Cui, Yani, 321 Cui, Zheming, 95

D Dengpan, Li, 351 Diván, Mario José, 17 Domínguez, Johnny Alvarado, 17 Dongdong, Chen, 351 Dong, Qingquan, 491, 507, 515 Dou, Renwei, 585 Duan, Xiaole, 135 Duan, Zhenyu, 79 Du, Jianghuai, 585 Du, Xingyue, 207

Du, Yuxuan, 95

F Fan, Luping, 71 Fan, Minxing, 299 Fei, Qiang, 249 Fuentes, Rachel Pairol, 17

G Gao, Fengyin, 195 Gao, Yu, 403 Geng, Lin, 463 Gong, Chibing, 361 Gonnet, Silvio, 17 Guo, Gaicong, 229 Guo, Shuangshuang, 515

H Han, Jinxi, 463 Han, Shengya, 507 Han, Wenrui, 381 Huang, Xiubin, 523 Huang, Zhen, 507 Huang, Zhimin, 255 Hu, Bin, 283 Hu, Chao, 217, 563 Hu, Kun, 585

J Jia, Furong, 135 Jia, Jianguang, 463


Jiang, Huimin, 229 Jiao, Bin, 531 Jinqiu, Wang, 351

L Lan, Wenze, 255 Lei, Yuxin, 539 Liang, Bing, 395 Liang, Fangchi, 187 Liang, Zhichao, 169, 179 Li, Binbin, 531 Li, Bingjie, 423 Li, Dong, 491, 515, 531 Li, Jing, 463 Lima de, Luiz Antônio, 3 Li, Mengda, 169, 179 Lin, Hai, 273 Linping, Yao, 339 Li, Shuying, 555 Li, Tianyu, 403 Liu, Fujia, 435 Liufu, Yuliang, 289 Liu, Hongtian, 273 Liu, Juan, 499 Liu, Songjiang, 89 Liu, Songxian, 89 Liu, Tianyu, 79 Liu, Wei, 475, 555 Liu, Yimao, 369 Liu, Yuhao, 539 Li, Yang, 463 Li, Yiqing, 207 Li, Yuanwei, 313 Li, Ziqian, 523 Lu, Gang, 451 Lu, Zhuoran, 105

M Maiza, Mikel, 239 Martinez, Angel Antônio Gonzalez, 3 Ma, Rui, 59 Ma, Ying, 59 Ma, Yongbo, 499 Ma, Yuanfei, 29 Mengda, Li, 339

N Nakamatsu, Kazumi, 3 Ni, Nan, 217 Noman, Sohail M., 283

P Peng, Bo, 499 Peng, Wenhao, 403 Peng, Xinjie, 531 Peng, Yan, 369 Q Qian, Long, 403, 531 Qi, Ziying, 331 Quartulli, Marco, 239 R Ren, Jia, 321 Roldán, Luciana, 17 S Sabetzadeh, Farzad, 229 Sakamoto, Liliam Sayuri, 3 Shang, Junlin, 539 Shi, Qingzi, 163 Shuang, Gu, 351 Song, Chao, 273 Song, Guangzhao, 451 Souza de, Jonatas Santos, 3 Souza de, Nilson Amado, 3 Suescún, Elizabeth, 239 Sun, Zhen, 499 T Tai, Yonghang, 475 Tang, Bingling, 207, 451 Tang, Jiaxuan, 163 Tang, Lian Yao, 249 Tang, Liyao, 29 Tang, Xilang, 283 Tan, Han, 483 Tan, Hanhong, 255, 331 Tian, Bing, 507 Tian, Fang, 483 V Vegetti, Marcela, 17 Velásquez, David, 239 W Wang, Dong, 573 Wang, Dongjun, 273 Wang, Guosheng, 395 Wang, Jianhao, 283

Wang, Jianjun, 207, 451 Wang, Jinguo, 411 Wang, Shuai, 411 Wang, Tao, 95 Wang, Xiaoyi, 523 Wang, Xiuchun, 499 Wang, Yabin, 411 Wang, Yueqin, 585 Wang, Zhigang, 531 Wang, Zihan, 299 Wu, Chuang, 283 Wu, Guofang, 435 Wu, Hongwei, 273 Wu, Wenqing, 361 Wu, Ying, 59 Wu, Yue, 115

X Xia, Dong, 59 Xiaoming, Ren, 351 Xiao, Tianshuo, 307 Xie, Xiaomin, 585 Xubin, Zheng, 339 Xue, Yong, 45 Xu, Hao, 515 Xu, Kehu, 395 Xv, Shuquan, 435

Y Yang, Peng, 71 Yang, Xiaojuan, 435

Yang, Xiyun, 573 Yang, Yang, 369 Yan, Lijing, 249 Yan, Yuqiao, 135 Yao, Juan, 483 Yao, Linping, 169, 179 Yao, YaJuan, 483 Yin, Qilin, 507 Yu, Hao, 491 Yu, Jun, 523 Yu, Yunhan, 135 Yu, Zhentao, 125 Z Zeng, Lingli, 523 Zeng, Zhicheng, 163 Zhang, Chao, 475 Zhang, Cheng, 483 Zhang, Lisheng, 187, 423 Zhang, Yanfeng, 573 Zhang, Yue, 515 Zhao, Ming, 451 Zhao, Xuejun, 195 Zheng, Aoyu, 423 Zheng, Honghao, 451 Zheng, Mingfa, 187, 423 Zheng, Ting, 369 Zheng, Xubin, 169, 179 Zhichao, Liang, 339 Zhong, Haitao, 187 Zhou, Libing, 263 Zhou, Zhiqiang, 403, 531

Z Zeng, Lingli, 523 Zeng, Zhicheng, 163 Zhang, Chao, 475 Zhang, Cheng, 483 Zhang, Lisheng, 187, 423 Zhang, Yanfeng, 573 Zhang, Yue, 515 Zhao, Ming, 451 Zhao, Xuejun, 195 Zheng, Aoyu, 423 Zheng, Honghao, 451 Zheng, Mingfa, 187, 423 Zheng, Ting, 369 Zheng, Xubin, 169, 179 Zhichao, Liang, 339 Zhong, Haitao, 187 Zhou, Libing, 263 Zhou, Zhiqiang, 403, 531